EE 1301: Introduction to Computing Systems
IoT Laboratory #3: Internet Connectivity
Flying (not so) High in the Cloud

Created by: David Orser, Kia Bazargan, and John Sartori

Many thanks to the students, teaching assistants, and faculty who work to continually improve this document. Together we make better labs! Please send comments and suggestions to orser@umn.edu

Copyright 2018

In the previous labs, you only used the wifi connection to program (flash) the microcontroller, after which the device was on its own (that's a lie, BTW). The device only interfaced with sensors and actuators directly connected to it, using the data from sensors to figure out what was going on in its environment (e.g., the room getting too warm) and responding by sending commands to actuators (such as a fan or an A/C unit to cool down the room).

In this lab, we are going to look into how we can use the internet to connect devices that are physically far apart. For example, you might want your vegetable garden to send you an email or text message when the water level in its water reservoir is getting too low. To fill the reservoir remotely, you could send a command to a web-connected water valve to open for one minute and fill the reservoir.

In this lab, you will learn how to communicate with and control your microcontroller remotely using the cloud. After completing this lab, you will be able to call functions on your microcontroller and read the values of variables in your program using the web. You will also be able to log data gathered by your microcontroller into an online Google spreadsheet and use events signalled by your microcontroller to trigger actions online.
**Prelab Reading Material**

Read up on cloud functionality in Particle's reference documentation. Specifically, you should review the following closely:

- Particle.variable()
- Particle.function()
- Particle.publish()
- Particle.subscribe()

**Lab Procedure**

**Registering Cloud Variables, Functions, and Events in Your Program Code and Accessing Them in the Cloud**

The internet of things (IoT) is powered by microcontrollers that can be queried and controlled by networked devices like cellphones and computers. In this section, we will learn how to program our microcontrollers to send data to the cloud, and how we can call functions on our microcontrollers using a networked device.

For the purposes of this lab, we can define the cloud as a location on the web where we can write data or read data that has been written. This space in the cloud is convenient because we can use it to communicate with our microcontroller remotely. For example, we can write data to the cloud with our microcontroller and then access the data with a networked device. Likewise, we can write data to the cloud with a networked device and access the data with our microcontroller.

**Cloud Variables**

One way our microcontroller sends data to the cloud is with a **cloud variable**. A cloud variable is just like a normal program variable, except that when a cloud variable is updated in the program running on your microcontroller, the microcontroller also updates the value of the variable in the cloud. This allows us to get data from an IoT device from far away. For example, I could query a web-connected thermostat to find out the temperature of my house when I am away on vacation.

Currently, four types of cloud variables are supported -- INT (for fixed-point / integer data), DOUBLE (for floating-point data), STRING (for character-based data), and BOOLEAN (for true / false data).
The example below shows how to declare three variables in a program and register them as cloud variables that will be written and updated in the cloud while the program is running. The usual place to register cloud variables is in the `setup()` function. The `Particle.variable()` function registers a cloud variable; it takes two parameters -- the name of the cloud variable being registered and the program variable the cloud variable is associated with.

```c
int integer_value = 0;
double double_value = 0.0;
char string_value[16] = "This is a string";

void setup() {
    // max length of a cloud variable name is 12 characters
    Particle.variable("intVal", integer_value);
    Particle.variable("doubleVal", double_value);
    Particle.variable("stringVal", string_value);
}
```

**NOTE:** Currently, only up to 20 cloud variables may be defined, and each **cloud variable name is limited to a maximum of 12 characters**. In other words, the space reserved for you in the cloud can only hold 20 variables, and the names of the variables cannot be longer than 12 characters. The name of a cloud variable can be the same as or different from its program variable name, as long as the name is 12 characters or fewer.

**Querying the value of a cloud variable**

Once cloud variables are written to the cloud by the microcontroller, they are associated with a unique URL and can be accessed by a networked device (e.g., computer, cellphone, etc.) by entering the URL in a web browser. The following URL queries the value of the cloud variable called stringVal in our example above. After entering the URL into a browser, the browser returns the value of the string, e.g., "This is a string".
```
https://api.particle.io/v1/devices/0123456789abcdef/stringVal?access_token=123412341234
```

NOTE that the URL contains two unique character strings that are used to securely identify one particular microcontroller, so that only you (or those you grant access to) can access and control your microcontroller through the cloud. The first string is the Device ID -- this is the name of your microcontroller. The next string is the access token -- this is like a password associated with your device for security purposes. Your microcontroller will only respond to requests made using the correct device ID and access token. In the example above, the device ID is "0123456789abcdef", and the access token is "123412341234". You can find the device ID of your device by clicking on the "Devices" tab in Particle Build, and you can find your access token by clicking on the "Settings" tab.

In general, the URL used to access the value of a cloud variable is formatted as follows:

```
https://api.particle.io/v1/devices/[DEVICE ID]/[CLOUD VARIABLE NAME]?access_token=[ACCESS TOKEN]
```

**EXERCISE 1**

- Create a program and corresponding circuit that uses your microcontroller and a TMP36 sensor to take temperature measurements. (HINT: You should have done something very similar in IoT Lab 2.)
- Add a cloud variable in your program that will update the temperature values read by your microcontroller in the cloud.
- Open a browser and type in a URL to read a temperature value that is being sampled by your microcontroller. Refresh the browser a few times to gather additional samples. See if you can make the value change (e.g., by heating up or cooling down the sensor).

**Cloud Functions**

Just as cloud variables are used to read and write data in the cloud, a **cloud function** can be used to call a function on the microcontroller using a networked device. This allows us to send commands to our IoT devices from far away.
For example, I could call a cloud function on a web-connected thermostat telling it to start heating up the house when I am on my way home.

Currently, your reserved space in the cloud allows you to register **up to 15 cloud functions**. The `Particle.function()` function registers a cloud function. It requires two arguments -- first, the name of the cloud function (a string), and second, a C function in your code. The C function must take **only one argument, a string**. The length of the string will be **limited to 63 characters**. Although the argument must be passed as a string, it can easily be converted to whatever data type(s) the program requires.

The example below shows how to register a cloud function using the function `Particle.function("nameInCloud", nameInProgram)`.

```c
enum thermostat_mode_t {
    COOL,
    OFF,
    HEAT,
};

thermostat_mode_t mode = OFF;

// If you are not familiar with the "enum" construct, the above code acts
// exactly like the following three lines:
//
// #define COOL 0
// #define OFF  1
// #define HEAT 2
//
// and defining an int variable "mode" that is initialized to OFF

void setup() {
    // register the cloud function
    // first argument is the name called from the web (12 chars or less)
    // second argument is the name of the cloud function in the program
    Particle.function("setMode", setModeFromString);
    Serial.begin(9600);
}

void loop() {
    // the loop function is called repeatedly, as long as the microcontroller is on
}

// this function automagically gets called upon a matching POST request
// Note that the return type must be int and the argument must be a String
int setModeFromString(String inputString) {
    if (inputString == "Cool") {
        mode = COOL;
        return 1;
    } else if (inputString == "Off") {
        mode = OFF;
        return 1;
    } else if (inputString == "Heat") {
        mode = HEAT;
        return 1;
    }
    return -1; // unrecognized mode string
}
```

Once a cloud function has been registered, it can be called using an HTML POST request. Unfortunately, POST requests cannot be generated with a URL alone (i.e., you can't just type something into the browser address bar).
Instead, we'll have to create a simple webpage of our own to issue a POST request. The following is a template to start from:

```html
<html>
<body>
<form action="https://api.particle.io/v1/devices/31003f00154f343339383037/setMode?access_token=cd5d28f3144dbae6724d9eefbd7c4ecdc493f3802" method="POST">
    <input type="radio" name="args" value="Heat">Set thermostat mode to HEAT<br>
    <input type="radio" name="args" value="Off">Set thermostat mode to OFF<br>
    <input type="radio" name="args" value="Cool">Set thermostat mode to COOL<br>
    <input type="submit" value="Do it!">
</form>
</body>
</html>
```

For now, copy and paste this text into a new text file called "test.html" on your desktop. Sometimes copying from a PDF file garbles text a bit, so make sure the file looks exactly like the text in the box above. Replace the Device ID and Access Token in the code with your own. Open the file in your web browser; it should look something like this:

![Webpage for controlling thermostat mode](image)

When you click on "Do it!", the cloud will initiate a request to your Photon to run the `setModeFromString` function. When the function completes, the cloud will relay the return value back to your web browser.

In the last section of this lab manual, you will learn more about how to write a webpage that dynamically displays variables and can be used to call cloud functions interactively.

**EXERCISE 2**

- Add a cloud function to your temperature measurement program that changes the value of a mode variable.
- Add an RGB LED to your microcontroller circuit that changes color when the mode variable changes. You should have at least three modes with distinct colors.
- Use the HTML form to change the mode on your microcontroller.

**Cloud Events**

Cloud events are another way that our microcontroller can communicate through the cloud. A cloud event is a way for the microcontroller to signal that something has happened.
For example, you could connect a motion sensor to the microcontroller and have it publish an event whenever motion is detected. The microcontroller publishes an event to the cloud using the `Particle.publish()` function. Anyone who subscribes to the event gets notified whenever the event is published.

Cloud events have the following properties:

- **name** (1–63 ASCII characters)
- **public/private** (default public)
- **optional data** (up to 255 bytes)

Anyone may subscribe to public events, but only the owner of a device will be able to subscribe to private events published by the device.

**NOTE** that a device cannot publish events beginning with a case-insensitive match for "spark" or "particle". Such events are reserved for officially curated data originating from the Cloud.

**Also NOTE** that currently, a device can publish events at a maximum rate of about 1 event per second. Trying to publish events faster than that may result in strange behavior from your microcontroller, since the Particle servers will automatically throttle your event publishing rate.

The examples below demonstrate how to publish events with and without data.

```c
if (digitalRead(MOTION_DETECTOR_PIN) == HIGH) {
    // publish an event to signal that motion has been detected
    Particle.publish("motion-detected");
}

// publish an event with data to indicate where motion was detected
if (digitalRead(FRONT_MOTION_DETECTOR_PIN) == HIGH) {
    Particle.publish("motion-detected", "FRONT DOOR");
}
if (digitalRead(BACK_MOTION_DETECTOR_PIN) == HIGH) {
    Particle.publish("motion-detected", "BACK DOOR");
}
```

As stated above, anyone (with permission) who subscribes to an event will be notified whenever the event is published. The `Particle.subscribe()` function is called within the `setup()` function to subscribe to a cloud event.
The first argument to `Particle.subscribe()` is the name of the event, and the second argument is the name of a handler function that will be called in the subscriber's code when the event is published. A handler function must return `void` and take two arguments, both of which are C strings (`const char *`). The first argument will be the full name of the published event. The second argument (which may be NULL) is any data that came along with the event.

```c
void motionHandler(const char *event, const char *data) {
    Serial.print("Motion was detected at the ");
    Serial.print(data);
    Serial.println(" door");
}

void setup() {
    // with this subscription, the motionHandler function will be called
    // whenever the motion-detected event is published
    Particle.subscribe("motion-detected", motionHandler);
    Serial.begin(9600);

    // only listen to events published by your devices: use the MY_DEVICES argument
    Particle.subscribe("event_name", motionHandler, MY_DEVICES);

    // subscribe to events from a single device by specifying the device's ID
    //Particle.subscribe("event_name", motionHandler, "55ff70064989495339432587");
}
```

NOTE: A device can register up to 4 event handlers. This means you can call `Particle.subscribe()` a maximum of 4 times in your code; after that it will return false.

NOTE: You can monitor cloud events at: https://console.particle.io/events

**An introduction to If This Then That (IFTTT)**

As we've seen in this course, a conditional statement is a very powerful tool in programming. If This Then That (IFTTT) is a website that specializes in creating conditional statements for various internet-connected APIs. It does this in a very user-friendly and intuitive way, allowing the user to control a huge array of online services without detailed knowledge of how those online services communicate.
A few useful examples of online services you can control with IFTTT are:

- Gmail
- Google Drive
- Particle (Photon)
- Instagram

IFTTT also supports certain functions embedded in your iOS or Android phone. For example, it can use GPS to detect when your phone enters a certain region. These advanced features require that your phone be running the IFTTT app and have certain background features enabled (like location services and notifications).

**Setting up an account at IFTTT.com**

1.) Go to IFTTT.com
2.) Click "Sign up"
3.) Enter your email and choose a password

That's it, you're done! Well, it's not quite that simple; IFTTT is just a middleman. We will need to set up links to your accounts at other services to fully utilize IFTTT.

**Creating your first applet**

1.) Go to "My Applets" and choose to create a new applet. You should see something like the following.

![If this then that icon]

2.) Click "this"
3.) Search for the "Weather Underground" channel. We'll now connect to the weather channel. This channel is relatively simple to connect because it only requires you to enter a fixed location (no personal or account information). Fill out the forms as requested.
4.) Fill out the information necessary to connect the Weather Underground channel and click continue.
5.) You will be prompted to "Choose a Trigger"; click "Tomorrow's weather report"
6.) Select a "Time of day" that is after you wake up but before you've left the house.
7.) Click "Create Trigger". Your applet should look something like this:

![Applet preview](image.png)

Click "that", search for "email", and select the Email service.

8.) Click "Send me an email"
9.) If desired, you can edit the message to be sent. You can insert "ingredients" into the recipe. This allows you to add data from the "this" element into the email. Ingredients are context-sensitive and fairly intuitive.
10.) Click "Create Action", then "Finish".
You should be returned to your "My Applets" page and should see a description of the applet you just created.

11.) Now, say we don't want to wait until tomorrow at the chosen time to test our applet. Click on the gear icon and edit your applet to trigger a few minutes from now. Save your changes.
12.) Continue on with the lab below; shortly, you should receive an email from IFTTT looking something like this:

![Email Example](email_example.png)

Now would be a good time to poke around IFTTT (by creating a couple of new applets) and look into the various services with which you can interact. Some of the more interesting channels relate to your cell phone (both iOS and Android are supported).

**Connecting your Photon to IFTTT**

1.) Create a new applet, select "this", search for "Particle", and select it.
2.) Click "Connect".
3.) Enter your Particle account information.
4.) It will prompt you to allow IFTTT to take control of all the Cybermen in range of your Photon; don't worry, and click OK.
5.) Click "Monitor your device status".
6.) Fill out the form with your device name and "Online".
7.) Click "Create Trigger".
8.) Click "that".
9.) Choose a channel of your choice.
    a.) An email is easy.
    b.) If you're feeling adventurous, install the IFTTT app on your iOS or Android phone, then select either the iOS Notifications or Android SMS channel.
10.) Click "Create Action" and "Create Recipe".
11.) Now, plug in (or reset) your Photon.
12.) If you click on your applet, you can see when the applet last ran and even choose to "Check now" for any activity.

In this section, we explored setting up IFTTT and several IFTTT channels. IFTTT can form a strong component in your upcoming project. It is well worth your time to poke around and explore what IFTTT is capable of, especially the cell phone features available through the IFTTT app (available in both the iOS and Android app stores). Good luck, and happy hacking.

**EXERCISE 3**

- Add a mode to your program called REDALERT.
- Modify your program so that an event called REDALERT is published every time your program enters REDALERT mode.
- Use IFTTT to send yourself an email whenever the REDALERT event is published.
- Use the cloud function you created earlier to put your microcontroller in REDALERT mode, and observe that you receive an email notification.

REMINDER: You can monitor cloud events in the console: https://console.particle.io/events

**Data Logging with Google Drive and IFTTT**

Logging data is one of the most useful features of internet-connected devices. Not only does it provide a record of what has happened with a device (a great debugging tool), it also expands the capabilities of the entire Internet of Things, allowing objects completely disconnected from your microcontroller to take action based on your microcontroller's observations.

In this section, we'll walk through setting up your microcontroller to dump temperature data to a Google Sheets spreadsheet. The focus will be on both the code needed in your microcontroller application and the recipe required on IFTTT. You are encouraged to supplement the temperature data in the example below with the data available from a sensor of your choice. Modify the example below to suit your needs.

**Creating a Cloud Event with Data**

We will use a cloud event to publish our data to the cloud. Include the following `Particle.publish()` call somewhere inside your `loop()` function:

```c
// change the name of the event to something unique
Particle.publish("orserTemp", sTemp);
```

"orserTemp" is the name of the event. Since this is a "public" event, anyone can subscribe to it, so it must be named uniquely. Replace "orser" with your x.500 ID to give your event a unique name. The "data" field of the event must be a string. "sTemp" is a string describing the temperature.
I defined it as follows:

```c
sTemp = String( ( data1 - 620 ) / 12.4 );
```

Make sure your event is published at most once every ten seconds (i.e., add a timer or `delay()` of at least 10000 ms). We don't want to upset anyone by spamming the servers. If you publish an event more than 4 times per second, you will be timed out for 4 seconds. A maximum long-term average rate of one event per second is supported by Particle.

Compile and flash the code you created to your Photon. It should result in event(s) you can see being published online at: https://console.particle.io/events

**Creating an IFTTT Recipe to Store the Data**

Once you have a cloud event occurring on a regular basis, an IFTTT recipe can be created to add a row to a spreadsheet for every event that occurs. Follow the instructions below.

1.) Go to the IFTTT website and choose to create a new applet.
2.) Click on "this".
3.) Search for "Particle" and select it.
4.) Click on "New event published".
5.) Fill out the form with your information.
6.) Click "Create Trigger" and then "that".
7.) Search for "Google Sheets" and select it.
    a.) At this stage, you may be required to link your Google account to your IFTTT account; please follow the instructions provided. (NOTE: Be warned that mobile browsers might have issues with this step.)
8.) Select "Add row to spreadsheet".
9.) The defaults for this form will work fine, but we suggest you change the "Drive folder path" to a more descriptive name than "events". (NOTE: There are many useful "ingredients" available for this form; explore them!)
10.) Click "Create Action", then review the applet and click "Finish".
11.) Wait about 20 seconds and then go find the new spreadsheet in your Google Drive!
It should look something like this:

![Automagic Google Drive Sheet Example](https://docs.google.com/spreadsheets/d/1ljOJ367YMptj0f1t)

Your completed Photon code may look something like the following.

**Full Example Code for Event Creation**

```cpp
int data0;
int data1;
int Temp;
String sTemp;
unsigned long Timer1;
unsigned long Timer2;
bool HeartBeatState = false;

void setup() {
    Timer1 = 0;
    Timer2 = 0;
    Serial.begin(9600);
    pinMode(D7, OUTPUT);
    digitalWrite(D7, HeartBeatState);
}

void loop() {
    if (Timer1 <= millis()) {
        Timer1 = millis() + 2000; // Once every 2 seconds, read our sensors
        HeartBeatState = !HeartBeatState;
        digitalWrite(D7, HeartBeatState);
        data0 = analogRead(A0);
        data1 = analogRead(A1);
        sTemp = String((data1 - 620) / 12.4);
        Serial.print(data0);
        Serial.print(",");
        Serial.print(data1);
        Serial.print(",");
        Serial.print(sTemp);
        Serial.println(":");
    }
    if (Timer2 <= millis()) {
        Timer2 = millis() + 5*60*1000; // Once every 5 minutes, publish
        Particle.publish("djoTemp", sTemp);
    }
}
```

NOTE: This code uses software "timers" as described in the Quick Lesson - Programming Constructs. They are not required, but they are extremely useful; please review that document if you have questions.

**Dynamic Webpages - View and Control Your Photon Online!**

Creating a webpage that interfaces with your microcontroller project allows you to control the microcontroller and read the values of program variables from anywhere you can access the web. Let's learn how to write a webpage that can read the value of a cloud variable and call a cloud function.

The basic language of the web is called hypertext markup language (HTML). HTML is made up of tags (text inside pointy brackets) and content. Tags typically structure the document and describe how to display the content. For example, the content could be text and a tag could specify that the text should be displayed in bold. Every HTML document starts with <HTML> and ends with </HTML>. In general, for every tag <TAG>, there is a matching closing tag </TAG>.
There are two basic fields inside an HTML document. The <HEAD> (header), which is optional, defines things like the title bar (<TITLE>) and metadata for search engines. The <BODY> contains the contents of the document.

Many HTML elements exist to make webpage contents look nice, but for this tutorial we will ignore most of these and stick to the basics. You can find links at the bottom of the document for further reading on this topic, if you want to improve the appearance of your webpages.

HTML was originally intended to describe the contents of a webpage and provide links to other webpages. HTML in its natural state doesn't deal well with information that changes (also referred to as dynamic webpages). One way to improve a webpage's ability to process data and change its output is by using a scripting language called Javascript. Anything inside <script> and </script> tags is processed by the web browser as a script rather than as HTML. Javascript is a programming language, like C++, and a script is a type of program, like the ones you write in C++.

The description of our webpage will start with the following code:

```html
<!DOCTYPE HTML>
<html>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript" charset="utf-8"></script>
<body>
```

The <script src="http://..."> line above simply loads a library of functions for use in this webpage (like a #include statement). You can simply copy and paste this HTML code at the top of the file for your webpage.

Let's learn how to display the value of Javascript variables in our webpage. The <SPAN> tag creates a blank field in a webpage that can be populated with text and updated dynamically. The text around the <SPAN> tags describes the text surrounding the field. For example, the first statement below will display "Current Temperature: _____° F", where the blank can be filled in by updating the value of the current_temp variable.
```html
Current Temperature:<span id="current_temp"></span>&deg; F<br>
Desired Temperature:<span id="desired_temp"></span>&deg; F<br>
```

The `<br>` tag at the end of each line creates a line break. This behaves just like the newline character (`\n`). The spans will eventually be accessed in Javascript by their IDs (current_temp and desired_temp).

The next item we will place in our webpage is a button. When the button is clicked, it will run a function called `start()`. The text on the button will read **Refresh Data**.

```html
<button id="connectbutton" onclick="start()">Refresh Data</button>
```

Now, let's see how to write data into our spans. We do this using Javascript: the `document` object's `getElementById()` method looks up an element by its ID, and assigning to its `innerHTML` property replaces that element's contents. Here's a simplified example of how `innerHTML` works:

```html
<script type="text/javascript">
document.getElementById("current_temp").innerHTML = "1234";
document.getElementById("desired_temp").innerHTML = "5678";
</script>
```

When embedded in the HTML above, this Javascript would produce the following webpage:

```
Current Temperature:1234° F
Desired Temperature:5678° F
```

However, we don't want the temperature to be a static number (like 1234 or 5678), but a dynamic number retrieved from our Photon. To do this, we need to replace the `<script>` above with one that does more work.

Now let's write the `start()` function that gets called when we push the button. Inside this function, we will issue GET requests to get the values of our cloud variables from the microcontroller. Remember that each cloud variable has a unique URL, and we can get the value of the variable by accessing the URL. To do this, `current_temp` and `desired_temp` must be declared as cloud variables in the microcontroller code. After issuing GET requests for the variables, we will take the values returned from the cloud and display them in our spans.
As discussed in Cloud Variables, you can get the value of a cloud variable by accessing the URL where the variable is stored in the cloud. Our GET requests will simply create the appropriate URL and use the URL to get the variable's value. Don't forget, every microcontroller has its own device ID and access token that must be used in the URL.

The code below appends several text fields together to create a URL called requestURL that contains the device ID, the access token, and the requested cloud variable name. Then, the URL is queried using the getJSON function. When the variable value is returned, it is placed into the appropriate span by assigning to innerHTML.

```html
<script type="text/javascript">
function start(objButton) {
    var deviceID = "YOUR_DEVICE_ID_GOES_HERE";
    var accessToken = "YOUR_ACCESS_TOKEN_GOES_HERE";
    var baseURL = "https://api.particle.io/v1/devices/";

    var varName = "current_temp";
    var requestURL = baseURL + deviceID + "/" + varName + "/?access_token=" + accessToken;
    $.getJSON(requestURL, function(json) {
        document.getElementById("current_temp").innerHTML = json.result;
    });

    varName = "desired_temp";
    requestURL = baseURL + deviceID + "/" + varName + "/?access_token=" + accessToken;
    $.getJSON(requestURL, function(json) {
        document.getElementById("desired_temp").innerHTML = json.result;
    });
}
</script>
```

Notice the </script> tag at the end of the HTML code above. This ends the Javascript script.

Now that we know how to get the values of cloud variables and put them in our webpage, let's see how to set the value of a variable on the Photon by calling a cloud function. We will do this using a simple HTML form that makes a POST request. The POST request calls a cloud function called set_temp and passes the form's input value (args) as the string argument of the function. Initially, the form's input field will show the text (50-90), reminding the user that the input temperature for the set_temp function should be between 50 and 90 degrees.
The text on the button will read **Set Temperature**. Since our webpage is now done, we can use the </body> tag to end the body of the webpage and the </html> tag to end the webpage. A form like the following, modeled on the setMode template from earlier (substitute your own device ID and access token), completes the page:

```html
<form action="https://api.particle.io/v1/devices/[DEVICE ID]/set_temp?access_token=[ACCESS TOKEN]" method="POST">
    <input type="text" name="args" placeholder="(50-90)">
    <input type="submit" value="Set Temperature">
</form>
</body>
</html>
```

HINT: You can view Javascript errors on the "Console" in Google Chrome by clicking the menu icon → More Tools → Developer Tools, then clicking on the >> and selecting Console.

While you may not understand all the details of HTML and Javascript, the good news is that it is relatively simple to copy existing HTML code, like the examples above, and modify it to suit your needs. Instead of writing the code from scratch, copy and paste existing code and modify it to work for the particular cloud variables and cloud functions in your program.

**EXERCISE 4**

- Create a Particle app and HTML webpage (with Javascript) that does the following:
  - Displays a temperature value sampled by your microcontroller every time you request a sample on the webpage.
  - Has a webpage input that allows you to change the mode on your microcontroller.
  - Indicates the mode of the microcontroller with an LED.
- Show your TA the functionality of your IoT thermostat. Sample some temperature measurements and change modes on your microcontroller using your webpage.

**NOTE:** You do not need to change modes automatically (i.e., it doesn't have to change modes based on the difference in temperature), unless you're really excited to do so. ;^)

HINT: You can either use a text input form or a radio button form to select and set the mode. The following webpage gives an example of how to create a radio button form.

**Lab Report**

Put together a brief report on your work in this lab. It should include the following:

- **Intro**: A one-paragraph introduction stating what was accomplished in the lab.
- **Section 1 (programming)**: Describe the code you created in this lab.
Add a table to show five temperature values sampled from the cloud. Along with each measurement, include a note to describe the conditions under which the measurement was sampled (e.g., room temperature, blowing on sensor, etc.).
- **Section 2 (IFTTT)**: Explain how you used IFTTT to receive email notifications when your microcontroller enters REDALERT mode. Also explain how you used IFTTT to log data to a spreadsheet. Include a table that shows the first five rows of the spreadsheet created by your microcontroller.
- **Section 3**: Show a screenshot of the webpage you created to read temperature values and change modes on your microcontroller. Briefly describe the functionality of your webpage.

**NOTE**: When “describing code”, it is important to break the code into sections and describe how and why you chose the method of implementation you used.

Further Reading

Basic information on HTML tags: http://www.99lime.com/_bak/topics/you-only-need-10-tags/

Lots of additional information on HTML: http://www.w3schools.com/html/default.asp

An interactive testbed for HTML Forms:

Javascript Tutorials: http://www.webmonkey.com/2010/02/get_started_with_jquery/
REMINDERS
- Exam 1 will be held **Monday, March 2, 2015**. Most of you will take the exam from 6:00-7:50PM in DCC 308.
- Students who have provided Prof. Goldschmidt with an accommodation letter requiring extra time will take the exam starting at 5:00PM in DCC 239.
- You **MUST BRING YOUR RPI ID** to the exam. Missing IDs will result in a 20-point penalty.

SOLUTIONS

Below are solutions to the practice problems. Please be aware that there may be more than one way to solve a problem, so your answer may be correct despite being different from ours.

Overview
- No calculators, books, electronics of any kind, etc.! You may bring a one-page, double-sided, 8.5” x 11” “crib sheet” with you. You may prepare this as you wish. Feel free to work in groups to prepare a common crib sheet. Of course, each of you must have your own copy during the exam.
- We will assume you know (perhaps via your crib sheet) the following mathematical functions, some of which are in the `math` module: abs, ceil, float, int, max, min, sqrt, trunc
- We will assume you know the following functions associated with the data of type `str`: len, +, * (by an integer), capitalize, replace, str, upper, and lower.
- Python syntax will be important on this exam. On exams later in the semester it will be less important, but not entirely ignored.
- Below are **many** sample questions, far more than will be on the exam. Solutions to most of the problems will be posted on the course website on Saturday, February 28. These posted solutions will **not** include problems involving output from Python — try those out yourself!
- Please note that your solution to any question that requires you to write Python code will be at most a few (5-7) lines long, and may be much shorter. Focus on writing short programs, as it will save you time during the exam.
- The exam questions will be closely related to the practice problems below and problems from the homeworks, labs, and lecture exercises.
- Remember that the exam is timed. You should be able to solve each programming problem in about 5-8 minutes. Time yourself when solving them, find out where you are spending too much time, and practice the related material.
• How to study?
  – First, make sure you know the syntax of all the different programming constructs we have learned. You cannot construct solutions if you do not know the building blocks. Practice them like you would a foreign language; memorize them. The best approach is to type them into the Python shell and try new variations of code. This will also help when you are trying to track down syntax errors.
  – Work through the sample questions, writing out solutions by hand (since you won’t have laptops at all for the exam!). Read solutions only as a last resort. Remember that reading a solution is much easier than actually writing one, and you are graded on the writing part.
  – You are encouraged to work with other students as you study, but ask yourself if you understand the questions and material enough to solve problems on your own. Try to replicate a solution you worked on with someone a little while later without looking at any solutions.
  – Review and re-do lecture exercises, lab, and homework problems.
  – Identify the problems that cause you difficulty and review lecture notes and background reading on these topic areas. Go to office hours and ask to review a concept you did not understand.
  – Use the Wing IDE / Python interpreter extensively to help you understand what is happening, but practice writing solutions out without using your laptop.
• Advance warnings:
  – We will not answer questions from students in the middle of the exam unless there is a mistake in the question itself. In other words, do not ask what a certain line of code means or if your answer is correct.

Questions

1. Identify all Python syntax errors in the following code.
In identifying each error, unlike what the Python interpreter does, assume the errors on the previous lines have been fixed.

```python
y = 5
x = y + 5*6 -    # ill-formed expression
z = x+y
if z > x
    print 'z is smaller'
else:
    print 'z is larger'
print 'Sqrt of y is', sqrt(y)
print 'Abs of x is ', abs(x1)
s = str(z)
s = s + 6
s = s * 6
print "String s is now' s"
```

Solution: There are six syntax errors, as highlighted below.

```python
y = 5
x = y + 5*6 -    # ill-formed expression
z = x+y
if z > x         # missing : at end of line
    print 'z is smaller'
else:
    print 'z is larger'
```

2. What is the exact output of the following Python code? Show the output to the right of the code. Also, what are the local variables in the given code?

```python
x=3
def do_something(x, y):
    z=x+y
    print z
    z *= z
    print z
    z += z * z
    print z

do_something(1, 1)
y=1
do_something(y,x)
```

Solution to the second part: z is the only local variable.

3. Assuming we type the following directly into the Python interpreter, please fill in the exact output:

```python
>>> x = 5
>>> y = 27
>>> x / y
>>> y - x*3
>>> z = x + y - 7 * 3 % 4
>>> z
>>> x = 14 - 3 ** 3 + 2 % 6
>>> print x
>>> y += y * len(str(y))
>>> y
>>> 19.5 / 5 / 3
>>> 1 + 2 * 3 - 4 / 5 + 6 * 7 % 8
>>> import math
>>> x = 'the answer is ' + ( str( math.sqrt( 81 ) ) * 3 )
>>> print x
>>> s = '3' + '4' '5'
>>> s
>>> print s
>>> int(s) / 10
>>> float(s) / 10
>>> 'aBdcSf'.lower().capitalize() + 'BdkgE'.capitalize().lower()
```

Solution: Please test them for yourself.

4. Assuming we type the following directly into the Python interpreter, please fill in the exact output:

```python
>>> x = 5
>>> y = 12
>>> z = y-x
>>> s = "Grail"
>>> x+y > z * len(s)
>>> z > len(s)
>>> u = "grail"
>>> u == s
>>> len(u) == len(s)
```

Solution: Please test them for yourself.

5. Write a short segment of Python code that asks the user for a positive integer, reads the value, and generates an output error message if the user has input a negative number or 0.
Solution:

```python
x = int( raw_input( 'Enter a positive integer ==> ' ) )
if x <= 0:
    print 'Error: the number is not positive'
```

6. In the United States, a car’s fuel efficiency is measured in miles driven per gallon used. In the metric system it is liters used per 100 kilometers driven. Using the values 1.609 kilometers equals 1 mile and 1 gallon equals 3.785 liters, write a Python function that converts a fuel efficiency measure in miles per gallon to one in liters per 100 kilometers and returns the result.

Solution: The solution is spread across several lines to make it easier to write and read. This is not necessary for the exam, but it is good practice.

```python
def convert( mpg ):
    km_per_mile = 1.609
    liters_per_gallon = 3.785
    miles_per_liter = mpg / liters_per_gallon
    liters_per_mile = 1 / miles_per_liter
    liters_per_km = liters_per_mile / km_per_mile
    return liters_per_km * 100
```

7. Write a function called `spam` that takes as input 4 numbers and returns a string listing all the numbers and their average as a float. For example:

```python
>>> s = spam(3, 10, 4, 2)
>>> s
"The average of 3, 10, 4, and 2 is: 4.75"
```

Solution: There are many ways to do this.

```python
def spam( n1, n2, n3, n4 ):
    avg = ( n1 + n2 + n3 + n4 ) / 4.0
    s = "The average of %d, %d, %d, and %d is: %.2f" %( n1, n2, n3, n4, avg )
    return s
```

8. Write a Python program that reads from the user the names of two people (on two separate lines of input) and prints out the shorter of the two names. If the two names have equal length, then print out the first name.

Solution:

```python
name1 = raw_input( 'Please enter the first name ==> ' )
name2 = raw_input( 'Please enter the second name ==> ' )
if len( name1 ) <= len( name2 ):
    print name1
else:
    print name2
```

9. What is the output of the following program?
```python
def spam(a1, b1, a2, b2):
    if (a1 == a2) and (b1 > b2):
        return 1
    else:
        return 0

def egg(a1, b1, a2, b2):
    if (a1 > a2) and (b1 == b2):
        return 0
    else:
        return 1

a1 = 3
b1 = 4
a2 = 6
b2 = 4
print spam(a2, b2, a1, b1)
print egg(a1, b1, a2, b2)
c = spam(a1, b2, a2, b1)
print c
c += egg(a1, b2, a2, b1)
print c
```

Solution: Please test them for yourself.

10. Write a Python function called `right_justify` that takes two strings and an integer, n. It should print the two strings on two consecutive lines. In the output, the strings should be right-justified and occupy at least n spaces. If either of the strings is longer than n then n should be increased to the maximum length of the two strings. As examples, the call

```python
right_justify( 'Bike', 'Baseball', 15 )
```

should output

```
           Bike
       Baseball
```

while the call

```python
right_justify( 'Bike', 'Baseball', 5 )
```

should output

```
    Bike
Baseball
```

Solution:

```python
def right_justify( s1, s2, n ):
    n = max( len( s1 ), len( s2 ), n )
    line1 = ' ' * ( n - len( s1 ) ) + s1
    line2 = ' ' * ( n - len( s2 ) ) + s2
    print line1
    print line2
```

11. Write a Python function that takes as input two strings, string1, string2. It should print the first string 3 times on each of three lines, separated by spaces. Next it should output a blank line and then it should print the second string 6 times, again separated by spaces. For example:

```python
chant("Let's go red!", "Fight!")
```

should output

```
Let's go red! Let's go red! Let's go red!
Let's go red! Let's go red! Let's go red!
Let's go red! Let's go red! Let's go red!

Fight! Fight! Fight! Fight! Fight! Fight!
```

Try to do this with code where the names string1 and string2 only appear in your function once each!

Solution:

```python
def chant( s1, s2 ):
    line1 = ( s1 + ' ' ) * 3
    print line1, '\n', line1, '\n', line1, '\n'
    print ( s2 + ' ' ) * 6
```

12. Suppose Shirley Ann Jackson is tracking the number of friends she has on Facebook.
Let the variable week1 represent the number of friends she has at the end of the first week, let week2 represent the number of friends she has at the end of the next week, and finally let week3 represent the number of friends she has at the end of the third week. Write a single Python expression that calculates the average change, including both up and down changes, of Shirley’s number of friends, and assigns the value to the variable change. For example if she has 100 friends in week 1, 150 in week 2, and 125 in week 3, then the total change is 75 and the average change is 75/2=37.5.

Solution:

```python
change = ( abs( week1 - week2 ) + abs( week2 - week3 ) ) / 2.0
```

13. Write a section of Python code that reads a string into variable mystr and then reads an integer into variable num. The code should then print out the value of mystr repeated num times in a line, and then repeat the line num times. Note: you may not use for loops (or while loops for that matter) on this problem. As an example, given mystr="Python" and num=4, your code should output:

```
PythonPythonPythonPython
PythonPythonPythonPython
PythonPythonPythonPython
PythonPythonPythonPython
```

Solution: In the solution below, the last line, the one that is commented out, has the advantage of not printing an extra line at the end.

```python
mystr = raw_input( 'Enter a string ==> ' )
num = int( raw_input( 'How many times ==> ' ) )
line = mystr * num + '\n'
print line * num
# print line * ( num - 1 ), mystr * num
```

14. Write a Python function that takes two strings as input and prints them together on one 35-character line, with the first string left-justified, the second string right-justified, and as many periods between the words as needed.
For example, the function calls

```python
print_left_right( 'apple', 'banana')
print_left_right( 'syntax error', 'semantic error')
```

should output

```
apple........................banana
syntax error.........semantic error
```

You may assume that the lengths of the two strings passed as arguments together are less than 35 characters.

**Solution:**

```python
def print_left_right( s1, s2 ):
    spaces = 35 - ( len( s1 ) + len( s2 ) )
    line = s1 + '.' * spaces + s2
    print line
```

15. Chris, a carpenter, takes $w_1$ weeks and $d_1$ days for job 1, $w_2$ weeks and $d_2$ days for job 2, and $w_3$ weeks and $d_3$ days for job 3. Assuming $w_1, d_1, w_2, d_2, w_3, d_3$ are all variables that have been assigned integer values, write a segment of Python code that calculates and outputs Chris’s average number of weeks and days per job. Output integer values, and do not worry about rounding numbers. For example, given

$w_1 = 1$  $d_1 = 6$  $w_2 = 2$  $d_2 = 4$  $w_3 = 2$  $d_3 = 6$

the output should be

```
average is 2 weeks and 3 days
```

**Solution:** The trick is that averages must be calculated based on converting to total days.

```python
avg_days = ( ( w1 + w2 + w3 ) * 7 + d1 + d2 + d3 ) / 3
print 'average is %d weeks and %d days' %( avg_days / 7, avg_days % 7 )
```

16. Write one line of Python code to calculate and output the average of the maximum and the minimum of four numbers, stored in the variables $u$, $x$, $y$ and $z$. For example, if the variables have the values 6, 5, 21, -3, then the code should output 9.0.

**Solution:**

```python
print 0.5 * ( max( u, x, y, z ) + min( u, x, y, z ) )
```

17. Write a Python function called `compare_date` that takes as arguments two lists of two integers each. Each list contains a month and a year, in that order. The function should return -1 if the first month and year are earlier than the second month and year, 0 if they are the same, and 1 if the first month and year are later than the second. Your code should work for any legal input for month and year.
Example calls and expected output are shown below:

```python
>>> compare_date( [10,1995], [8,1995] )
1
>>> compare_date( [5,2010], [5,2010] )
0
>>> compare_date( [10,1993], [8,1998] )
-1
```

Solutions:

```python
def compare_date( x, y ):
    if x[1] > y[1]:
        return 1
    elif x[1] < y[1]:
        return -1
    elif x[0] > y[0]:
        return 1
    elif x[0] < y[0]:
        return -1
    else:
        return 0
```

Second version, combining the logic:

```python
def compare_date( x, y ):
    if x[1] > y[1] or ( x[1] == y[1] and x[0] > y[0] ):
        return 1
    elif x[1] < y[1] or ( x[1] == y[1] and x[0] < y[0] ):
        return -1
    else:
        return 0
```

18. Assuming we type the following directly into the Python interpreter, please fill in the exact output of the last command, making it clear what is your answer (as opposed to scratch work). Remember, there are no syntax errors here.

19. Assuming we type the following directly into the Python interpreter, please fill in the exact output, making it clear what is your answer:

```python
>>> t = ' ab c-
>>> len(t)
>>> x = '32'
>>> x * len(x)
>>> int(x) / len(x)
>>> a = [ 54, 'abc', "67.5", 12 ]
>>> print a[1]
>>> s = a[2]
>>> print s[-1]
>>> names = [ 'graham', 'john', 'terry', 'terry', 'eric', 'michael' ]
>>> a = str( len( names[0] ) ) + '2'
>>> b
```

Solution: Please test them for yourself.

20. Assume v is a list containing numbers. Write Python code to find and print the highest two values in v. If the list contains only one number, print only that number. If the list is empty, print nothing. For example, if we assigned

```python
v = [ 7, 3, 1, 5, 10, 6 ]
```

then the output of your code should be something like

```
7 10
```

If we are given that

```python
v = [ 7 ]
```

then the output of your code should be

```
7
```

Solution:

```python
v.sort()
if len( v ) == 1:
    print v[0]
elif len( v ) > 1:
    print v[-2], v[-1]
```

21.
Clearly show the output from the following Python program:

```python
def thingamajig( a, b, c ):
    c = c + "x"
    if a < b:
        b = b - a
        c = c * ( b ** 2 )
    else:
        a = a * a
        c = c * ( ( 3 / 2 ) * a )
    print "msg:", c

thingamajig( 2, 3, "Hola" )
thingamajig( 2, 2, "Bonjour" )
thingamajig( 2, 1, "Ciao" )
```

Solution:

```
msg: Holax
msg: BonjourxBonjourxBonjourxBonjourx
msg: CiaoxCiaoxCiaoxCiaox
```

22. The following Python code is supposed to calculate and return the surface area of a cylinder by adding the area of the top and bottom circles to the area of the face of the cylinder (lateral area). There is at least one syntax error and at least one semantic error in this code. Identify each error in the code by rewriting the line and say whether it is a syntax error or a semantic error.

```python
from math import pi

def surface_area(radius, height)
    # Calculate base area for each end of the cylinder
    base_area = (pi * radius)**2
    # Calculate lateral area
    lateral_area = 2*height*radius*pi
    # Calculate surface area
    surface_area = lateral_area + 2 base_area
    return surface_area
```

Solution:

```python
def surface_area(radius, height):                 # Syntax error: missing : at end of line
    base_area = pi * radius ** 2                  # Semantic error: exponent on the wrong term
    surface_area = lateral_area + 2 * base_area   # Syntax error: missing *
```

23. This question has two parts. First write a function called longest_first which has two strings as parameters. The function should print two lines of output. The first line should be the longest word, with '* ' before it and ' *' after it. The second line should be the shortest word, also framed, with as many '!' as needed to fill the space. For example, the call

```python
longest_first( 'car', 'butterball' )
```

should print

```
* butterball *
* car!!!!!!! *
```

You may assume the words are not the same length. The second part of the question is to write code that asks the user for two strings, reads the strings in, and calls longest_first to generate the output.
Solution:

```python
def longest_first( s1, s2 ):
    if len( s1 ) > len( s2 ):
        print '* ' + s1 + ' *'
        print '* ' + s2 + '!' * ( len( s1 ) - len( s2 ) ) + ' *'
    else:
        print '* ' + s2 + ' *'
        print '* ' + s1 + '!' * ( len( s2 ) - len( s1 ) ) + ' *'

s1 = raw_input( 'Enter string 1: ' )
s2 = raw_input( 'Enter string 2: ' )
longest_first( s1, s2 )
```

24. Write a Python program that requests two strings from the user using raw_input(). The program should then print each string on a separate line such that the second string appears just below the first string, overlapping by one letter, followed by a line of asterisks that underlines both strings. For example, given the words python and programming, your program should output:

```
python
     programming
****************
```

Solution:

```python
w1 = raw_input()
w2 = raw_input()
print w1
print ' ' * ( len( w1 ) - 1 ) + w2
print '*' * ( len( w1 ) + len( w2 ) - 1 )
```

25. Given the following two functions:

```python
def bunny(bpop, fpop):
    ## Returns next year's bunny population given current bunny and fox populations
    bpop_next = (10*bpop)/(1+0.1*bpop) - 0.05*bpop*fpop
    return int(max(0,bpop_next))

def fox(bpop, fpop):
    ## Returns next year's fox population given current bunny and fox populations
    fpop_next = 0.4 * fpop + 0.02 * fpop * bpop
    return int(max(0,fpop_next))
```

Assume these functions are already correctly defined in your program. Suppose the current bunny population is 200, and the current fox population is 10. Write code using the above functions to compute the projected populations of bunnies and foxes in two years from now, and output one of the three strings below based on these numbers using an if statement.
```
year 2016: more bunnies
year 2016: more foxes
year 2016: same number of bunnies and foxes
```

Solution:

```python
b = bunny( 200, 10 )
f = fox( 200, 10 )
b2 = bunny( b, f )
f2 = fox( b, f )
if b2 > f2:
    print 'year 2016: more bunnies'
elif b2 < f2:
    print 'year 2016: more foxes'
else:
    print 'year 2016: same number of bunnies and foxes'
```

26. Write a Python program that reads a sentence from the user using `raw_input()` and then does the following:

- Removes all instances of the string `and` if only they appear between two other words surrounded by spaces.
- Replaces all occurrences of the string `but` with `ot`.
- Prints the remaining string.
- Prints how many characters were removed from the original string.

For example, if the user types the following string:

```
butters and kenny but not andrew
```

your program should output:

```
otters kenny ot not andrew
Removed 6 characters
```

Solution:

```python
sentence = raw_input()
slen = len( sentence )
sentence = sentence.replace( ' and ', ' ' )
sentence = sentence.replace( 'but', 'ot' )
print sentence
print 'Removed', slen - len( sentence ), 'characters'
```
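The problem-26 replace chain can also be written for Python 3 (a sketch: the exam's code above is Python 2, so raw_input and print differ, and the helper name trim_sentence is ours, not from the exam). For the sample input, the chain removes 6 characters:

```python
# Python 3 re-implementation of the problem-26 solution (illustrative;
# the function name trim_sentence is not from the original exam).
def trim_sentence(sentence):
    original_len = len(sentence)
    sentence = sentence.replace(' and ', ' ')  # drop ' and ' between words
    sentence = sentence.replace('but', 'ot')   # 'but' -> 'ot'
    return sentence, original_len - len(sentence)

result, removed = trim_sentence('butters and kenny but not andrew')
print(result)                            # otters kenny ot not andrew
print('Removed', removed, 'characters')  # Removed 6 characters
```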
EECS150 - Digital Design
Lecture 3 - Verilog Introduction
Jan 29, 2013
John Wawrzynek

Outline
• Background and History of Hardware Description
• Brief Introduction to Verilog Basics
• Lots of examples – structural, data-flow, behavioral
• Verilog in EECS150

Design Entry
- Schematic entry/editing used to be the standard method in industry and universities.
  - Used in EECS150 until 2002
- Schematics are intuitive. They match our use of gate-level or block diagrams.
- Somewhat physical. They imply a physical implementation.
- Require a special tool (editor).
- Unless hierarchy is carefully designed, schematics can be confusing and difficult to follow on large designs.

Hardware Description Languages (HDLs) are the new standard - except for PC board design, where schematics are still used.

Hardware Description Languages
• Basic Idea:
  - Language constructs describe circuits with two basic forms:
    - Structural descriptions: connections of components. Nearly one-to-one correspondence with a schematic diagram.
    - Behavioral descriptions: use high-level constructs (similar to conventional programming) to describe the circuit function.
• Originally invented for simulation.
  - Now "logic synthesis" tools exist to automatically convert from HDL source to circuits.
  - High-level constructs greatly improve designer productivity.
  - However, this may lead you to falsely believe that hardware design can be reduced to writing programs!*

"Structural" example:

```
Decoder(output x0,x1,x2,x3; inputs a,b)
{
   wire abar, bbar;
   inv(bbar, b);
   inv(abar, a);
   and(x0, abar, bbar);
   and(x1, abar, b);
   and(x2, a, bbar);
   and(x3, a, b);
}
```

Warning: this is a fake HDL!

*Describing hardware with a language is similar, however, to writing a parallel program.

Sample Design Methodology
Hierarchically defines structure and/or function of circuit.
HDL Specification → Simulation
Verification: Does the design behave as required with regards to function, timing, and power consumption?
Synthesis: Maps the specification to the resources of the implementation platform (FPGA or custom silicon).
Note: This is not the entire story. Other tools are useful for analyzing HDL specifications. More on this later.

Verilog
• A brief history:
– Invented as a simulation language. Synthesis was an afterthought. Many of the basic techniques for synthesis were developed at Berkeley in the 80's and applied commercially in the 90's.
– Around the same time as the origin of Verilog, the US Department of Defense developed VHDL (a double acronym! VHSIC [Very High-Speed Integrated Circuit] HDL). Because it was in the public domain it began to grow in popularity.
– Afraid of losing market share, Cadence opened Verilog to the public in 1990.
– Verilog is the language of choice of Silicon Valley companies, initially because of high-quality tool support and its similarity to C-language syntax.
– VHDL is still popular within the government, in Europe and Japan, and at some universities.
– Most major CAD frameworks now support both.
– The latest Verilog version is "SystemVerilog".
– Latest HDL: C++ based. OSCI [Open SystemC Initiative].

Verilog Introduction
- A **module** definition describes a component in a circuit
- Two ways to describe module contents:
  - Structural Verilog
    - List of sub-components and how they are connected
    - Just like schematics, but using text
    - Tedious to write, hard to decode
    - You get precise control over circuit details
    - May be necessary to map to special resources of the FPGA
  - Behavioral Verilog
    - Describe what a component does, not how it does it
    - Synthesized into a circuit that has this behavior
    - Result is only as good as the tools
- Build up a hierarchy of modules. Top-level module is your entire design (or the environment to test your design).
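To see what a structural description computes, it can help to mimic it in ordinary software. Below is a small Python model (illustration only — Python is not part of the EECS150 flow) of the 2-to-4 decoder used as the structural example earlier, with one helper function standing in for each gate instance.

```python
# Python model of the 2-to-4 decoder netlist: two inverters feed four
# AND gates, producing a one-hot output.

def inv(x):
    return 1 - x

def and2(a, b):
    return a & b

def decoder(a, b):
    """Structural 2-to-4 decoder: exactly one output is high."""
    abar, bbar = inv(a), inv(b)   # the two inverters
    x0 = and2(abar, bbar)         # a=0, b=0
    x1 = and2(abar, b)            # a=0, b=1
    x2 = and2(a, bbar)            # a=1, b=0
    x3 = and2(a, b)               # a=1, b=1
    return (x0, x1, x2, x3)

# One-hot check over the full truth table.
for a in (0, 1):
    for b in (0, 1):
        assert sum(decoder(a, b)) == 1
```

Note that, as in a real netlist, there is no "execution order" among the gates — each output is purely a function of the current inputs.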
Verilog Modules and Instantiation
- **Modules define circuit components.**
- **Instantiation defines hierarchy of the design.**

```
module addr_cell (a, b, cin, s, cout);
   input a, b, cin;
   output s, cout;
endmodule
```

Instance of `addr_cell`:

```
module adder (A, B, S);
   addr_cell ac1 ( /* ...connections... */ );
endmodule
```

Note: A module is not a function in the C sense. There is no call and return mechanism. Think of it more like a hierarchical data structure.

### Structural Model - XOR example

```verilog
module xor_gate (out, a, b);
   input a, b;
   output out;
   wire aBar, bBar, t1, t2;
   not invA (aBar, a);
   not invB (bBar, b);
   and and1 (t1, a, bBar);
   and and2 (t2, b, aBar);
   or or1 (out, t1, t2);
endmodule
```

**Notes:**
- The instantiated gates are not "executed". They are active always.
- An xor gate already exists as a built-in (so really no need to define it).
- Undeclared variables are assumed to be wires. Don't let this happen to you!

### Structural Example: 2-to-1 mux

- **a)** 2-input mux symbol
- **b)** 2-input mux gate-level circuit diagram

```verilog
/* 2-input multiplexor in gates */
module mux2 (in0, in1, select, out);
   input in0, in1, select;
   output out;
   wire s0, w0, w1;
   not (s0, select);
   and (w0, s0, in0),
       (w1, select, in1);
   or (out, w0, w1);
endmodule // mux2
```

### Instantiation, Signal Array, Named ports

```verilog
module mux4 (in0, in1, in2, in3, select, out);
   input in0, in1, in2, in3;
   input [1:0] select;
   output out;
   wire w0, w1;
   mux2 m0 (.select(select[0]), .in0(in0), .in1(in1), .out(w0)),
        m1 (.select(select[0]), .in0(in2), .in1(in3), .out(w1)),
        m3 (.select(select[1]), .in0(w0), .in1(w1), .out(out));
endmodule // mux4
```

- **Signal array.** Declares select[1], select[0]
- **Named ports.** Highly recommended.
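The mux4-from-mux2 hierarchy can be mirrored in plain Python to check the intended behavior (illustration only, not Verilog): one `mux2` helper, instantiated three times just as in the module above.

```python
# Python model of the mux4 hierarchy: two first-level 2-to-1 muxes select
# within each pair of inputs, a third selects between the pair results.

def mux2(in0, in1, select):
    return in1 if select else in0

def mux4(in0, in1, in2, in3, select):
    # select is a 2-bit value: (select & 1) plays the role of select[0],
    # (select >> 1) & 1 the role of select[1], as in the signal array.
    w0 = mux2(in0, in1, select & 1)
    w1 = mux2(in2, in3, select & 1)
    return mux2(w0, w1, (select >> 1) & 1)

inputs = ['a', 'b', 'c', 'd']
assert [mux4(*inputs, select=s) for s in range(4)] == inputs
```

The intermediate names `w0` and `w1` correspond directly to the wires of the same name in the Verilog.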
### Simple Behavioral Model

```verilog
module foo (out, in1, in2);
   input in1, in2;
   output out;
   assign out = in1 & in2;
endmodule
```

- **"continuous assignment"** Connects out to be the "and" of in1 and in2. **Shorthand for explicit instantiation of an "and" gate (in this case).** The assignment continuously happens, therefore any change on the rhs is reflected in out immediately (except for the small delay associated with the implementation of the &). **Not like an assignment in C that takes place when the program counter gets to that place in the program.**

Example - Ripple Adder

```verilog
module FullAdder(a, b, ci, r, co);
   input a, b, ci;
   output r, co;
   assign r = a ^ b ^ ci;
   assign co = a&ci | a&b | b&ci;
endmodule

module Adder(A, B, R);
   input [3:0] A;
   input [3:0] B;
   output [4:0] R;
   wire c1, c2, c3;
   FullAdder add0(.a(A[0]), .b(B[0]), .ci(1'b0), .co(c1),   .r(R[0]) ),
             add1(.a(A[1]), .b(B[1]), .ci(c1),   .co(c2),   .r(R[1]) ),
             add2(.a(A[2]), .b(B[2]), .ci(c2),   .co(c3),   .r(R[2]) ),
             add3(.a(A[3]), .b(B[3]), .ci(c3),   .co(R[4]), .r(R[3]) );
endmodule
```

Continuous Assignment Examples

```verilog
assign R = X | (Y & ~Z);
assign r = &X;
assign R = (a == 1'b0) ? X : Y;
assign P = 8'hff;
assign P = X * Y;
assign P[7:0] = {{4{X[3]}}, X[3:0]};
assign {cout, R} = X + Y + cin;
assign Y = A << 2;
assign Y = {A[1], A[0], 1'b0, 1'b0};
```

### Verilog Operators

<table>
<thead>
<tr><th>Verilog Operator</th><th>Name</th><th>Functional Group</th></tr>
</thead>
<tbody>
<tr><td>[ ]</td><td>bit-select or part-select</td><td></td></tr>
<tr><td>( )</td><td>parenthesis</td><td></td></tr>
<tr><td>!</td><td>logical negation</td><td>Logical</td></tr>
<tr><td>~</td><td>negation</td><td>Bit-wise</td></tr>
<tr><td>&amp;</td><td>reduction AND</td><td>Reduction</td></tr>
<tr><td>~&amp;</td><td>reduction NAND</td><td>Reduction</td></tr>
<tr><td>|</td><td>reduction OR</td><td>Reduction</td></tr>
<tr><td>~|</td><td>reduction NOR</td><td>Reduction</td></tr>
<tr><td>^</td><td>reduction XOR</td><td>Reduction</td></tr>
<tr><td>~^ or ^~</td><td>reduction XNOR</td><td>Reduction</td></tr>
<tr><td>+</td><td>unary (sign) plus</td><td>Arithmetic</td></tr>
<tr><td>-</td><td>unary (sign) minus</td><td>Arithmetic</td></tr>
<tr><td>{ }</td><td>concatenation</td><td>Concatenation</td></tr>
<tr><td>{{ }}</td><td>replication</td><td>Replication</td></tr>
<tr><td>*</td><td>multiply</td><td>Arithmetic</td></tr>
<tr><td>/</td><td>divide</td><td>Arithmetic</td></tr>
<tr><td>%</td><td>modulus</td><td>Arithmetic</td></tr>
<tr><td>+</td><td>binary plus</td><td>Arithmetic</td></tr>
<tr><td>-</td><td>binary minus</td><td>Arithmetic</td></tr>
<tr><td>&lt;&lt;</td><td>shift left</td><td>Shift</td></tr>
<tr><td>&gt;&gt;</td><td>shift right</td><td>Shift</td></tr>
<tr><td>&gt;</td><td>greater than</td><td>Relational</td></tr>
<tr><td>&gt;=</td><td>greater than or equal to</td><td>Relational</td></tr>
<tr><td>&lt;</td><td>less than</td><td>Relational</td></tr>
<tr><td>&lt;=</td><td>less than or equal to</td><td>Relational</td></tr>
<tr><td>==</td><td>logical equality</td><td>Equality</td></tr>
<tr><td>!=</td><td>logical inequality</td><td>Equality</td></tr>
<tr><td>===</td><td>case equality</td><td>Equality</td></tr>
<tr><td>!==</td><td>case inequality</td><td>Equality</td></tr>
<tr><td>&amp;</td><td>bit-wise AND</td><td>Bit-wise</td></tr>
<tr><td>^</td><td>bit-wise XOR</td><td>Bit-wise</td></tr>
<tr><td>^~ or ~^</td><td>bit-wise XNOR</td><td>Bit-wise</td></tr>
<tr><td>|</td><td>bit-wise OR</td><td>Bit-wise</td></tr>
<tr><td>&amp;&amp;</td><td>logical AND</td><td>Logical</td></tr>
<tr><td>||</td><td>logical OR</td><td>Logical</td></tr>
<tr><td>?:</td><td>conditional</td><td>Conditional</td></tr>
</tbody>
</table>

### Verilog Numbers

**Constants:**
- **14** ordinary decimal number
- **-14** 2's complement representation
- **12'b0000_0100_0110** binary number ("_" is ignored)
- **12'h046** hexadecimal number with 12 bits

**Signal Values:** By default, values are unsigned

```verilog
e.g., C[4:0] = A[3:0] + B[3:0];
if A = 0110 (6) and B = 1010 (-6)
C = 10000 not 00000
i.e., B is zero-padded, not sign-extended
```

**wire signed [31:0] x;** declares a signed [2's complement] signal array.

Non-continuous Assignments
A bit strange from a hardware specification point of view. Shows off Verilog's roots as a simulation language. "always" block example:

```verilog
module and_or_gate (out, in1, in2, in3);
   input in1, in2, in3;
   output out;
   reg out;
   always @(in1 or in2 or in3) begin
      out = (in1 & in2) | in3;
   end
endmodule
```

Isn't this just: `assign out = (in1 & in2) | in3;`? Why bother?

Always Blocks
Always blocks give us some constructs that are impossible or awkward in continuous assignments.

```verilog
module mux4 (in0, in1, in2, in3, select, out);
   input in0, in1, in2, in3;
   input [1:0] select;
   output out;
   reg out;
   always @(in0 or in1 or in2 or in3 or select)
      case (select)
         2'b00: out = in0;
         2'b01: out = in1;
         2'b10: out = in2;
         2'b11: out = in3;
      endcase
endmodule // mux4
```

Couldn't we just do this with nested "if"s? Well yes and no!
Always Blocks
Nested if-else example:

```verilog
module mux4 (in0, in1, in2, in3, select, out);
   input in0, in1, in2, in3;
   input [1:0] select;
   output out;
   reg out;
   always @(in0 or in1 or in2 or in3 or select)
      if (select == 2'b00) out = in0;
      else if (select == 2'b01) out = in1;
      else if (select == 2'b10) out = in2;
      else out = in3;
endmodule // mux4
```

The nested if structure leads to a "priority logic" structure, with different delays for different inputs (the in3-to-out delay is greater than the in0-to-out delay). The case version treats all inputs the same.

Review - Ripple Adder Example

```verilog
module FullAdder(a, b, ci, r, co);
   input a, b, ci;
   output r, co;
   assign r = a ^ b ^ ci;
   assign co = a&ci | a&b | b&ci;
endmodule
```

```verilog
module Adder(A, B, R);
   input [3:0] A;
   input [3:0] B;
   output [4:0] R;
   wire c1, c2, c3;
   FullAdder add0(.a(A[0]), .b(B[0]), .ci(1'b0), .co(c1),   .r(R[0]) ),
             add1(.a(A[1]), .b(B[1]), .ci(c1),   .co(c2),   .r(R[1]) ),
             add2(.a(A[2]), .b(B[2]), .ci(c2),   .co(c3),   .r(R[2]) ),
             add3(.a(A[3]), .b(B[3]), .ci(c3),   .co(R[4]), .r(R[3]) );
endmodule
```

Example - Ripple Adder Generator
Parameters give us a way to generalize our designs. A module becomes a "generator" for different variations. Enables design/module reuse. Can simplify testing.

```verilog
module Adder(A, B, R);
   parameter N = 4;
   input [N-1:0] A;
   input [N-1:0] B;
   output [N:0] R;
   wire [N:0] C;
   genvar i;
   generate
      for (i=0; i<N; i=i+1) begin:bit
         FullAdder add(.a(A[i]), .b(B[i]), .ci(C[i]), .co(C[i+1]), .r(R[i]));
      end
   endgenerate
   assign C[0] = 1'b0;
   assign R[N] = C[N];
endmodule
```

- Declare a parameter with default value.
  - Note: this is not a port. Acts like a "synthesis-time" constant.
- Replace all occurrences of "4" with "N".
- The generate variable exists only in the specification - not in the final circuit.
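The parameterized ripple-adder generator can be cross-checked with a quick Python model of the same full-adder chain (illustration only; in the real flow you would simulate the Verilog itself).

```python
# Python model of the N-bit ripple adder: a chain of full adders with
# C[0] = 0 and the final carry appearing as R[N].

def full_adder(a, b, ci):
    r = a ^ b ^ ci
    co = (a & ci) | (a & b) | (b & ci)
    return r, co

def adder(A, B, N=4):
    """Add two N-bit numbers bit by bit, returning the (N+1)-bit result."""
    carry, result = 0, 0
    for i in range(N):                       # the generate-for loop
        r, carry = full_adder((A >> i) & 1, (B >> i) & 1, carry)
        result |= r << i
    return result | (carry << N)             # R[N] = C[N]

# Exhaustive check against integer addition for N = 4.
for a in range(16):
    for b in range(16):
        assert adder(a, b) == a + b
```

The exhaustive loop plays the role of a testbench: for a 4-bit adder, all 256 input pairs can be checked directly.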
```verilog
// Gray-code to binary-code converter
module gray2bin1 (bin, gray);
   parameter SIZE = 8;
   output [SIZE-1:0] bin;
   input [SIZE-1:0] gray;
   genvar i;
   generate
      for (i=0; i<SIZE; i=i+1) begin:bit
         assign bin[i] = ^gray[SIZE-1:i];
      end
   endgenerate
endmodule
```

- The generate variable exists only in the specification - not in the final circuit.
- `genvar` and `generate` are keywords that denote synthesis-time operations.
- The for-loop creates instances (with unique names).

More on Generate Loop
Permits variable declarations, modules, user defined primitives, gate primitives, continuous assignments, initial blocks and always blocks to be instantiated multiple times using a for-loop.
- generate if-else-if: based on an expression that is deterministic at the time the design is synthesized.
- generate case: the selecting case expression must be deterministic at the time the design is synthesized.

Verilog in EECS150
• We will primarily use **behavioral modeling** along with **instantiation** to 1) build hierarchy and, 2) map to FPGA resources not supported by synthesis.
• Favor continuous assign and avoid always blocks unless:
  - no other alternative: ex: state elements, case
  - helps readability and clarity of code: ex: large nested if-else
• Use named ports.
• Verilog is a big language. This is only an introduction.
  - Our textbook is a good source. Read and use chapter 4.
  - Be careful of what you read on the web. Many bad examples out there.
  - We will be introducing more useful constructs throughout the semester. Stay tuned!

---

Final thoughts on Verilog Examples
Verilog looks like C, but it describes hardware: multiple physical elements with parallel activities and temporal relationships.
A large part of digital design is knowing how to write Verilog that gets you the desired circuit. **First understand the circuit you want, then figure out how to code it in Verilog.** If you do one of these activities without the other, you will struggle. **These two activities will merge at some point for you.** Be suspicious of the synthesis tools! Check the output of the tools to make sure you get what you want.
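As one concrete way of "checking the output", the gray-to-binary converter from earlier can be modeled and exhaustively checked in Python (illustration only; `bin2gray` is the standard inverse mapping, added here just for the round-trip check — it is not in the slides).

```python
# Python model of gray2bin1: bin[i] is the reduction XOR of gray[SIZE-1:i].

def gray2bin(gray, size=8):
    binary = 0
    for i in range(size):
        bits = gray >> i                   # gray[SIZE-1:i]
        parity = bin(bits).count('1') & 1  # reduction XOR of those bits
        binary |= parity << i
    return binary

def bin2gray(b):
    """Standard binary-to-gray mapping, used only to round-trip the check."""
    return b ^ (b >> 1)

# Exhaustive round-trip check for all 8-bit values.
assert all(gray2bin(bin2gray(x)) == x for x in range(256))
```

An exhaustive check like this is cheap for 8 bits and catches exactly the kind of silent coding mistake the warning above is about.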
{"Source-Url": "http://inst.eecs.berkeley.edu/~cs150/sp13/agenda/lec/lec03-verilog.pdf", "len_cl100k_base": 4712, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24569, "total-output-tokens": 5180, "length": "2e12", "weborganizer": {"__label__adult": 0.0011072158813476562, "__label__art_design": 0.0023136138916015625, "__label__crime_law": 0.0007739067077636719, "__label__education_jobs": 0.01131439208984375, "__label__entertainment": 0.00021076202392578125, "__label__fashion_beauty": 0.000675201416015625, "__label__finance_business": 0.0003581047058105469, "__label__food_dining": 0.0008702278137207031, "__label__games": 0.0016460418701171875, "__label__hardware": 0.176513671875, "__label__health": 0.0016927719116210938, "__label__history": 0.0008144378662109375, "__label__home_hobbies": 0.001251220703125, "__label__industrial": 0.00408172607421875, "__label__literature": 0.00041794776916503906, "__label__politics": 0.0006923675537109375, "__label__religion": 0.0015583038330078125, "__label__science_tech": 0.2403564453125, "__label__social_life": 0.0002378225326538086, "__label__software": 0.01158905029296875, "__label__software_dev": 0.53759765625, "__label__sports_fitness": 0.00131988525390625, "__label__transportation": 0.002490997314453125, "__label__travel": 0.0003979206085205078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15668, 0.0341]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15668, 0.71046]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15668, 0.74706]], "google_gemma-3-12b-it_contains_pii": [[0, 263, false], [263, 1780, null], [1780, 3388, null], [3388, 4556, null], [4556, 5709, null], [5709, 6755, null], [6755, 7584, null], [7584, 10147, null], [10147, 11125, null], [11125, 12317, null], [12317, 14424, null], [14424, 15668, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 263, true], [263, 1780, null], [1780, 3388, null], [3388, 4556, null], [4556, 5709, null], [5709, 6755, null], [6755, 7584, null], [7584, 10147, null], [10147, 11125, null], [11125, 12317, null], [12317, 14424, null], [14424, 15668, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15668, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15668, null]], "pdf_page_numbers": [[0, 263, 1], [263, 1780, 2], [1780, 3388, 3], [3388, 4556, 4], [4556, 5709, 5], [5709, 6755, 6], [6755, 7584, 7], [7584, 10147, 8], [10147, 11125, 9], [11125, 12317, 10], [12317, 14424, 11], [14424, 15668, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15668, 0.08163]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
8101bbcfade9a6e5c2ab21a4636a49cdb8a14304
Troll, a Language for specifying Dice-rolls
Mogensen, Torben Ægidius
Published in: Proceedings of the 2009 ACM symposium on Applied Computing
DOI: 10.1145/1529282.1529708
Publication date: 2009
Document version: Publisher's PDF, also known as Version of record

ABSTRACT
Dice are used in many games, and often in fairly complex ways that make it difficult to unambiguously describe the dice-roll mechanism in plain language. Many role-playing games, such as Dungeons & Dragons, use a formalised notation for some instances of dice-rolls. This notation, once explained, makes dice-roll descriptions concise and unambiguous. Furthermore, the notation has been used in automated tools for pseudo-random dice-rolling (typically used when playing over the Internet). This notation is, however, fairly limited in the types of dice-rolls it can describe, so most games still use natural language to describe rolls. Even Dungeons & Dragons uses formal notation only for some of the dice-roll methods used in the game. Hence, a more complete notation is proposed in this paper, and a tool for pseudo-random rolls and (nearly) exact probability calculations is described. The notation is called "Troll", combining the initial of the Danish word for dice ("terninger") with the English word "roll". It is a development of the language Roll described in an earlier paper. The present paper describes the most important features of Troll and its implementation.

Categories and Subject Descriptors
D.3.2 [Programming Languages]: Language Classifications—Specialized application languages; G.3 [Probability and Statistics]: Distribution functions

General Terms
Languages

Keywords
Dice, Probability, Domain-Specific Languages

1. INTRODUCTION
The first thing to ask is: Why would you want a formal notation for specifying dice-rolls?
There are several answers to this:
- Formal notation can give a concise and unambiguous description which can be used to communicate ideas between people. This is, for example, the main motivation for mathematical notation. Games that use dice can (and sometimes do) use formal notation to describe dice-rolls.
- A formal notation is machine readable, which enables use of tools that analyse specifications of dice-rolls. Two types of tools are especially relevant for dice-rolls:

Internet dice-roll servers for playing games that involve dice online or by email. There are many dice-roll servers, but they are each limited to a small number of different types of dice-roll. With a universal notation, a dice-roll server can perform rolls to any specification.

Probability calculators As with any random element used in games, players and game designers are interested in knowing the probability of each possible outcome. A tool that takes a specification of a roll and calculates this is, hence, quite useful.

The concept of using formal notation for dice is not new: One of the first games to use a variety of dice shapes and rolling methods was Dungeons & Dragons from 1974 [3]. The rules introduced a formal notation for dice consisting of the following elements:

\( dn \) describes rolling a single die with sides numbered 1–n. A normal six-sided die is, hence, denoted by \( d6 \).

\( mdn \) describes rolling \( m \) dice with sides numbered 1–n and adding up their results. \( 3d6 \), hence, denotes rolling three six-sided dice and adding them up to get a result between 3 and 18.

In spite of introducing this notation, most of the rulebook used an alternative, less explanatory notation: An interval of values was shown, and it was up to the player to figure out which dice should be rolled to obtain this interval. For example, "3–18" would denote rolling and adding three six-sided dice, while "2–20" would denote rolling and adding two ten-sided dice.
In later editions, the \( mdn \) notation was used more and more, and in the most recent editions, the interval notation is completely eliminated. The \( mdn \) notation is also used in other role-playing games and has been extended to include addition, so you can write things like $2d8+1$ or $d6+d10$. Nevertheless, this extension is far from enough to describe dice-roll methods used in modern role-playing games. Here are some examples of methods from popular games:
- To determine values for personal characteristics such as strength and intelligence, Dungeons & Dragons suggests several methods, one of which is to roll four six-sided dice, discarding the lowest and adding the rest.
- "The World of Darkness" [1] lets a player roll a number of ten-sided dice equal to his ability score. If any of these show 10, additional dice equal to the number of 10s are rolled. If some of these also show 10, this is repeated until no more 10s are rolled. At this point, the number of values exceeding 7 are counted to give the final value of the roll which, consequently, can be any number from 0 upwards.
- "Ironclaw" [4] lets a player roll three dice of different sizes (number of sides) determined by ability scores and count how many exceed a threshold determined by the difficulty of the task.
- "Silhouette" [13] lets a player roll a number of ten-sided dice equal to his ability score and selects the highest of these as the result.

A universal method for dice-rolls needs to be able to describe all of the above, and more. Any Turing-complete programming language can do this, but the result is not necessarily very concise or readable, and it may be impossible to analyse descriptions for such things as probability distributions. Hence, we propose a notation that extends the $mdn$ notation from Dungeons & Dragons while being readable to non-programmers after a short introduction.
This is not my first attempt at doing so: In 1996, I proposed a notation called "Roll" [7], which attempted the same thing. While it was moderately successful (it was used to calculate probabilities when modifying rules for the new edition of "The World of Darkness" [1]), experiences showed that the notation was not as readable to non-programmers as it could be and that it lacked features to concisely describe some of the more complex rolls. To address this, Roll was over time modified and extended until it had little resemblance to the original. Hence, a new name "Troll" was chosen for the revised and extended language. One of the key features of Troll (and Roll) is the ability to automatically analyse descriptions for probability. Naive calculation will in many instances be far too time-consuming, so a number of optimisations are used. These are described in section 4.

2. THE BASICS OF TROLL
When you roll several dice at the same time, you don't normally care about any specific order, so it is natural to consider the result of a dice-roll as a multiset, i.e., a collection where the order doesn't matter but where multiplicity of elements does. Multisets of integers are used throughout Troll as the only data structure. Set operations like membership, union, intersection and difference extend in the obvious way to multisets. For details, see [17]. We will call a multiset of integers "a value". A value containing exactly one number will in some contexts be treated as the number it contains. Some constructs require singleton values and run-time errors will be generated if these are applied to non-singleton values. Likewise, some operations require positive or non-negative numbers or non-empty multisets and will generate errors if applied to something other than that. Like in the original Dungeons & Dragons notation, $dn$ means rolling a die with $n$ sides numbered from 1 to $n$.
However, $mdn$ is the value (multiset) of $m$ integers obtained by rolling $m$ $n$-sided dice, and you need an explicit sum operator to add up the numbers. Hence, what in Dungeons & Dragons is written as just $3d6$ will in Troll be written as $\mathrm{sum}\ 3d6$. The reason for this is that, unlike in Dungeons & Dragons, we need to do many other things to the dice than just add them. For example, in Silhouette [13], we need to find the maximum of $n$ ten-sided dice, which we in Troll can write as $\mathrm{max}\ n\mathrm{d}10$. There are also operators for finding the $m$ largest numbers in a value and for finding the minimum or $m$ smallest numbers. For example, the method mentioned earlier for determining personal characteristics in Dungeons & Dragons can be written as $\mathrm{sum\ largest}\ 3\ 4d6$. When the number of dice rolled depends on in-game values (such as ability level), it may be useful to use a variable to represent the number of dice. For example, $\mathrm{max}\ n\ d10$ will take the number of dice rolled from the variable $n$. Variables can be bound to values when running Troll or locally inside definitions. We will return to this later. Anywhere a number can occur in Troll, you can use an expression instead, so it is, for example, perfectly legal (though not very sensible) to write $d\ d10$, which would roll a die the number of sides of which is determined by a $d10$. An expression like $d(2d10)$ will, however, cause a run-time error, as the $d$ operator requires singletons as arguments. The usual arithmetic operators on numbers, too, are defined only on singleton arguments. Like $d$ can take a numeric prefix to specify the number of rolls, you can specify multiple rolls of more complex expressions by using the $\#$ operator: $6\#\mathrm{sum}\ 3d6$ specifies a multiset of six numbers, each obtained by adding three dice. You can use set-like notation like $\{3, 4, 3\}$ to build multisets. Union of values is done using the $\cup$ operator.
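A rough Python sketch of these multiset operators may help fix the semantics (illustration only: the real Troll implementation is written in Moscow ML, and the helper names below are invented, not Troll syntax). Multisets are modeled as plain lists.

```python
import random

# Sketch of core Troll operators on multisets-as-lists.

def d(n):
    return [random.randint(1, n)]                    # dn: a singleton value

def md(m, n):
    return [random.randint(1, n) for _ in range(m)]  # mdn: m dice, unsummed

def largest(m, value):
    return sorted(value)[len(value) - m:]            # the m largest elements

assert 3 <= sum(md(3, 6)) <= 18           # sum 3d6

stat = sum(largest(3, md(4, 6)))          # sum largest 3 4d6 (D&D stats)
assert 3 <= stat <= 18

assert 1 <= max(md(5, 10)) <= 10          # max 5d10 (Silhouette-style)
```

Note how `sum` and `max` apply to the whole multiset, mirroring Troll's explicit sum and max operators rather than Dungeons & Dragons' implicit addition.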
2.1 Comparisons, conditionals and bindings
Some dice-roll mechanisms, such as that described above for Ironclaw [4], require the rolled numbers to be compared against a threshold and counting of those that pass. Comparison is in Troll done by filters, that compare the elements of a value with a single number and only return the elements where the comparison holds. For example, $3<4d6$ rolls four $d6$ and retains only those numbers $n$ such that $3 < n$. You can then count the number of remaining elements using the $\mathrm{count}$ operator, which simply returns the number of elements in a value. So, an instance of the Ironclaw system where you roll one $d8$ and two $d10$ and where the threshold is 5 can be written as $\mathrm{count}\ 5 \leq \{d8, d10, d10\}$. There are filters for $=, <, >, \leq, \geq$ and $\neq$. Note the asymmetric nature of filters: The left-hand operand must be a singleton and is never returned, while the right-hand operand is a value, part of which may be returned.

Footnotes: The space between $n$ and $d$ (as in $\mathrm{max}\ n\ d10$) is required to separate the lexical tokens. Allowing any expression to be prefixed by a number causes ambiguity, hence the need for an explicit $\#$ operator.

Filters can also be used in conditionals. The expression if \( c \) then \( e_1 \) else \( e_2 \) evaluates the expression \( c \). If this evaluates to a non-empty value, \( e_1 \) is evaluated and returned, otherwise \( e_2 \) is evaluated and returned. Note that the semantics of filters ensures that, for example, \( x < y \), where \( x \) and \( y \) are singletons, is non-empty exactly when \( x < y \). In some cases you want to do two operations on the same die. You can't just repeat the expression that rolls the die, as you will get two independent rolls.
Instead, you need to locally bind the value of a roll to a name and refer to that name repeatedly. The syntax for this is \( x := e_1;\ e_2 \). While this may look like an assignment, it is a local binding corresponding to, for example, \( \text{let } x = e_1 \text{ in } e_2 \) in Haskell. A local binding allows us to define the dice-roll for Backgammon, where two identical dice are doubled to give four identical numbers:

x := d6;
y := d6;
if x=y then {x, x, y, y} else {x, y}

2.2 Repetition
Troll has two constructs for repeated die-rolling. The simplest is the repeat loop, which repeats a roll until a condition holds and then returns the value of the last roll (the one that fulfills the condition). For example,

repeat x := d10 until x > 1

keeps rolling a d10 until the result is greater than 1. It will, hence, return a value in the range 2...10. Note that the variable \( x \) is locally bound, not assigned to, so a loop like

x := 0; repeat x := x+1 until x = 10

does not terminate, as the two bound \( x \)'s are different. The above is equivalent to

x := 0; repeat y := x+1 until x = 10

which is, clearly, not terminating. The only thing that can change the value of the bound variable in different iterations is if the expression has a random element, such as a \( d6 \) sub-expression. Basically, the exact same roll is repeated until the condition holds (if ever). For World of Darkness [1], we want to collect all the dice rolled until a certain condition is fulfilled, so we need a different kind of loop. The accumulate loop works like the repeat loop, but instead of returning just the value that fulfills the condition, it returns the union of all the values up to and including this point.
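Both loop forms can be sketched in Python by treating the roll expression as a function that is re-evaluated from scratch each iteration, so the bound variable is fresh every time (illustration only; the helper names are invented and this is not real Troll syntax).

```python
import random

# Sketch of Troll's repeat and accumulate loops.

def repeat(roll, until):
    while True:
        x = roll()              # x is bound anew every iteration
        if until(x):
            return x

def accumulate(roll, until):
    collected = []
    while True:
        x = roll()
        collected.append(x)     # union of all values seen so far
        if until(x):
            return collected

# repeat x := d10 until x > 1  -- always lands in 2..10
assert 2 <= repeat(lambda: random.randint(1, 10), lambda x: x > 1) <= 10

# accumulate x := d10 until x < 10  -- keep rolling while 10s come up
rolls = accumulate(lambda: random.randint(1, 10), lambda x: x < 10)
assert rolls[-1] < 10 and all(r == 10 for r in rolls[:-1])

# World of Darkness successes: count the rolled values exceeding 7
successes = sum(1 for r in rolls if 7 < r)
assert 0 <= successes <= len(rolls)
```

The key point the sketch makes concrete: nothing carries over between iterations except the randomness of the roll itself.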
With this, the World of Darkness dice roll can be expressed as

count 7< accumulate x := d10 until x < 10

Note that, like with repeat, the bound variable only changes as a result of different values of random elements in the expression; all iterations perform the same actions until these result in a value that fulfills the condition.

2.3 Other features
The above sections describe only the essential features of Troll. There are many other features including foreach loops, removal of duplicates in a multiset, random selection of \( n \) elements in a multiset and even recursive functions. To see a full description of the language, go to the Troll web page [6].

3. DIFFERENCES FROM ROLL
The basic idea of operations on multisets is unchanged from Roll to Troll, so the changes have mainly been addition of more operators, changes to syntax and new loop structures. Below is a summary of the changes and the reasons for them:
- The dice-operator \( d \) was in Roll purely a prefix operator, so to roll several identical dice, you needed the \# operator, i.e., \( 3\#d6 \) instead of \( 3d6 \). Troll added the \( mdn \) form to get closer to the familiar Dungeons & Dragons notation.
- Troll added the set-like notation with curly braces, where Roll required building multisets by using the \( \cup \) (union) operator. In particular, this has made specification of the empty collection easier, as you can write \( \{\} \) instead of, for example, \( 0\#0 \).
- Local binding in Roll used a let-in syntax like in ML or Haskell, but users found the assignment-like syntax easier to read and write, especially when several bindings are nested.
- Filters (comparisons) were prefix operators in Roll, so you would write \( {<}3\ x \) instead of \( 3{<}\ x \). The motivation for the prefix syntax in Roll was to emphasise the asymmetric nature of filters, but users found it confusing.
- Roll had a more powerful loop structure that allowed changes in variables between iterations, but it was too complex for most users. Hence, it was replaced by the simpler repeat and accumulate loops, and recursive functions were added to handle the more complicated (and much rarer) cases. - More predefined operators were added. Some were just abbreviations of common cases, such as \( \text{min} \) abbreviating \( \text{least} 1 \), while others would be impossible to emulate without using recursive functions. An example of this is \( \text{pick} \), which picks \( n \) elements from a multiset. As an example, the World of Darkness roll described earlier would in Roll be written as \[ \text{count } N \# \text{if } =10 \text{ then } d10 \text{ else } 0\#0 \] which is, clearly, less readable. Nearly all changes have been motivated through discussions with or requests from users of earlier versions. Sometimes to make descriptions easier to write and read and sometimes to make it possible to at all specify a certain dice-roll method. 4. IMPLEMENTATION Troll is implemented as an interpreter written in Standard ML (specifically, Moscow ML). Two semantics are implemented: - A random-roll semantics, where the interpreter makes random samples of the described dice-roll. - A probability semantics, which calculates the probability distribution of the possible outcomes of the described dice-roll. The random-roll semantics is implemented as a fairly straightforward interpreter using a pseudo-random-number generator seeded by system time, so this will not be detailed further. The implementation of the probability semantics is a bit more interesting, so we will elaborate on this. If we for the moment ignore loops and recursive functions, a probability distribution for a dice roll is a finite map from outcomes to probabilities, such that the probabilities add up to one. 
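A finite map in this sense is easy to realise concretely. The sketch below is illustrative Python with exact rationals, not the paper's Moscow ML implementation; it builds such a map for the sum of two independent dice:

```python
from itertools import product
from fractions import Fraction

def die(n):
    """Finite map for a single dn: each face with probability 1/n."""
    return {k: Fraction(1, n) for k in range(1, n + 1)}

def sum_dist(*dists):
    """Combine independent finite maps under addition ('sum'),
    merging equal outcomes by adding their probabilities."""
    out = {}
    for combo in product(*(d.items() for d in dists)):
        total = sum(v for v, _ in combo)
        p = Fraction(1)
        for _, q in combo:
            p *= q
        out[total] = out.get(total, 0) + p
    return out

two_d2 = sum_dist(die(2), die(2))
# two_d2 == {2: 1/4, 3: 1/2, 4: 1/4}, and the probabilities sum to one
```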
Loops and recursion can make the number of possible outcomes infinite and allow the possibility of non-termination with non-zero probability, so a finite map is insufficient. We will, nevertheless, use finite maps and deal with the infinity issue later. We will write a finite map as a set of pairs of values and probabilities. For example, the distribution for sum 2d2 is: \[ \{(2,0.25), (3,0.5), (4,0.25)\} \] There are, basically, two ways in which we can calculate a finite probability map for a dice-roll definition: 1. We can from each subexpression produce a finite map for its possible outcomes and combine these to find a finite map for the outcomes of the full expression. We call this *enumeration in space*. 2. We can use Prolog-style backtracking to enumerate all possible combinations of values of the random choices in the expression, compute an outcome for each combination, and collect these into a finite map for the full expression. We call this *enumeration in time*. Each approach has advantages and disadvantages: 1. Enumeration in time needs only keep track of one value at any given time (except at the top-level count), so you don't need very much space. Enumeration in space must store a finite map for every subexpression, and if a subexpression has many more intermediate values than final values, this can use very large amounts of memory. An example is sum \(n\text{d}10\): there are \(O(n^9)\) possible values of \(n\text{d}10\) but only \(O(n)\) possible values for the sum\(^3\). Hence, enumeration in time will use only \(O(n)\) space, while enumeration in space will use \(O(n^9)\) space. 2. Because enumeration in time looks at intermediate values one at a time, it can not recognize that it has seen a value before and will, hence, often repeat calculations that it has already done. Enumeration in space can combine identical values in the finite map by adding up their probabilities and, hence, avoid doing the same calculation twice. For example, to find the distribution for sum \(n\text{d}10\), enumeration in time has to look at \(10^n\) multisets of \(n\) numbers and add each of them up, while enumeration in space combines values for \(n\text{d}10\) first, so it only needs to add up \(O(n^9)\) multisets. Though space is more costly than time on modern computers, a reduction from exponential time to polynomial time is worth a polynomial increase in space. \(^3\)To be precise, \(n\text{d}10\) has \(\binom{n+9}{9}\) possible outcomes. where $\oplus$ is the operator for the homomorphism $f$, $\hat\oplus$ is $\oplus$ lifted to union-free distributions and $\oplus^+ d$ is an optimized version of $d\oplus d$. ### 4.1 Local bindings If we locally bind a value to a variable, as in $x := d6; x+x$, the two occurrences of $x$ after the semicolon must always refer to the same value. Hence, in the expression $x+x$, the distribution for $x$ must have only one possible outcome. So the local binding must normalise the distribution for $x$ to a finite map, then evaluate $x+x$ for each possible value and combine the results into a new finite map. By normalising, we lose all of the optimisations of using a non-normalised representation, so local binding is one of the most costly operations in Troll. We represent normalised finite maps as a special case of the non-normalised representation in which there are no union nodes and the left operand of a choice node is always of the form $M!$, i.e., a normalised distribution $N$ is either $M!$ or $M!|p\, N$. Furthermore, the nodes of the form $M!$ are in strictly ascending order (using a lexicographic ordering on multisets). ### 4.2 Loops The first observation is that, since all iterations evaluate the same expression and the last such evaluation provides the result, the outcomes of a loop of the form $$\text{repeat } x := e \text{ until } c$$ form a subset of the outcomes of $e$.
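The payoff of enumeration in space can be seen concretely for sums: folding one die at a time by convolution merges equal outcomes as it goes, so sum 50d10 touches only 451 distinct sums rather than 10^50 combinations. The Python below is an illustrative sketch, not Troll's actual representation (which also handles non-normalised unions and choices):

```python
from fractions import Fraction

def convolve(dist_a, dist_b):
    """'Enumeration in space' for sums: combine two finite maps by
    convolution, merging equal outcomes by adding probabilities."""
    out = {}
    for a, p in dist_a.items():
        for b, q in dist_b.items():
            out[a + b] = out.get(a + b, 0) + p * q
    return out

def sum_ndk(n, k):
    """Exact distribution of 'sum ndk', folding in one die at a time:
    polynomial work instead of enumerating all k**n combinations."""
    single = {i: Fraction(1, k) for i in range(1, k + 1)}
    acc = {0: Fraction(1)}          # distribution of an empty sum
    for _ in range(n):
        acc = convolve(acc, single)
    return acc

dist = sum_ndk(50, 10)   # 50d10: only 451 distinct sums, not 10**50 paths
```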
The way we handle $\text{repeat}$ loops in Troll is to first calculate the distribution $d$ for $e$ and then rewrite $d$ into a form $d_1|p\, d_2$, where all outcomes of $d_1$ fulfil the condition $c$ and none of those of $d_2$ do. It is now clear that the distribution for $\text{repeat } x := e \text{ until } c$ is $d_1$: repetition is done until we have a result in $d_1$, regardless of how unlikely this is. There is a possibility that $p = 0$, i.e., that none of the outcomes of $e$ fulfil $c$. It would be possible to include nontermination as a possible value in distributions, but since dice rolls are intended to terminate, we instead report an error whenever there is a positive chance of nontermination. An accumulating loop of the form $$\text{accumulate } x := e \text{ until } c$$ can have an infinite number of possible outcomes. If we, like above, rewrite the distribution for $e$ into the form $d_1|p\, d_2$, we find that the distribution $d'$ for the loop can be defined by the equation $$d' = d_1|p\, (d_2 \cup d')$$ This will not have a finite solution unless $p = 1$ or $d_2 = \{\}$. Instead of trying to work with infinite maps, we have chosen to approximate by unfolding the above equation a finite (but user-definable) number of times and then replacing the remaining recursive reference to $d'$ by $d_1$. As long as rerolls have probability less than 1, we can get arbitrarily good approximations by increasing the unroll depth. We are now left with the problem of rewriting $d$ into $d_1|p\, d_2$ given the condition $c$. We note that the loop introduces a local binding, so we start by normalising $d$. We then find $d_1|p\, d_2$ by the following function: \[ C(M!|p M!) = M!|p M!
\\ E(\{\}!) = 1 \\ E(M!) = 0 \text{ if } M \neq \{\} \\ E(d|p\, d') = p \cdot E(d) + (1-p) \cdot E(d') \\ E(d \cup d') = E(d) \cdot E(d') \\ E(2 \times d) = E(d)^2 \] The $M!|p M!$ in the first line may seem a bit curious, as it is equivalent to $M!$, but since $p$ is significant for later calculations, we write the distribution in this redundant way. If there are several possible values for $x$, i.e., if the normalised distribution contains a choice, we calculate the split for each possible value and combine the results. $E(d)$ is the probability that an outcome of $d$ is the empty multiset. Since conditions are considered true when they evaluate to non-empty multisets, the probability of a condition being true is $1$ minus the probability of its value being empty. ### 4.3 Other optimisations In most places where distributions are built, some local simplifications at the top level of the tree structure are attempted. An incomplete list of these is shown below. \[ \begin{align*} d|p\, d & \rightarrow d \\ d_1|1\, d_2 & \rightarrow d_1 \\ d_1|0\, d_2 & \rightarrow d_2 \\ d_1|p (d_1|q\, d_2) & \rightarrow d_1|p'\, d_2 & \text{ where } p' = p+q-pq \\ d \cup d & \rightarrow 2 \times d \\ \{\}! \cup d & \rightarrow d \\ M! \cup N! & \rightarrow (M \cup N)! \end{align*} \] These add a small overhead to the construction of trees, but can sometimes reduce their size dramatically. ## 5. ASSESSMENT So, how well does Troll work? ### 5.1 The language While Troll is considerably easier for non-programmers and programmers alike to use than Roll, people with no programming experience usually find it hard to write definitions that involve conditionals or loops; expressions like $\text{sum largest } 3\ \text{4d6}$ or $\text{count } 7<\ \text{5d10}$, on the other hand, are not usually any problem. People with a minimum of programming experience (such as experience from writing formulae in spreadsheets) can usually write definitions using a single loop or conditional, and more experienced users can use all of the language.
Usually, people can read and understand definitions that are more complex than they can write. While a number of game designers around the world use Troll to calculate probabilities, the notation has not been adopted in any game rules texts. Any one game will usually only use one or two different dice-roll methods, so it is easier for the writers to use a specialised notation or just plain English. Nor has any Internet dice-server used Troll to allow users to describe rolls that are not pre-programmed. So, overall, the success of Troll has almost exclusively been in probability calculation, where there (to my knowledge) are no other similar tools. 5.2 The implementation Due to the optimisations enabled by the non-normalised representation, calculating the probability distribution of most simple definitions is very fast. For example, calculating the distribution for sum 50d10 takes about 0.1 second on my fairly old machine⁴ and the calculation for count 7<100d10 takes less than 0.01 second. But when rolls combine a large number of dice and operations that do not distribute well over the non-normalised representation, calculations can take very long and use enormous amounts of memory. This can even happen for dice-roll systems used in actual games. For example, the game “Legend of Five Rings” [18] uses a roll mechanism that can be described in Troll as \[ \text{sum (largest M N}(\text{sum accumulate } x := \text{d}10 \text{ until } x<10)) \] where M and N depend on the situation. With M = 3, N = 5 and the maximum number of iterations for accumulate set to 5, Troll takes nearly 500 seconds to calculate the result, and it gets much worse if any of the numbers increase. If exact calculation of probabilities takes too long, the random-roll semantics of Troll can be used to generate a large number of samples which can be used to calculate statistics that approximate the probabilities. 6. 
RELATED WORK Apart from the notation that originated in Dungeons & Dragons [3], most examples of dice-roll notation are used only in games from a single publisher. Wikipedia [16] lists a few examples of these. Besides Roll [7] and Troll, the only other attempt at defining a universal dice-roll notation that I know of is in a Usenet post from 1992 [14]. There are many similarities between this and Troll. Numbers are kept separate until explicitly summed or otherwise operated on, and there are clear equivalences to the Troll operations sum, d, #, least and largest (but not to the more complex Troll features). A partial implementation of the language exists in a dice-roll calculator [15]. There are many examples of extending traditional languages with probabilistic choice operators, e.g., [9, 2, 12], but in the main these do not allow calculation of probabilities, only sampling at runtime. Other languages are designed for calculating probabilities [10, 5, 8, 11]. None of these are specialised for defining dice-rolls, but some similarities to Troll exist. For example, the stochastic lambda calculus [11], like Troll, can be instantiated to both sampling and probability calculation. The authors note that calculating probabilities for product spaces (which are similar to unions) can take a very long time and discuss translating expressions into measure terms that keep the parts of a product space separate as long as possible. This has some similarities to using a non-normalised form, but the measure terms are more complex (and potentially more powerful) than the representation used in Troll. The main probabilistic construct in the stochastic lambda calculus is choose p e₁ e₂, which is equivalent to the d₁ |p d₂ form in the unnormalised representation in Troll. Roll, the predecessor of Troll, is described in a paper [7] that includes formal semantics for both sampling and probability calculation.
That paper also describes an implementation using an unnormalised representation similar to the one presented here, but not all of the optimisations described above were applied to Roll.

7. REFERENCES

⁴3.2 GHz Pentium 4, 1.5 GB RAM
Agenda ❖ Architecture Overview ❖ NOVA Building Blocks ❖ Recent Innovations ❖ ARM/x86 Code Unification ❖ Advanced Security Features (x86) ❖ Performance ❖ Q&A Architecture Overview [Architecture diagram: the NOVA Microhypervisor (ARMv8-A or x86) hosts UltraVisor™ and UltraSecurity™; each VM (Linux, Windows, Appliance, RTOS, Unikernel) is paired with its own VMM; host-level components include the UART MUX (UMX), VirtIO Socket MUX (vSMX), Network MUX (vSwitch), UART/Storage/Network drivers, Platform Manager, host apps and a Master Controller; a second view shows VMI and separate virtual address spaces split along the user/kernel and guest/host boundaries] NOVA Microhypervisor Building Blocks Protection Domains, Execution+Scheduling Contexts, Portals, Semaphores Hypercall interface uses capability-based access control for all operations - Very fast synchronous IPC with time donation and priority inheritance **NOVA Innovation Timeline** <table> <thead> <tr> <th>Release</th> <th>Feature</th> </tr> </thead> <tbody> <tr> <td>21.08</td> <td>CPU Freq. Enumeration, Unified Time Mgmt., CR0 / CR4 Intercepts</td> </tr> <tr> <td>21.17</td> <td>SMMU Stream Masking, CET Support</td> </tr> <tr> <td>21.26</td> <td>CAT / CDP Support, Extended State Mgmt., AVX512 Support</td> </tr> <tr> <td>21.35</td> <td>Enhanced SMMU Invalidation, TSC_AUX Support</td> </tr> <tr> <td>21.43</td> <td>TME / TME-MK Key Programming, DRNG Support</td> </tr> <tr> <td>21.52</td> <td>Unified Code Base, Generic Page Tables, ARM/x86 Feature Parity</td> </tr> <tr> <td>22.08</td> <td>ACPI S3 / S4 Suspend / Resume on x86</td> </tr> <tr> <td>22.17</td> <td>Enhanced ACPI S1 / S5 / Reset Platform Shutdown</td> </tr> <tr> <td>22.26</td> <td>ACPI S1 / S5 / Reset Platform Shutdown</td> </tr> <tr> <td>22.35</td> <td>ACPI S3 / S4 Suspend / Resume on x86</td> </tr> <tr> <td>22.43</td> <td>ACPI S1 / S5 / Reset Platform Shutdown</td> </tr> <tr> <td>22.52</td> <td>ACPI S1 / S5 / Reset Platform Shutdown</td> </tr> <tr> <td>23.08</td> <td>Extended ACPI S1 / S5 / Reset Platform Shutdown</td> </tr> </tbody> </table>
**Approximately 2 months between releases** NOVA Design Goals ❖ Provide the same (or similar) functionality across all architectures ❖ Generic API that abstracts from architectural differences as much as possible ❖ Simple Build Infrastructure ➢ make ARCH= BOARD= ❖ Standardized Boot Process and Resource Enumeration ➢ Use Multiboot v2/v1, UEFI, ACPI, PSCI ❖ Formal Verification of highly concurrent C++ code and weakly-ordered memory ❖ Modern, Small, Fast: Best-in-Class Security and Performance Functions can be categorized as - **Same API / Same Implementation** - can share source/header/spec file - **Same API / Different Implementation** - can potentially share header/spec file - **Different API and Implementation** - cannot share anything Architecture-specific files will override generic files with the same name --- **NOVA** - **Makefile** - **build-aarch64** - **build-x86_64** - **doc** - **inc** - `aarch64` - `x86_64` - **src** - `aarch64` - `x86_64` Unified Code Base <table> <thead> <tr> <th></th> <th>SLOC</th> <th>generic</th> <th>ASM</th> </tr> </thead> <tbody> <tr> <td>x86 (x86_64 7183 + generic 4300)</td> <td>11483</td> <td>37.4%</td> <td>3.9%</td> </tr> <tr> <td>ARM (aarch64 5683 + generic 4300)</td> <td>9983</td> <td>43.0%</td> <td></td> </tr> </tbody> </table> NOVA x86 Binary (ACPI) - 68808 Bytes Code - 2464 Bytes Data NOVA ARM Binary (ACPI) - 66916 Bytes Code - 300 Bytes Data SLOC based on release-23.08.0, binary sizes based on gcc-12.2.0 build. Other versions will produce different numbers.
Protecting against Boot-Time DMA Attacks - Firmware owns platform hardware prior to UEFI ExitBootServices - ExitBootServices drops DMA protections for Legacy OS Support 😞 - Window of opportunity for DMA attack before NOVA can enable IOMMU ⇒ NOVA disables bus masters and manages ExitBootServices flow Flexible Load Address - No physical load-address range works across all platforms/architectures - Some platforms have RAM starting at 0, some have MMIO starting at 0 - Bootloader may want to move image to end of memory - Load address now flexible (can move NOVA up/down by multiples of 2 MiB) - No ELF relocation support needed - Init section mapped 1:1 and uses position-independent (mode-independent) code - Runtime section mapped V⇒P via paging and uses virtual memory - Multiboot protocol deficiencies currently limit load address to < 4 GiB - Many multiboot structures defined as 32-bit and nothing defined for aarch64 😞 # Power Management: Overview of ACPI States <table> <thead> <tr> <th>Platform Global States</th> <th>Platform Sleep States</th> <th>Core Idle States</th> <th>Core Performance States</th> </tr> </thead> <tbody> <tr> <td>G0 (Working)</td> <td>S0 (Working)</td> <td>C0 (Active)</td> <td>P0</td> </tr> <tr> <td>G1 (Sleeping)</td> <td>S1 (Stop Grant)</td> <td>C1 (Halt)</td> <td>P1</td> </tr> <tr> <td></td> <td>S2 (Power-On Suspend)</td> <td>C2 (Stop Clock)</td> <td>P2</td> </tr> <tr> <td></td> <td>S3 (Suspend-to-RAM)</td> <td>C3 (Deep Sleep)</td> <td>P3</td> </tr> <tr> <td></td> <td>S4 (Suspend-to-Disk)</td> <td>C4 (Deeper Sleep)</td> <td>P4</td> </tr> <tr> <td></td> <td>S5 (Soft Off)</td> <td>C6 (Deep Power Down)</td> <td>P5</td> </tr> </tbody> </table> Suspend/Resume: Possible Approaches ❖ Either save/restore entire register set of all devices ➢ Significant amount of state to manage ➢ Does not work for devices with hidden “internal state” or complex state machines ⇒ Possibly suitable for generic devices managed by some other component ❖ Or save high-level 
configuration and reinitialize devices based on that ➢ Less information to maintain ⇒ faster save/restore ➢ Resume similar to first initialization ⇒ using same code paths as initial boot ⇒ Approach for all devices managed by the NOVA microhypervisor Performance: ACPI P-States on x86 $P_{\text{dyn}} = CfV^2$ - Dynamic Voltage and Frequency Scaling - Unused thermal and power headroom of idle/offline cores given to active cores - HW-Controlled Turbo Bins - HW-Controlled or OS-Controlled P-States [Figure: per-core P-state ladder from P0 (HFM, max turbo frequency) down through P1...Pn (LFM, maximum efficiency); turbo bins above the guaranteed frequency are HW-controlled, P-states below it are HW- or OS-controlled; one active core boosts while three idle cores remain in low P-states] ## Power Management: Feature Comparison <table> <thead> <tr> <th>Feature</th> <th>x86_64</th> <th>aarch64</th> </tr> </thead> <tbody> <tr> <td>P-State Support</td> <td>EIST (older) and HWP (SKL+)</td> <td>Not Yet</td> </tr> <tr> <td>S1</td> <td>Supported</td> <td>N/A</td> </tr> <tr> <td>S2/S3</td> <td>Supported</td> <td>Supported (PSCI 1.0+ Optional)</td> </tr> <tr> <td>S4/S5</td> <td>Supported</td> <td>Supported (PSCI 0.2+ Mandatory)</td> </tr> <tr> <td>Platform Reset</td> <td>Supported</td> <td>Supported (PSCI 0.2+ Mandatory)</td> </tr> <tr> <td>S0ix (Low-Power Idle)</td> <td></td> <td>Not Yet</td> </tr> <tr> <td>Hypercall <code>ctrl_hw</code></td> <td></td> <td>Returns <code>BAD_FTR</code> for unsupported functionality</td> </tr> <tr> <td>Hypercall <code>assign_dev</code></td> <td></td> <td>Device assignment preserved over Suspend/Resume</td> </tr> <tr> <td>Hypercall <code>assign_int</code></td> <td></td> <td>Interrupt configuration preserved over Suspend/Resume</td> </tr> </tbody> </table> Past: Singleton Spaces built into Protection Domain - **Host EC**: Thread - **Guest EC**: Virtual CPU - **Device**: BDF or SID 1.
**create_pd** 2. **create_ec** 3. **assign_dev** - **Object Space**: Object Capability Table - **Host Space**: Stage-1 Page Table - **Guest Space**: Stage-2 Page Table - **DMA Space**: IOMMU Page Table - **PIO Space**: I/O Permission Bitmap - **MSR Space**: MSR Permission Bitmap Protection Domain - Suboptimal for Nested Virtualization, SMMU Virtualization, Advanced VMI Multiple Spaces separated from Protection Domain - **Host EC** - Thread - **Object Space** - Object Capability Table - **Host Space** - Stage-1 Page Table - **Guest Space** - Stage-2 Page Table - **DMA Space** - IOMMU Page Table - **PIO Space** - I/O Permission Bitmap - **MSR Space** - MSR Permission Bitmap - **Protection Domain** - Slab Allocators - **3. create_ec** - Permanent Binding - **2. create_pd** - Flexible Assignment - **1. create_pd** 6 new kernel object types Multiple Spaces separated from Protection Domain 1. create_pd 2. create_pd 3. create_ec 4. ipc_reply (with MTD.SPACES set) Flexible reassignment of guest spaces upon events Host Space - Stage-1 Page Table Guest Space - Stage-2 Page Table DMA Space - IOMMU Page Table PIO Space - I/O Permission Bitmap MSR Space - MSR Permission Bitmap Protection Domain - Slab Allocators Object Space - Object Capability Table Guest EC - Virtual CPU SEL - GST - PIO - MSR x86 Only - Permanent Binding - Flexible Assignment Multiple Spaces separated from Protection Domain 1. create_pd 2. create_pd 3. 
assign_dev - Object Space: Object Capability Table - Host Space: Stage-1 Page Table - Guest Space: Stage-2 Page Table - DMA Space: IOMMU Page Table - PIO Space: I/O Permission Bitmap - MSR Space: MSR Permission Bitmap Flexible reassignment of DMA spaces anytime Permanent Binding Flexible Assignment Device BDF or SID Generic Page Tables - NOVA manages 3 Page-Table Types for each Architecture - Host (Stage-1), Guest (Stage-2), DMA (IOMMU/SMMU) - These correspond to the 3 memory spaces - Architecture-Independent Template Base Class - Lock-less page-table traversal, (de)allocation and PTE updates - Substitution of large pages with page tables and vice versa - Type-Specific Subclasses (CRTP) - Encode access permissions and memory attributes (cacheability, shareability, key index) - Ensure non-coherent agents observe updates in the correct order Page Tables: 39-bit Address Space (3-Level) Input Address HVA or GPA Level 2 9 Bits Page Table 512 Entries Level 1 9 Bits Page Table 512 Entries Level 0 9 Bits Page Table 512 Entries Offset 12 Bits 63:39 38:30 29:21 20:12 11:0 Root Ptr Page Frame 4096 Bytes 1 GiB 2 MiB 4 KiB Applies to ➢ aarch64 ➢ x86_64 NOVA Microhypervisor Udo Steinberg - FOSDEM 2023 Page Tables: 40-bit Address Space (Concatenated) - **Input Address**: HVA or GPA - **Level 2**: 10 Bits - **Level 1**: 9 Bits - **Level 0**: 9 Bits - **Offset**: 12 Bits - **Page**: Frame 4096 Bytes - **Page Table**: 512 Entries - **Root Ptr**: L2 resolves 1 extra bit - **Page Table**: 1024 Entries - **Page Frame**: 4096 Bytes - **L2 Page Table is 8 KiB** - **Applies to**: aarch64 only Page Tables: 41-bit Address Space (Concatenated) Input Address - HVA or GPA Offset - 12 Bits Level 2 - 11 Bits Level 1 - 9 Bits Level 0 - 9 Bits Page Frame - 4096 Bytes Root Ptr Page Table - 2048 Entries Page Table - 512 Entries Page Table - 512 Entries L2 resolves 2 extra bits L2 Page Table is 16 KiB Applies to - aarch64 only NOVA Microhypervisor Udo Steinberg - FOSDEM 2023 Page Tables: 48-bit 
Address Space (4-Level) ### Input Address - HVA or GPA <table> <thead> <tr> <th>Level</th> <th>Index Bits</th> <th>Leaf Size</th> </tr> </thead> <tbody> <tr> <td>3</td> <td>47:39</td> <td>512 GiB</td> </tr> <tr> <td>2</td> <td>38:30</td> <td>1 GiB</td> </tr> <tr> <td>1</td> <td>29:21</td> <td>2 MiB</td> </tr> <tr> <td>0</td> <td>20:12</td> <td>4 KiB</td> </tr> <tr> <td>Offset</td> <td>11:0</td> <td></td> </tr> </tbody> </table> - **Root Ptr** points to the Level-3 page table - **Page Table**: 512 Entries per level - **Page Frame**: 4096 Bytes Applies to - aarch64 - x86_64 Page Tables: Level-Indexing Configuration ### x86_64 <table> <thead> <tr> <th>Bits</th> <th>L4</th> <th>L3</th> <th>L2</th> <th>L1</th> <th>L0</th> <th>OFF</th> </tr> </thead> <tbody> <tr> <td>57</td> <td>9</td> <td>9</td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>48</td> <td></td> <td>9</td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>39</td> <td></td> <td></td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> </tbody> </table> ### aarch64 <table> <thead> <tr> <th>Bits</th> <th>L3</th> <th>L2</th> <th>L1</th> <th>L0</th> <th>OFF</th> </tr> </thead> <tbody> <tr> <td>52</td> <td>4+9</td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>48</td> <td>9</td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>44</td> <td>5</td> <td>9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>42</td> <td></td> <td>3+9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>40</td> <td></td> <td>1+9</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>36</td> <td></td> <td>6</td> <td>9</td> <td>9</td> <td>12</td> </tr> <tr> <td>32</td> <td></td> <td></td> <td>2+9</td> <td>9</td> <td>12</td> </tr> </tbody> </table> - **NOVA derives from input bits** - Uniform index bits per level - Non-uniform concatenation at top level - Number of required page-table levels Multi-Key Total Memory Encryption (TME-MK) - KeyID per page encoded in PTE - Stealing upper
physical bits <table> <thead> <tr> <th>Unused</th> <th>KeyID</th> <th>Physical Address</th> <th>Attributes</th> </tr> </thead> <tbody> <tr> <td>Unused</td> <td>KeyID</td> <td>Physical Address</td> <td>Attributes</td> </tr> <tr> <td>Unused</td> <td>KeyID</td> <td>Physical Address</td> <td>Attributes</td> </tr> </tbody> </table> - Key Programming - random/tenant - DRNG entropy Key0 - FW TME Key Key1 - AES-XTS-128 Key2 - AES-XTS-256 Key3 - AES-XTS-256 Key4 - AES-XTS-128 Key5 - AES-XTS-128 Protecting against “Noisy Neighbor” Domains [Diagram showing a microhypervisor architecture with VMs, cores, caches, and shared resources.] Cache Allocation Technology (CAT) - Limited Number of Classes of Service (COS) - Hypervisor assigns COS to all cache-sharing entities (VMs/Processes/Threads) - Capacity Bitmask per COS - Specifies into which ways of the cache that COS can allocate - Can be defined per CPU and separately for L2/L3 caches - Bitmasks must be contiguous, but can overlap - Active COS per CPU set in IA32_PQR_ASSOC MSR - Improves Predictability (WCET) and Cache Side-Channel Resistance Cache Allocation Technology (CAT): Example Full Isolation / No Capacity Sharing Code and Data Prioritization (CDP) - Capacity bitmasks redefined as pairs - Even bitmask number ⇒ data - Odd bitmask number ⇒ code - More fine-grain control over how the cache is being used - Total number of classes of service (COS) cut in half - 6 COS for CAT ⇒ 3 COS for CDP - NOVA API enforces selection of CAT vs. 
CDP mode for L2/L3 cache - QoS configuration required before capacity bitmasks can be programmed - Mode cannot be changed subsequently Code and Data Prioritization (CDP): Example Competitive Capacity Sharing Exclusive Use - COS 0 - D: 35% - C: 25% - COS 1 - D: 50% - C: 15% - COS 2 - D: 20% - C: 30% - Exclusive Use Class of Service (COS) - Option 1: Assign COS to Protection Domain (PD) - For PDs that span multiple cores, COS settings must be consistent across cores - Option 2: Assign COS to Execution Context (EC) - Expensive to set new COS during each context switch (WRMSR) - In NOVA: COS is a Scheduling Context (SC) attribute - Only need to context-switch COS during scheduler invocations - Extends the call-chain time/priority donation model with COS donation - Servers use the cache capacity allocated to their respective clients - Additional benefit: COS settings can be configured differently on each core Code Integrity Protection - Long history of paging features raising the bar for code injection attacks - Non-writable code / Non-executable stack (W^X) - Supervisor Mode Execution Prevention (SMEP) - Supervisor Mode Access Prevention (SMAP) - Mode-Based Execution Control (MBEC) for Stage-2 with XU/XS permission bits - Code snippets (gadgets) in existing code could still be chained together - Control-Flow Hijacking: COP / JOP / ROP attacks - Instruction length is fixed on ARM but varies on x86 Control-Flow Enforcement Technology (CET) ❖ Protects integrity of control-flow graph using x86 hardware features ❖ Indirect Branch Tracking (Forward-Edge) ➢ Used with indirect JMP / CALL instructions ➢ Valid branch targets must be marked with ENDBR instruction ➢ Requires compiler support (available since gcc-8) ❖ Shadow Stacks (Backward-Edge) ➢ Used with CALL / RET instructions ➢ Second stack used exclusively for return addresses ➢ Can only be written by control-transfer and shadow-stack-management instructions CALL / JMP Instruction - Next instruction must be ENDBR - #CP 
exception otherwise.

CALL instruction
- Pushes the return address onto both stacks

RET instruction
- Pops the return address from both stacks
- #CP exception if the addresses are not equal

Shadow Stack Management
- A busy bit in the token prevents multi-activation
- NOVA must unwind the supervisor shadow stack during context switches

Managing the ISA Evolution
- Modern CPU features require new instructions that fault on older CPUs
  - Example: the shadow-stack management instructions
- Newer CPUs provide instructions that do more/optimized work
  - Example: XSAVE / XSAVEC / XSAVEOPT / XSAVES (extended state management)
- Code using newer instructions will fault on older CPUs
- Possible options:
  - Compile different binaries with/without those instructions (compile-time)
  - Select the most suitable instruction (each time) at run time (run-time)
  - Install the most suitable instruction (once) at boot time (boot-time)

Boot-Time Code Patching
- Install the optimal instruction (sequence) once at boot time
  - asm ("xsaves %0" : "=m" (fpu_state) ...); ⇒ asm ("xsave %0" : "=m" (fpu_state) ...);
  - asm ("setssbsy"); ⇒ asm ("nop");
- Early feature checking determines which patches to apply
  - Patches only low-level assembler instructions (newer CPUs ⇒ less patching)
  - No need to patch any high-level C++ code ⇒ it compiles to innocuous instructions
  - No need for repeated run-time feature tests ⇒ no extra overhead
- One generic binary that works across a wide range of hardware platforms
  - Automatically adjusts to the supported platform features

Round-Trip IPC Performance (with CET)

**NUC12WS** (Intel® Core™ i7-1270P GLC / 2496 MHz TSC)
- Same PD: TSC ticks: 217; B: 87 ns, +13%; R: 246, +59%; F: 346, +67%
- Different PDs: TSC ticks: 536; B: 215 ns, +5%; R: 563, +22%; F: 658, +26%

**NUC11TN** (Intel® Core™ i7-1185G7 WLC / 2995 MHz TSC)
- Same PD: TSC ticks: 229; B: 77 ns, +15%; R: 265, +60%; F: 367, +73%
- Different PDs: TSC ticks: 584; B: 195 ns, +6%; R: 621, +23%; F: 721, +29%

---

**Release 23.08**: CFP=none, CFP=branch, CFP=return, CFP=full

**Bedrock Systems Inc.**

NOVA Microhypervisor
Udo Steinberg, FOSDEM 2023

Questions and Discussion

The NOVA microhypervisor is licensed under GPLv2.
Releases: https://github.com/udosteinberg/NOVA/tags
More Information: bedrocksystems.com and hypervisor.org
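The "contiguous but may overlap" rule for CAT capacity bitmasks described in the slides above is easy to check in software. The following sketch (illustrative Python with made-up mask values; real systems program the masks via the per-COS mask MSRs and select the active COS through IA32_PQR_ASSOC) validates a set of overlapping but contiguous bitmasks for a hypothetical 12-way cache:

```python
def is_valid_cbm(mask: int, num_ways: int) -> bool:
    """A CAT capacity bitmask must be non-zero, fit the number of ways,
    and have all of its set bits in one contiguous run."""
    if mask == 0 or mask >= (1 << num_ways):
        return False
    while mask & 1 == 0:      # strip trailing zeros
        mask >>= 1
    return (mask & (mask + 1)) == 0  # remaining bits must be all-ones

# Hypothetical 12-way cache split across three classes of service.
# Masks may overlap (COS 1 and COS 2 share ways 4-5), but each must be contiguous.
cos_masks = {
    0: 0b000000001111,  # COS 0: ways 0-3 (exclusive)
    1: 0b000000110000,  # COS 1: ways 4-5
    2: 0b111111110000,  # COS 2: ways 4-11 (overlaps COS 1)
}
assert all(is_valid_cbm(m, 12) for m in cos_masks.values())
assert not is_valid_cbm(0b010100000000, 12)  # non-contiguous: rejected
```

On real hardware, writing a mask that violates these constraints to a mask MSR is rejected, which is why a hypervisor must validate COS configurations up front.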
Software Clustering Using Bunch

Software Clustering Environment

The Structure of a Graphical Editor

**The Software Clustering Problem**
- **“Find a good partition of the software structure.”**
- A **partition** is the decomposition of a set of elements (i.e., all the nodes of the graph) into mutually disjoint clusters.
- A **good partition** is a partition where:
  - highly interdependent nodes are grouped in the same clusters
  - independent nodes are assigned to separate clusters

**Not all Partitions are Created Equal ...**

**Our Assumption**
- **“Well designed software systems are organized into cohesive clusters that are loosely interconnected.”**

**Problem:** There are too many partitions of the MDG...

The number of MDG partitions grows very quickly as the number of modules in the system increases:

\[ S_{n,k} = \begin{cases} 1 & \text{if } k = 1 \vee k = n \\ S_{n-1,k-1} + kS_{n-1,k} & \text{otherwise} \end{cases} \]

[Table of \(S_{n,k}\) values for small \(n\) and \(k\), illustrating the combinatorial explosion.]

**A 15-module system is about the limit for performing exhaustive analysis.**

---

**Edge Types**
- With respect to each cluster, there are two different kinds of edges:
  - \(\mu\) edges (intra-edges), which start and end within the same cluster
  - \(\epsilon\) edges (inter-edges), which start and end in different clusters

---

**A Partition of the Structure of the Graphical Editor**

Distributed Objects

The Module Dependency Graph (MDG)

We represent the structure of the system as a graph, called the MDG.
- The nodes are the modules/classes.
- The edges are the relations between the modules/classes.

Example: The Structure of the *dot* System

Dot is a reasonably small system, but its structure is complex and hard to understand... The clustered view highlights the identification of "special" modules (relations are hidden to improve clarity).

The Partitioned Module Dependency Graph (MDG)

The partitioned MDG contains clusters that group related nodes from the MDG.
- The clusters represent the subsystems.
- The subsystems highlight high-level features or services provided by the modules in the cluster.

The clustered view of dot (its partitioned MDG) is easier to understand than the unclustered view...

Step 1: Creating the MDG
1. The MDG can be generated automatically using source code analysis tools.
2. Nodes are the modules/classes; edges represent source-code relations.
3. Edge weights can be established in many ways, and different MDGs can be created depending on the types of relations considered.
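The growth behind the "15-module limit" quoted above can be made concrete with a few lines of Python. This is an illustrative computation (not part of the original slides) of \(S(n,k)\) and the total search-space size \(\sum_k S(n,k)\), the Bell number:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """S(n,k): number of ways to partition n modules into exactly k clusters,
    using the slide's recurrence S(n,k) = S(n-1,k-1) + k*S(n-1,k)."""
    if k == 1 or k == n:
        return 1
    if k < 1 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def total_partitions(n: int) -> int:
    """Size of the whole search space: sum over k of S(n,k) (the Bell number)."""
    return sum(stirling2(n, k) for k in range(1, n + 1))

for n in (5, 10, 15, 20):
    print(n, total_partitions(n))
```

Already at n = 15 the search space exceeds 1.3 billion partitions, which matches the slide's claim about the practical limit of exhaustive analysis.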
Example: The MDG for Apache’s Regular Expression class library

Source Code Analysis Tools: Acacia, Java

Automatic MDG Generation
- We provide scripts on the SERG web page to create MDGs automatically for systems developed in C, C++, and Java.
- The MDG eliminates details associated with particular programming languages.

Why do we want to Cluster?
- To group components of a system.
- Grouping gives some semantic information.
- We find out which components are strongly related to each other.
- We find out which components are not so strongly connected to each other.
- This gives us an idea of the logical subsystem structure of the entire system.

Reverse Engineering (Bunch) © SERG

**Clustering useful in practice?**
- Documentation may not exist.
- As software grows, the documentation doesn’t always keep up to date.
- There are constant changes in who uses and develops a system.

**Our Approach to Automatic Clustering**
- “Treat automatic clustering as a searching problem.”
- Maximize an objective function that formally quantifies the “quality” of an MDG partition.
- We refer to the value of the objective function as the modularization quality (MQ).
- MQ is a measurement and not a metric.

**Why automatic clustering?**
- Experts are not always available.
- Gives the previously mentioned benefits without costing much.

How does clustering help?
- Helps create models of software in the absence of experts.
- Gives new developers a ‘road map’ of the software’s structure.
- Helps developers understand legacy systems.
- Lets developers compare documented structure with some ‘inherent’ structure of the system.

Module Dependency Graph
- Represents the dependencies between the components of a system.
- Nodes are components:
  - classes, modules, files, packages, functions…
- Edges are relationships between components:
  - inherit, call, instantiate, etc…

Software Clustering Problem
- For a given MDG, and a function that determines the quality of a partitioning of that MDG, find the best partition.

Possible Answer?
- We could look at every possible partition...
- Exhaustive search always gives the best answer.
- The number of partitions of a set is given by the Stirling recurrence:

\[ S(n, k) = \begin{cases} 1 & \text{if } k = 1 \text{ or } k = n \\ S(n - 1, k - 1) + k S(n - 1, k) & \text{otherwise} \end{cases} \]

Exhaustive Search...
- If exhaustive search is optimal, then why not use it?
- The Stirling recurrence grows very rapidly.
- The search for a partitioning of \( n \) nodes must go through partitions for all \( k \), so the size of the search space is
\[ \sum_{k=1}^{n} S(n, k) \]
which is enormous.

Other Answers
- If we don’t use exhaustive search, then we have to use a sub-optimal strategy.
- Just because a strategy is sub-optimal, though, doesn’t mean it won’t give good results.

Hill-Climbing
- Hill-climbing refers to how to travel along the contour of the solution space.
- The idea is to find a starting position, and then move to a better state.
- What if there is more than one state that we could move to?
- What happens when we find a local maximum?
- Can we do anything about getting a bad start?

**Bunch Hill Climbing Clustering Algorithm**
- Generate a random decomposition of the MDG
- Iteration step: generate the next neighbor, measure its MQ, and compare it to the best neighboring partition so far. Better?
- Best neighboring partition for the iteration
- Repeat until convergence

---

**Hill-Climbing Algorithm Features**
We have implemented a family of hill-climbing algorithms:
- Control the “steepness” of the climb
- Simulated Annealing
- Population Size

---

**Genetic Algorithms**
- Genetic algorithms take a problem, and model solution finding after nature:
  - Selection
  - Crossovers
  - Mutations
- Genetic algorithms tend not to get stuck at local maxima.

**Bunch Genetic Clustering Algorithm (GA)**
- Generate a starting population from the MDG
- Iteration step: current population → mutation and crossover operations → next population
- Best partition taken from the final population once all generations are processed

**Tools for Clustering**
- **Bunch** – Automatic Clustering
  - Performs automatic clustering.
  - Automatically produced results are often suboptimal.
  - Allows user-directed clustering: users can help the search for a good clustering with their knowledge.

**Using Bunch...**
- Bunch is a tool written in Java that will work on any machine supporting the JVM.
- The first step in using Bunch will be finding out how to make those MDG files mentioned earlier.

Creating mdg files...
- To create an MDG, we use the mdg script:
  - Used to create mdg files for Java, C, and C++.
  - Can view various relations (extends, implements, method-variable, method-method).
  - Can also give weights to each relation, along with providing its type.

Syntax of mdg
```
mdg --(java|c|c++) --(eivmA)+ --(wltA)*
```
- The first set of parameters represents the language to use.
- The second set of parameters describes the types of relationships to display.
- The last set determines what extra information to produce.

Syntax of bmdg
```
bmdg -f <file> -i -a -v
```
- The first parameter specifies the XML file created by bat during source code analysis.
- The -i (optional) parameter includes the “class A subclasses or implements class B” relations in the MDG.
- The -a (optional) parameter includes the “class A calls method in class B” and “class A accesses public variable in class B” relations in the MDG.
- The -v (optional) parameter includes the “class A has variables of type class B” relations in the MDG.

**MDG Results**
- The mdg script produces a series of pairs that represent directed edges.
- These edges can carry extra information, such as weight and type, if desired.

**Bunch Features**
- Bunch is a tool to perform automatic clustering of module dependency graphs.
- Has various algorithms for finding a good solution.
- Allows users to assist in automatic clustering.
- Provides methods to exclude omnipresent libraries and consumers from clustering (yields a more succinct clustering).

**The Bunch Algorithms**
- Provides support for exhaustive search.
- Has hill-climbing algorithms:
  - Steepest Ascent (greedy)
  - Nearest Ascent
- A genetic algorithm is also available.

Hill-Climbing Algorithms
- Hill-climbing algorithms get their name because they ‘climb’ to the top of ‘the hills’ of a solution space.
  - Given the current solution, and some transition function, find some better solution.
  - Sub-optimal: can get caught on one of ‘the smaller hills’, a.k.a. a local maximum.

Steepest Ascent
- Steepest Ascent is a ‘greedy’ algorithm.
  - Greedy algorithms take the most of what they want whenever possible.
  - Steepest Ascent computes all the states it can transition to, and takes the best one.
  - Greedy algorithms aren’t always optimal overall, even though they take the optimal step at each state.

Nearest Ascent
- Nearest Ascent is a non-greedy hill-climbing algorithm.
  - Does not compute the set of all possible states to transition to.
  - Computes states until a better one is found.
  - Much faster in practice than Steepest Ascent.

Helping Hill-Climbers
- Hill-climbing algorithms are inherently sub-optimal.
- They still tend to give good results, though.
- Even with good results, there are ways of improving the quality of the results without that much effort (relatively).

Bunch Clustering
- Bunch, as it may already seem, views clustering as an optimization problem.
- The Bunch objective function trades off between two things:
  - cohesiveness of clusters (how intra-related is a cluster?)
  - coupling of clusters (how inter-related are clusters?)

Random Restart
- One way to help hill-climbing algorithms is to restart the algorithm at some random state after finding a local maximum.
- The idea is to sample various local maxima.
- After finding a set of solutions, return the best one.
- The cost isn’t really that much in comparison to the cost in terms of system size.

Simulated Annealing
- Simulated annealing is another technique used to assist hill-climbing algorithms.
- Occasionally take a transition that is not an increase in quality.
- The hope is that this will lead to climbing a better hill.

Genetic Algorithms
- A totally different approach is to use genetic algorithms.
- Genetic algorithms tend to converge to a solution quickly when the solution space is small relative to the search space.
- Genetic algorithms tend to provide good results.
- Genetic algorithms don’t get stuck at local maxima.

Genetic Algorithms...
- Genetic algorithms work by representing each state as a string, and then doing ‘genetic manipulation’ on a ‘generation’ in ways analogous to nature:
  - crossing over
  - mutations
  - selection

**Measuring MQ – Step 1: The Cluster Factor**

The Cluster Factor for cluster \( i \), \( CF_i \), is:

\[ CF_i = \begin{cases} 0 & \text{if } \mu_i = 0 \\ \frac{2\mu_i}{2\mu_i + \sum_{j \neq i} (\epsilon_{ij} + \epsilon_{ji})} & \text{otherwise} \end{cases} \]

\( CF \) increases as the cluster’s cohesiveness increases.

---

**Modularization Quality (MQ):**

\[ MQ = \sum_{i=1}^{k} CF_i \]

- \( k \) represents the number of clusters in the current partition of the MDG.
- Modularization Quality (MQ) is a measurement of the “quality” of a particular MDG partition.

---

**Modularization Quality (MQ):**
- We have implemented a family of MQ functions.
- MQ should support MDGs with edge weights.
- Faster than the older MQ (Basic MQ):
  - TurboMQ \( \approx O(|V|) \)
  - ITurboMQ \( \approx O(1) \)
  - ITurboMQ incrementally updates, instead of recalculating, the \( CF_i \).

The Clustering Problem: Algorithm Objectives
“Find a good partition of the MDG.”
- A partition is the decomposition of a set of elements (i.e., all the nodes of the graph) into mutually disjoint clusters.
- A good partition is a partition where:
  - highly interdependent nodes are grouped in the same clusters
  - independent nodes are assigned to separate clusters
- The better the partition, the higher the MQ.

The Bunch Tool: Automatic Clustering

Starting Bunch
- Now that we know some of the stuff that Bunch does ‘under the hood,’ let’s start to use it.
- Bunch is a Java program held in a jar file. To run it, type the following: `java -classpath Bunch.jar bunch.Bunch`

Basic Controls Description
- Input Graph File: the MDG to be clustered.
- Clustering Method: which algorithm to use (hill-climbing, genetic, etc.).
- Output Cluster File: where to put the results.
- Output File Format: Dotty? Text?

Clustering Actions
- Agglomerative Clustering: Bunch will cluster until the system is together in one large cluster.
- User-Driven Clustering: here, the user clicks ‘Generate Next Level’ to proceed with each level of clustering.

Clustering Options Description
- Clustering Algorithm: which ‘clustering’ algorithm to use.
- Graph File Delimiters: specify delimiting characters in MDG files.
- Limiting Runtime: used to limit the amount of time Bunch will run.
- Agglomerative Output Options: what type of output for agglomerative clustering.
- Generate Tree Format: show the clustering hierarchy.

Libraries
- Some modules are just libraries used by the system.
- These don’t add any semantics to the system.
- While being part of what makes up the system, they are just small tools that the designers decided to use.

Omni-Present Modules
- Some systems have internal modules that are used across a large portion of the system.
- Including such modules in a decomposition is pointless.
- Bunch gives the option of removing such modules from the decomposition.

- Sometimes there will exist *a priori* knowledge about a system.
- Such knowledge, when used as constraints on a decomposition, makes for a better decomposition.
- Bunch allows pre-cluster grouping of modules.

Results...

**System Evolution**
- Systems change constantly.
- New developers add to the chaos of maintenance.
- New features do, too.

**Out with the old?**
- Even though systems change, out-of-date decompositions don’t have to be thrown away.
- There may still be some relations that the decomposition represents.
- An old decomposition may be better than some random start state for clustering a system.

**Orphan Adoption**
- When a change is applied to one module of a system, the amount of change is relatively small.
- We can take the module that changed and see how it fits in with every cluster.
- We can also see how good a system we get when it is left alone.
- Taking the best of these solutions is how Bunch handles this.

Hierarchical Clustering (1): Tree View
Hierarchical Clustering (2): Standard View
[Screenshots of the clustering hierarchy at levels 1 to 4, with a default level.]

Distributed Clustering
- Distributed clustering allows large MDGs to be clustered faster...
- Components: Bunch User Interface (BUI), Bunch Clustering Service (BCS), Neighboring Servers (NS)
Distributed Clustering Features
- Multiple clustering algorithms
- Heterogeneous platforms
- Adaptive load balancing

‘Good’ Clusterings
- Even with all these features, only the exhaustive search gives us the optimal clustering.
- How do we know how good a clustering we have?
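One concrete way to answer that question is to compute MQ directly for a candidate partition. The sketch below is a toy reimplementation (not Bunch's actual code) of the cluster-factor measurement from the earlier slides, plus a minimal single-node-move hill climber:

```python
import random

def mq(partition, edges):
    """TurboMQ-style score: sum over clusters of CF_i = 2*mu_i / (2*mu_i + inter_i),
    where mu_i is intra-cluster edge weight and inter_i is weight crossing cluster i."""
    cluster_of = {n: i for i, cluster in enumerate(partition) for n in cluster}
    mu = [0.0] * len(partition)   # intra-edge weight per cluster
    eps = [0.0] * len(partition)  # inter-edge weight touching each cluster
    for (a, b), w in edges.items():
        ca, cb = cluster_of[a], cluster_of[b]
        if ca == cb:
            mu[ca] += w
        else:
            eps[ca] += w
            eps[cb] += w
    return sum(2 * m / (2 * m + e) for m, e in zip(mu, eps) if m > 0)

def hill_climb(nodes, edges, k=2, seed=0):
    """Move single nodes between clusters, keeping any move that raises MQ."""
    rng = random.Random(seed)
    part = [set() for _ in range(k)]
    for n in nodes:
        part[rng.randrange(k)].add(n)
    best, improved = mq(part, edges), True
    while improved:
        improved = False
        for n in nodes:
            src = next(i for i, c in enumerate(part) if n in c)
            for dst in range(k):
                if dst == src:
                    continue
                part[src].remove(n)
                part[dst].add(n)
                score = mq(part, edges)
                if score > best:
                    best, src, improved = score, dst, True
                else:                      # undo the move
                    part[dst].remove(n)
                    part[src].add(n)
    return part, best

# Two triangles joined by one bridge edge: the best 2-way partition separates
# them, giving CF = 6/7 per cluster and MQ = 12/7.
example_edges = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 1,
                 ("x", "y"): 1, ("y", "z"): 1, ("x", "z"): 1, ("c", "x"): 1}
print(hill_climb(list("abcxyz"), example_edges, k=2, seed=3))
```

A random-restart wrapper (re-running `hill_climb` with different seeds and keeping the best result) is exactly the "random restart" idea from the earlier slides.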
# CONTENTS

<table> <thead> <tr> <th>Section</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>Author</td> <td>3</td> </tr> <tr> <td>Introduction</td> <td>4</td> </tr> <tr> <td>Modern Exploitation</td> <td>6</td> </tr> <tr> <td>Advanced Exploitation</td> <td>8</td> </tr> <tr> <td>Future Exploit Techniques</td> <td>12</td> </tr> <tr> <td>Conclusions</td> <td>13</td> </tr> </tbody> </table>

AUTHOR

Aaron is a security researcher at NCC Group, where he researches exploit development techniques and builds tools to assist internal consultants. He has experience with exploit development on numerous platforms. Prior to NCC Group, he did mobile security research at BlackBerry, and threat analysis and reverse engineering for Symantec.

INTRODUCTION

For over forty years the computer industry has been engaged in a cat-and-mouse game of defensive and offensive techniques and countermeasures. Traditionally, the offensive side almost always has a technological and time advantage. Exploits are among the primary tools of the offensive side. An exploit is typically a piece of software, or some logic used by an attacker, which takes advantage of a bug or behaviour in the targeted software or hardware. Use of the exploit allows the target to be manipulated in ways unintended by the designer. This manipulation can in turn allow security bypasses, such as executing arbitrary code when only strict program interaction was intended, or extracting sensitive data without authentication.

A person writing an exploit only needs to spend as much time as it takes to find one way in through the various defences in place, whereas a person writing the defence has to spend as much time as it takes to think of every possible logical way around what they are building. This gave exploit writers an edge for a very long time, as writing exploits was often not as complex as one might expect; it is, however, always becoming more difficult.
This arms race exists in all facets of technology, from hardware to software, from C to Python, but the fiercest competition continues in memory-corruption exploitation, typically against software written in C and C++.

Historically, exploit writers and offensive researchers have tended to be aware of a significant number of techniques that could be used to overcome future defence technology and mitigations. With many of the waves of defensive mitigations introduced by operating systems, exploit writers immediately knew how to overcome the defences without even doing any new research. The tricks to overcome the defences might have been considered advanced when first discovered, but once the mitigation was in place, the technique would quickly become the norm and well understood by many.

In the last decade and a half, we have seen a significant shift in the defensive realm, with the introduction of many mitigations into mainstream compilers and operating systems, and into their services and applications. This increase in defences has led exploit writers to start leveraging new techniques, along with many that were previously known but considered advanced and unnecessary, in order to achieve a successful compromise.

The defences that attackers will now, depending on the scenario, routinely bypass, circumvent, or purposefully avoid dealing with during exploitation include:

- Address space layout randomisation (ASLR)
- Non-executable memory
- Executable but non-readable memory
- Stack cookies and variable reordering
- Heap metadata hardening
- Heap layout randomisation
- Delayed heap freeing
- Object allocation partitioning
- Exception handling: SafeSEH, SEHOP
- Pointer encoding
- Sandboxing
- Input filtering
- Supervisor Mode Execution Prevention
- Vtable integrity
- Control flow guard

Not all of these mitigations have an easy workaround for an attacker, so the situations in which they would pose a problem are often simply avoided.
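As a toy model of one mitigation in the list above (simplified and hypothetical; real cookies are emitted by the compiler into the function prologue and epilogue), a stack cookie is a random canary placed between a local buffer and the saved return address, so an overflow long enough to reach the return address must corrupt the canary first:

```python
import os

def vulnerable_copy(buf_size: int, payload: bytes) -> str:
    """Model a stack frame as [buffer | canary | return address] and perform
    an unchecked copy into the buffer, as a stack-based overflow would."""
    canary = os.urandom(8)                      # fresh random cookie per call
    frame = bytearray(buf_size) + bytearray(canary) + bytearray(b"RETADDR!")
    frame[:len(payload)] = payload              # no bounds check on the copy
    if bytes(frame[buf_size:buf_size + 8]) != canary:
        return "stack smashing detected"        # epilogue cookie check fails
    return "ok"

print(vulnerable_copy(16, b"A" * 16))   # payload fits in the buffer
print(vulnerable_copy(16, b"A" * 32))   # payload reaches the return address
```

Without a way to learn the canary's value, an attacker cannot place the correct bytes at the right offset, which is exactly why the following discussion ties this mitigation to the need for an information leak.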
Take stack cookies and variable reordering as a prime example. Without an information leak (a way to retrieve information from the target process, either locally or over a network, before exploitation), these mitigations can make many stack overflows difficult, if not impossible, to exploit. Following the path of least resistance, attackers instead spend their time exploiting other bug classes, such as type confusion, heap-based buffer overflows, or use-after-frees.

At one point it was considered advanced to understand these defensive technologies and be able to defeat, circumvent, or avoid them while still leveraging a bug. As more and more exploit techniques are used routinely, they start to lose their advanced status, and new, more esoteric tricks take their place, as we see with many of the zero-day exploits discovered recently.

## Modern Exploitation

What many people think of as advanced exploitation techniques have in fact often been known, and even practically deployed, for over a decade. The differentiating factor is that the techniques weren't necessary for everyday exploitation scenarios. A variety of techniques have moved from the advanced category to the normal category, to the point where we see them in almost every major attack.

### Information leaks

It is largely accepted by exploit writers that information leaks have become the most important part of a successful attack; some have even dubbed this the information leak era. In the past, because of failures to implement ASLR effectively, there was often no need for an information leak to achieve successful exploitation. Now that ASLR'd processes, especially on 64-bit operating systems, have reasonably random memory layouts, exploit writers must resort to leaking addresses from the process to inform the rest of the exploit. Almost every major memory-based exploit now relies on some form of information leak.
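The arithmetic behind turning a leak into a defeated ASLR is simple: leak any pointer into a module, subtract that symbol's known offset within the module, and every other address in the module follows, since ASLR randomises only the base. A minimal sketch of that rebasing step, with made-up illustrative offsets rather than values from any real binary:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical offsets an attacker would read from their own copy of
 * the target binary; only the load base is randomised by ASLR. */
#define OFFSET_LEAKED_FUNC  0x1540ULL  /* offset of the leaked pointer's target */
#define OFFSET_TARGET_FUNC  0x4f550ULL /* offset of a function the exploit wants */

/* Recover the randomised module base from one leaked code pointer. */
uintptr_t module_base(uintptr_t leaked_ptr) {
    return leaked_ptr - OFFSET_LEAKED_FUNC;
}

/* Rebase any other offset of interest against the recovered base. */
uintptr_t rebase(uintptr_t base, uintptr_t offset) {
    return base + offset;
}
```

This is why a single leaked pointer is usually enough to locate gadgets for an entire ROP chain.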
ASLR is not the only reason one might need an information leak. You might need one for finding executable modules in memory to facilitate return oriented programming (ROP), determining the layout of objects in memory to know where to corrupt, reading secret cookie values to bypass mitigations, finding key data structures to continue a process cleanly after exploitation has completed, and many more reasons. These leaks are typically used in two ways:

1. An exploit writer leverages the original vulnerability they want to exploit in order to build what is called a leak primitive. This will often involve triggering the vulnerability many times in order to leak various areas of memory.
2. A separate vulnerability is used for the information leak. Sometimes a bug that lets you eventually gain code execution isn't sufficient for leaking information, so you are forced to use a separate bug.

### Sandbox escapes

Sandbox escapes are among the areas that have seen the most advancement in recent years. The general idea is quite old, as exploit writers have been breaking out of primitive chroot jails for a long time. But as sandbox technology has advanced and arrived on the desktop, so too have the techniques required to break out of it on more modern operating systems. In client-side exploitation scenarios, sandbox breakouts have become a necessity for exploit writers, as almost all mainstream browsers and document-parsing tools use some form of sandbox. One point worth mentioning, however, is that breaking out of the sandboxes themselves doesn't always involve new or advanced exploit techniques; it simply involves an additional exploit. At a more abstract level, the requirement of chaining multiple, often completely unrelated, exploits in a single attack represented major sophistication in the past, but has now become fairly commonplace.
The purpose of a sandbox is to limit the environment in which an attacker finds themselves after the first stage of successful exploitation. If a browser is compromised and an exploit obtains arbitrary code execution, the attacker might be executing in an environment with no meaningful network, filesystem, or system access. This forces them to resort to breaking out of the sandbox. The most common approach is to use a second exploit that targets the operating system kernel. Depending on the design, other breakout exploits, which target the sandbox broker processes or, if present, the hypervisor, are also seen. Many interesting and sophisticated techniques have come out of these breakouts, some of the most notable being from exploit competitions, game console hacking, and jailbreaks, rather than malicious exploitation. We do still see new and interesting attacks in this space, such as leveraging an identical bug in different privilege contexts to achieve both client-side exploitation and sandbox breakout.

### Malleable bugs

Much of modern exploitation involves attacking very specific bug classes because they exhibit properties that facilitate bypassing many mitigations. Although exploit writers create innovative ways to leverage restrictive bugs to do what they need, there is an increasing requirement for a bug to exhibit certain behaviours in order to bypass all of the modern mitigations. If it doesn't, it will either be ignored in favour of a more malleable bug, or be put to use as one of a collection of bugs used to leverage an attack. At one point, before ASLR became so effective, a bug that allowed an arbitrary write to any location in memory was often seen as favourable. There was almost a simple recipe one could employ to exploit it. However, in the modern age, in which no static addresses are known in advance, this type of bug isn't always ideal.
Instead, small linear overwrites with minimal data restrictions have become much more favourable than arbitrary writes. This is not to say that the eventual goal isn't to construct an arbitrary write; however, in modern exploitation scenarios a smaller controlled linear overwrite, in combination with heap feng shui (massaging), can give you a much more favourable starting position that lets you slowly build up a collection of exploit primitives. It's worth noting that heap feng shui is also an exploit technique that used to be considered quite advanced, but has simply become part of the modern toolset.

### Exploit releases from malware

In the past there was a fairly vibrant community of exploit writers who would release their work to the public. That habit has changed: malware now leverages zero-day exploits, exploits for bugs that had yet to be proven exploitable publicly, and exploits for known exploitable bugs for which no public exploit had been available. The reason for this shift is at least in part the exploit community largely withdrawing from the public eye, leading malware developers to develop their own private exploits. Another likely cause is the ongoing monetisation of malware and exploit technology, which allows malware authors to purchase exploits to plug into their software as needed. This is possibly evidence that the increased difficulty of exploitation, and the larger time investment it demands, prevents some hobbyists from exploiting bugs in a reasonable amount of time, and that they might be less willing to give the work away for free. At the same time, it shows that the level of sophistication of some malware authors is increasing, in that many of them no longer rely on adapting publicly-available exploits. Public research is often enough for them to build upon to develop their own exploits, and in some cases malware authors are now leveraging exploit techniques that had not been shown publicly at all.
## Advanced Exploitation

The term "advanced" is subjective: it is a window looking out over a moving landscape. The exploit techniques already described were once advanced and are now fairly standard. It is interesting to consider what could currently be considered advanced exploit techniques. One aspect of modern exploitation is that more aggressive defences, and a better general understanding of low-level technologies, seem to have pushed research and techniques to almost the fringe of what can be classified as an explicit hole or flaw. What is especially interesting about these techniques is that not only do they start to blur the line of traditional vulnerabilities, but in many cases the vendor response is to not fix the underlying issue; sometimes software vendors are unable to even fix the bug, and other steps must be taken to mitigate it. Similarly, some bug classes transcend the thinking of a typical software vulnerability and exploit a more abstract design as a side channel. Although many of the following bugs and exploit techniques represent areas that are hard to fix, whatever is pushing researchers and attackers to find such vulnerability classes could be indicative of vendors starting to succeed in some of their defences. As a general rule, as bugs get harder to find or exploit, people are pushed to more obscure and extreme ways of achieving their end goals. This often results in fascinating research and new areas for security hardening, but also tangible risks to those trying to secure their infrastructure.

### DRAM row hammering

An interesting physical property of dynamic random access memory (DRAM) is that aggressive use of certain rows of memory cells can result in abnormally fast discharging of capacitors in the rows of cells adjacent to those being hammered, which, given the right timing, can corrupt those cells by triggering what is known as a "disturbance error": causing bits to flip from one value to another. This is known simply as row hammering.
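The core of a native-code row hammering attack is a tight loop that reads two addresses backed by different rows of the same DRAM bank while flushing them from the cache each iteration, so every read is served by DRAM. A hedged x86-64 sketch of that hammer loop follows; it is an illustration of the access pattern only, not code from any published exploit, and the genuinely hard part of picking two physically adjacent addresses is omitted entirely.

```c
#include <stdint.h>

/* Repeatedly activate the DRAM rows backing addr_a and addr_b.
 * clflush evicts the cache lines so each read goes all the way to
 * DRAM, which is what stresses the physically adjacent rows. */
void hammer(volatile uint64_t *addr_a, volatile uint64_t *addr_b,
            unsigned iterations) {
    while (iterations--) {
        (void)*addr_a; /* volatile reads force the memory accesses */
        (void)*addr_b;
        __asm__ volatile("clflush (%0)" :: "r"(addr_a) : "memory");
        __asm__ volatile("clflush (%0)" :: "r"(addr_b) : "memory");
    }
}
```

On vulnerable DRAM, millions of such iterations within a refresh interval can flip bits in neighbouring rows; on healthy hardware the loop is harmless, which is why the two addresses below come through unchanged.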
In 2015 it was shown that these row hammering disturbances could be abused reliably by native code, which leveraged specific cache flushing instructions on some hardware to break out of the Google Chrome sandbox and to manipulate Linux kernel data in order to elevate local privileges. Newer research has suggested that row hammering can be reliably triggered from JavaScript without even requiring the direct execution of a cache flushing instruction. Row hammering exploitation leverages physical properties of RAM. A software vendor can't fix this vulnerability, but they can reduce the availability of certain functionality that helps exploitation. The Chrome browser sandbox no longer allows execution of the cache flushing instruction on x86. The Linux kernel now prevents an unprivileged user from querying the underlying physical frame number for a given allocation, as this information was used to inform exploitation of row hammering. These mitigations will help slow down attackers, but in the end don't fix the underlying issue. The only solution for users is to buy higher-end hardware that either has built-in mitigations, such as more aggressive row refresh timing, or error correcting codes to detect unwanted bit flips. Any computer that is not, or cannot be, physically upgraded is permanently vulnerable. The practical abuse of this type of vulnerability is a great example of modern advanced exploitation.

### Using MemoryProtector to defeat ASLR

In 2014, Microsoft's Internet Explorer browser deployed a new mitigation called MemoryProtector, designed to hamper the exploitation of use-after-free (UAF) vulnerabilities. UAF bugs have become one of the most popular client-side vulnerabilities to exploit in recent years. In 2015 it was shown by researchers at the Zero Day Initiative and Google Project Zero that this MemoryProtector mitigation could, in two different ways, be used as an information oracle to bypass the ASLR mitigation.
This scenario presents an interesting problem for a vendor, because the original mitigation does in fact serve a purpose -- to hamper the exploitation of certain bug classes -- and therefore is still a valuable piece of technology. This is another interesting case of exploit writers pushing advancements towards the fringes of what constitutes a weakness or vulnerability, where a vendor must weigh the value of preventing certain bugs against enabling the easier exploitation of others. In this case, the vendor has so far decided that the benefit of MemoryProtector outweighs the weakness it presents to the ASLR mitigation.

### KASLR timing attacks

Kernel ASLR is a mitigation deployed by a few operating systems to hamper the exploitation of vulnerabilities that need to know where something is located in kernel memory. This might be, for example, the location of a function pointer to overwrite, or the location of a kernel payload an exploit needs to execute. This has in turn increased the number of information leak vulnerabilities being found and fixed in kernels. Not only is this information being removed in the form of bug fixes, but sandboxing is also being used to reduce the ability of a compromised process to reveal information that may otherwise be accessible, even if not as a direct result of a vulnerability. Despite all of these efforts, we see another exploitation technique on the boundary between bug and expected behaviour. By understanding how memory caching works on a processor, specifically page faults and the resultant translation lookaside buffer (TLB) caches, it is possible to use subtle timing differences exhibited by the CPU as an information side channel to discern between addresses that you can't even directly access.
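These attacks all rest on one primitive: cycle-accurate timing of a single memory access, typically read from the timestamp counter around the access. The sketch below shows only that measurement primitive on x86-64; the probing strategy that turns timing differences into kernel-address guesses is deliberately omitted, and the wrapper names are our own illustrative helpers.

```c
#include <stdint.h>

/* Serialised read of the timestamp counter via rdtscp. */
static inline uint64_t rdtscp_ticks(void) {
    uint32_t lo, hi, aux;
    __asm__ volatile("rdtscp" : "=a"(lo), "=d"(hi), "=c"(aux));
    (void)aux; /* TSC_AUX (core id) is not needed here */
    return ((uint64_t)hi << 32) | lo;
}

/* Time one access to `p`, in cycles. A KASLR probe would compare
 * latencies across addresses it cannot legally read (using the fault
 * path or TLB state as the signal); here we only show the stopwatch. */
uint64_t time_access(const volatile uint8_t *p) {
    uint64_t start = rdtscp_ticks();
    (void)*p;
    return rdtscp_ticks() - start;
}
```

Because a single sample is noisy, real probes take many measurements per candidate address and work from the distribution, which is why the text calls the leak probabilistic.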
Although timing attacks are a somewhat probabilistic leak compared to an explicit one triggered by a more traditional software vulnerability, the technique can be perfectly effective, and it exists on the fringe of software and hardware. It is not functionality that can be mitigated through software changes, unlike a more traditional vulnerability. Currently exploit writers aren't leveraging these timing attacks, as it's typically far easier to find an information leak bug, but once that bug well dries up, this more advanced technique might become the norm.

### Self-mapping page table entries

In 2014 increased attention was given to a feature of page table handling on some operating systems, called self-mapping, in which a given index within a page table actually references back to the physical address of the page table itself. What this means is that, given a userland virtual address, you can make some static modifications to the virtual address to create a new virtual address, which will only work in kernel mode, but which will resolve to the physical address of the page table entry that manages the physical page backing the original virtual address.

Why is this useful? Assume you have an arbitrary write in kernel space and want to be able to execute an exploit payload in userland. Also assume that the Supervisor Mode Execution Prevention (SMEP) mitigation prevents you from just jumping directly into executable memory in userland space. Modern kernel hardening also means that the locations in kernel memory in which you can store your payload are non-executable. One option is to use your arbitrary write to manipulate the page table entry for an address holding your payload directly, either in userland or in kernel space.
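The "static modifications" mentioned above are just shifts and masks. On x86-64, with a self-referencing top-level (PML4) slot at index S, the page table entry mapping a virtual address can itself be addressed virtually as sketched below. The index 0x1ED is used purely as an example (it is the self-map slot Windows historically used before it was randomised); the arithmetic is the same for any slot.

```c
#include <stdint.h>

#define SELF_REF_INDEX 0x1EDULL /* example PML4 self-map slot */

/* Base of the virtual region through which all PTEs are visible:
 * a canonical-form address with the self-ref index in bits 39..47. */
#define PTE_BASE (0xFFFF000000000000ULL | (SELF_REF_INDEX << 39))

/* Virtual address of the PTE that maps virtual address va.
 * (va >> 12) is the page number, << 3 because entries are 8 bytes
 * (hence the combined >> 9); the mask keeps the result inside the
 * 512 GiB self-map region. */
uint64_t pte_address(uint64_t va) {
    return PTE_BASE + ((va >> 9) & 0x7FFFFFFFF8ULL);
}
```

With 0x1ED, `PTE_BASE` works out to 0xFFFFF68000000000, so an arbitrary kernel write plus this arithmetic gives an attacker direct access to the paging entry for any address of interest.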
If the payload is in userland, an exploit could modify the associated page table entries to mark the address as a supervisor range, meaning that executing data stored at this address from kernel space will no longer trigger the SMEP mitigation. Similarly, an exploit could modify the page table entry of a read-write memory location in kernel space and mark it as executable. This way execution can be redirected without violating SMEP, and the payload can actually be executed. Although in theory this problem could be mitigated in some ways, self-mapping is also a legitimate and intended feature of many page table management designs, and has been for decades, as it allows a kernel to make changes to page table entries rapidly, without performing more expensive table lookups each time.

### Virtualisation security introducing insecurities

An increasingly popular form of sandboxing is to leverage virtualisation technology to keep parts of a system more heavily isolated. A good example of this is Qubes OS, which leverages the Xen hypervisor to run programs within their own isolated operating system environment. One interesting aspect of some virtualisation technologies such as Xen is that they fundamentally change the environment in which an operating system would normally run, in order to facilitate certain virtualisation goals. What can be lost in translation is that certain key security technologies available in a non-virtualised environment, such as the SMEP and supervisor mode access prevention (SMAP) mitigations on Intel processors, become unavailable within the virtualised environment. Specifically, in the case of a paravirtualised Xen guest machine, the entire guest operating system is run in ring 3, which means that the guest kernel cannot enforce SMEP or SMAP, as these mitigations explicitly require running in ring 0 to be effective.
If virtualisation is being used as a hardening measure for a more complete operating environment, then this might not be a big problem. However, in a cloud environment, for instance, where a customer might simply have no option but to operate within a virtualised environment, their security is impacted by the convenience of using the cloud technology. A vulnerability that might otherwise be unexploitable thanks to the SMEP and SMAP mitigations could still be a perfectly legitimate candidate for exploitation in virtualised environments, and the users of the technology might not even realise that they are at increased risk. Although this has yet to become commonplace, as mitigations become increasingly difficult to bypass, attackers with a more advanced knowledge of system and virtualisation internals may begin to seek out this type of opportunity.

### Abstract interpreter abuse

An interesting case of an ASLR bypass that is more advanced than the typical information leak is one that involves leveraging how an interpreter might store information within data structures. This idea has been practically demonstrated by researchers, though it is not yet something people seem to be actively finding or using in their exploits. The premise is that certain types of data structure, such as a dictionary, might be sorted using values taken from the underlying objects, and that different object types, such as integer values and pointers to values, might be intermixed. This intermixing of data can be used as a side channel to infer an address, by simply interacting with and inferring properties of some target object within the data structure, based on data you're actively inserting into the same data structure.

### Out-of-order execution engine side channels

Although CPU-cache-based side channels for data exfiltration have existed for some time, new research is being done in this realm as well.
It was recently shown that a processor's out-of-order execution engine can be used to exchange data between two co-resident virtual machines. The out-of-order execution engine is used by a CPU when it is processing opcodes, the individual machine instructions to which a native application is compiled down. Typically, a CPU would fetch each new instruction to be executed and place it into an ordered pipeline. However, there are inefficiencies in this, as certain instructions can cause stalls that prevent the next instruction from being executed immediately. To counter this, many processors will re-order certain instructions in order to maximise the efficiency of the pipeline. In 2015 it was shown that this re-ordering behaviour could be abused by two collaborating systems on the same hardware, but in different virtual machines with different security policies, to exchange data in violation of the policies on one system. Although this is specifically related to data exfiltration, rather than traditional exploitation, it is another good example of advanced attacker-oriented research moving towards more obscure, low-level, and fringe areas that become increasingly difficult to address from a defensive standpoint.

Although this type of ASLR information bypass hasn't become commonplace yet, as many attackers will favour leveraging their corruption bug to build an information leak, or stick to path-of-least-resistance methods such as heap spraying, I think this type of flaw will eventually be used more often. This type of information leak is much more difficult to find using automated analysis and compiler checks, as it is a more abstract problem that is not caused by traditional bad coding practices.
## Future Exploit Techniques

We see vendors and researchers increasingly making an effort to combat popular exploitation techniques and bug classes. In many cases this new research simply improves on theoretical mitigations and security first demonstrated decades ago, which were not adopted or suffered from performance issues that are only now being addressed, due to the increased necessity for a solution.

For example, control flow integrity (CFI) will significantly impact return-oriented programming as a common exploitation technique. CFI is now being introduced via Control Flow Guard in recent versions of Microsoft Windows, and the LLVM compiler has also recently added support for CFI. However, we also see that exploit writers have already started to find the path of least resistance: it has been shown that just-in-time (JIT) compilation engines don't work well with effective control flow analysis, and thus an attacker can bypass control flow mitigations altogether by targeting JIT-compiled code.

Similarly, some use-after-free and type-confusion attacks are starting to be targeted by the introduction of vtable cookies. A cookie prevents an object with a specific vtable from being operated on when the underlying memory has changed, because the associated code can tell that the vtable is incorrect. This will likely push attackers towards discovery techniques and bugs that leverage objects without such protections, or simply towards software that has none of these protections at all. Eventually it might cause yet another class of vulnerabilities, the next easiest to exploit, to surge in popularity. In general we'll continue to see exploit writers taking the path of least resistance as each new mitigation is introduced. If software X becomes hardened, software Y will become the new target.
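The vtable-cookie scheme described above can be modelled in miniature: store a secret-derived cookie next to the function-pointer table and check it before every virtual dispatch, so that memory replaced by a use-after-free or type confusion no longer passes the check. This is a simplified C sketch; the names, layout, and secret value are illustrative and do not correspond to what any particular compiler or browser emits.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    void (*greet)(void);
} vtable_t;

typedef struct {
    const vtable_t *vtbl;
    uintptr_t cookie; /* derived from the vtable pointer and a secret */
} object_t;

/* In practice the secret is chosen randomly at process start. */
static uintptr_t vt_secret = 0x5ec2e7;

static uintptr_t expected_cookie(const object_t *o) {
    return (uintptr_t)o->vtbl ^ vt_secret;
}

/* Checked virtual call: if the memory under `o` has been replaced,
 * the cookie no longer matches the vtable pointer and we abort. */
void checked_call(const object_t *o) {
    if (o->cookie != expected_cookie(o))
        abort(); /* vtable integrity violation */
    o->vtbl->greet();
}

/* A demo type using the scheme. */
static int greeted = 0;
static void greet_impl(void) { greeted = 1; }
static const vtable_t demo_vtbl = { greet_impl };

object_t make_object(void) {
    object_t o;
    o.vtbl = &demo_vtbl;
    o.cookie = (uintptr_t)&demo_vtbl ^ vt_secret;
    return o;
}
```

The check costs one XOR and compare per dispatch, which is why variants of it are cheap enough to ship; the attacker's countermove, as the text notes, is simply to pick objects or software without it.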
Attackers have continued to target Adobe Flash in the last few years because it's one of the easiest targets to exploit; before that, Java was a principal target. Flash has recently been hardened against one of the features that made it so ideal, so perhaps exploit writers will move on to something new. As more techniques and bug classes are mitigated completely, we will start to see a move towards even more of the techniques on the fringe of software and hardware design. These will not only be increasingly difficult to fix, but in some cases might not be fixable at all, because the way in which the flaws can be abused is also an intrinsic part of what makes the design useful for non-malicious purposes in the first place.

## Conclusions

The exploits leveraged by attackers are becoming increasingly sophisticated, but typically only to the minimum level required to get the job done. The theoretical and esoteric attacks of a previous era have become the requirements of modern exploitation, and in many ways they should no longer be considered advanced. In their place are a new set of theoretical and esoteric attacks waiting in the wings until exploit writers need to leverage them more aggressively. As has always been the case since the dawn of the offence-versus-defence dance, the key to hampering attackers is to increase the investment they must make in order to pull off a successful attack. This means increasing the upfront investment required to develop their exploits: proactively finding bugs by testing your own software, introducing mitigations to hamper exploitation of the issues you don't find, and adding layers of security to slow down the exploitation of the bugs that will inevitably be missed.
This extends beyond just the software on external-facing systems, into all software and hardware of the entire infrastructure deployed by a user or company. Every layer that an attacker encounters must be a new hurdle to slow them down. Systems will become harder to exploit, but a determined attacker, through the exploitation of human error, software bugs, or logical errors, will always find a way to compromise the security of a system. It is up to everyone, vendors, software developers, users, and companies alike, to ensure that they design, configure, and deploy their technology in ways that make the attacker's job as hard as possible, so that the required time investment becomes discouraging enough not to be worthwhile, or so that successful attacks expose less information, fewer assets, and less control than the attacker had hoped to obtain.

# CONTACT US

0161 209 5200
response@nccgroup.trust
@nccgroupplc
www.nccgroup.trust

United Kingdom: Manchester (head office), Basingstoke, Cambridge, Cheltenham, Edinburgh, Glasgow, Leatherhead, Leeds, London, Milton Keynes, Wetherby

Europe: Amsterdam, Copenhagen, Luxembourg, Munich, Zurich

North America: Atlanta, Austin, Chicago, New York, San Francisco, Seattle, Sunnyvale

Asia Pacific: Sydney
### ABSTRACT

In 2004, ISO/IEC SC29, better known as MPEG, started a new standard initiative aiming to facilitate the deployment of multi-format video codec designs and to enable the possibility of reconfiguring video codecs using a library of standard components. The new standard under development is called the MPEG Reconfigurable Video Coding (RVC) framework. Whereas video coding tools are specified in the RVC library, when a new decoder is reconfigured, choosing in principle any (sub)set of tools, the corresponding bitstream syntax, described using an MPEG-21 BSDL schema, and the associated parser need to be respectively derived and instantiated, reconfiguration by reconfiguration. Therefore, the development of an efficient systematic procedure able to instantiate efficient bitstream parsing, and particularly variable length decoding, is an important component in RVC. This paper introduces an efficient data flow based implementation of the variable length decoding (VLD) process particularly adapted for the instantiation and synthesis of CAL parsers in the MPEG RVC framework.

Index Terms: Reconfigurable Video Coding, Variable Length Decoding, CAL language, Huffman coding, Bitstream Syntax Description Language.

### 1. INTRODUCTION

Nowadays, video decoders need to support multiple codec standards because more and more video standards are deployed. Although different, all coding standards use the same or very similar coding tools and, as a result, share similar architectures and implementations. Unfortunately, the way in which the existing coding standards are specified lacks the flexibility to adapt performance and complexity when new applications emerge. The MPEG RVC standard intends to create a framework containing existing coding technology for developing, besides current standard decoders, new configurations satisfying specific application constraints.
RVC introduces a novelty in that it promotes standardization at the tool level while maintaining interoperability between solutions from different implementers. One challenge posed by the possibility of reconfiguring decoders is the need for appropriate procedures for the instantiation and synthesis of bitstream parsers, in which efficient variable length decoding processes are important tasks. This paper presents a method for generating efficient components for the MPEG RVC library capable of decoding variable length codes. The components of the library, like all other coding tools, are CAL actors generated automatically from the input VLD table. Using the described procedure, VLD tables can be automatically and efficiently generated as FUs of the RVC toolbox. Efficiently here also means that the data flow CAL FUs are suitable for efficient synthesis into SW and HW implementations.

The paper is organized as follows: the RVC framework is introduced in section 2. The variable length decoding toolbox is presented in section 3. Section 4 presents how to translate efficient VLD into CAL. Section 5 briefly introduces how to automatically generate a parser from a bitstream schema to CAL. Section 6 discusses the hardware and software implementation of the parser and VLD tables. Section 7 concludes the paper.

### 2. MPEG RECONFIGURABLE VIDEO CODING OVERVIEW

MPEG has always worked to propose innovations in the video coding field capable of satisfying the changing landscape and needs of video coding applications. With this objective, MPEG intends to standardize the Reconfigurable Video Coding framework, allowing the dynamic development, implementation, and adoption of standardized video coding solutions based on a unified library of components with higher flexibility and reusability. RVC is a flexible framework through which MPEG tries to provide a systematic way of constructing video codecs from a collection of coding tools; it was first presented in [3].
The goal of introducing such a new interoperable model at the coding tool level is twofold: to speed up the adoption and standardization of new technologies by adding new tools to the toolbox, and to enable the dynamic definition of new profiles. The modular data-flow-based specification formalism also provides a starting point for design that is adapted to yield direct synthesis of SW and HW using appropriate tools, for direct mapping onto SW and HW platforms.

A decoder specification under RVC is defined with the standard MPEG toolbox (instantiation and connection of the different coding tools) and the specification of the video bitstream syntax expressed in an MPEG-21 BSDL schema [9]. The toolbox consists of various coding tools, also named Functional Units (FUs). Each FU is a modular coding tool (such as IDCT or MC). The concept of the RVC framework is illustrated by Fig. 1.

The key difference between RVC and traditional codec standards is their conformance point. Traditional codec standards define their conformance point at the decoder level, whereas RVC defines it at the tool level, so that RVC enables much more flexibility, and several configurations of the components covered by previous monolithic specifications become possible.

Fig.1. RVC framework

Another fundamental difference between the RVC specification and a traditional codec specification is the data-flow-based formalism. In traditional codec specifications, C/C++ is the language of the reference SW, which is usually composed of several thousand lines and is becoming more and more difficult to understand and to transform into efficient implementations. In the RVC framework, the data flow actor-oriented language CAL [1], which is simpler, more compact in terms of lines of code, and does not include unnecessary implementation details (such as the fixed scheduling of a C/C++ reference SW), is used to describe FU behavior.

### 3. VARIABLE LENGTH DECODING FOR THE RVC FRAMEWORK

One problem that needs to be solved when applying RVC is how to specify the parser that is in charge of decoding the bitstream of compressed video. Whereas all FUs of the standard MPEG toolbox are available in the form of CAL actors or as proprietary implementations for specific platforms, the parser of a new decoder configuration needs to be synthesized and instantiated automatically, because it is too burdensome a task to let the designer write the parser actor in CAL. The parser is not considered a coding tool because it does not contain any algorithm described by the standard. The unique task of the parser is to feed the coding tools with the right coded data contained in the bitstream. Therefore, a systematic procedure for synthesizing efficient parsers using the appropriate FUs available in the standard toolbox is required.

#### 3.1. Solutions for Variable Length Decoding

Variable length coding is the most popular entropy coding module and is used in many video and picture coding standards, such as JPEG, MPEG-x, and H.26x. One of the difficulties for RVC in describing variable length decoding is the large number of tables. For example, in MPEG-4 SP [10] there are 8 tables, and in MPEG-4 ASP [10] there are 19 tables. Including those tables directly in the syntax description (the BSDL schema transmitted as a header in the bitstream) would not only harm the compactness of the description of a new codec configuration, but would also require large memory and bandwidth. Another difficulty is the parsing of syntax elements of undefined bit length. In order to avoid carrying VLD tables in the bitstream description, the VLD tables can be separated out and implemented in CAL as FUs of the RVC toolbox. The proposed Huffman decoding method is applied to VLD tables, which further improves efficiency.
The bitstream syntax parser is generated automatically, as an independent FU in the CAL language, from an XML schema describing the structure of the bitstream. The transformation process is implemented using XSLT. The bitstream schema is specified in an XML dialect called the Bitstream Syntax Description Language (BSDL) [9], an MPEG-21 standard. The negotiation between the syntax parser and the VLD tables is also established in XSLT for the variable length decoding process. This systematic solution for the syntax parser is highly efficient and flexible for decoding a reconfigured bitstream.

#### 3.2. Efficient Huffman Decoding method

In this section, a CAL model for efficient Huffman decoding is proposed for the VLD tables of MPEG-2 and MPEG-4. The proposed implementation is optimized for reducing search time and memory requirements. Huffman coding has been adopted by MPEG-2 and MPEG-4 for entropy coding. Sets of codewords are defined based on the probability distributions of "generic" video material. The direct way to decode variable length syntax is a full search method: 1) the variable length decoder receives one bit from the bitstream; 2) it looks through the corresponding table from the beginning to check whether the received bits coincide with a codeword; 3) if a match is found, the value from the table is output; 4) otherwise, another bit is received and combined with the former bits, and the search goes back to step 2). Such a full search method is simple but not efficient enough, because of the duplicate lookups every time one bit is received. In addition, it requires a 2-D memory for each table, which is not a good choice for hardware implementation.

The proposed method rearranges the codes in a Huffman tree. Binary Huffman tree searching can find the optimal route in a short time and requires less intermediate data. As shown in Fig. 2, the variable length codeword starts with the first incoming bit. The search goes to the left leaf if the incoming bit is "0"; otherwise, if it is "1", it goes to the right leaf.
The weight of each leaf is marked with the value of the lookup index for the corresponding VLD table. That is to say, every time one bit is consumed at the input, one index is generated and one lookup result is produced at the output. If the result is a true decoded value, it is provided at the output of the CAL FU and the search for the variable length code is complete. On the other hand, if the result is a false decoded value, the search continues until a complete codeword is found.

Different video coding standards have different VLD tables. Even within a single standard, different profiles and levels use different sets of VLD tables. The most efficient solution for the RVC framework is to build a separate FU, available in the standard toolbox, for decoding each VLD table, and then to generate a parser dynamically as the composition of a synthesized CAL parser and VLD decoding FUs. Each VLD table is considered an independent FU of the RVC toolbox. For example, in Annex B of the MPEG-4 specification, there are 8 VLD tables used by a simple profile decoder: B-6, B-7, B-8, B-12, B-13, B-14, B-16, and B-17. In the MPEG-4 advanced simple profile, additional VLD tables are needed beyond these; it is unnecessary to generate the shared ones again, as one simply accesses the toolbox and gets the related FUs. Taking MPEG-4 SP as an example, we generate the VLD FUs and name them after the table names, such as B-6, B-7, and so on, as shown in Fig.5.

Table-1. Example of VLC table B-6 for mcbpc

<table> <thead> <tr> <th>Code</th> <th>mb type</th> <th>cbpc (5, 6)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3</td> <td>00</td> </tr> <tr> <td>001</td> <td>3</td> <td>01</td> </tr> <tr> <td>010</td> <td>3</td> <td>10</td> </tr> <tr> <td>011</td> <td>3</td> <td>11</td> </tr> <tr> <td>0001</td> <td>4</td> <td>00</td> </tr> <tr> <td>00001</td> <td>4</td> <td>01</td> </tr> <tr> <td>00010</td> <td>4</td> <td>10</td> </tr> <tr> <td>00011</td> <td>4</td> <td>11</td> </tr> <tr> <td>0000 0001</td> <td>Stuffing</td> <td>--</td> </tr> </tbody> </table>

The VLD table engine keeps searching for the next value when the underscore ("--") data is found. The VLD table engine stops and reports a search failure if a "1" flag is found, which means an error code has been detected. Otherwise, the true decoded value from the VLD table is returned and the decoding process for the syntax element is complete.

### 4. MODELING VARIABLE LENGTH DECODING OF MPEG-4 SP IN CAL

CAL [1] is a dataflow, actor-oriented language, specified as part of the Ptolemy II project at UC Berkeley. The CAL language has a concise syntax and is suitable for specifying complex signal processing systems such as MPEG decoders. Fig.3 shows the graphical representation of the CAL model of the MPEG-4 SP decoder [10]. The Open Dataflow environment [6] is used to design and simulate CAL models. The decoder includes several networks of actors. The incoming bitstream is first converted into sequential bits by the "serialize" FU, and is then decoded by the "Parser". The "TextureDecoding" and "MotionCompensation" networks of actors contain all the coding tools necessary for decoding the video. Figure 4 illustrates the inside of the "parser" FU shown in figure 3: it shows how VLD FUs are connected to the parser for decoding variable length codes. For the sake of clarity, figure 4 represents only the connection of one VLD FU to the parser.
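The bit-serial Huffman tree search described in section 3.2 can be made concrete with a small illustrative sketch in Python (the codebook below is a hypothetical stand-in, not the normative B-6 entries, and the flattened-array encoding used by the actual CAL FUs is not reproduced). It builds a binary tree from (codeword, value) pairs, then consumes the bitstream one bit at a time, branching left on "0" and right on "1", and emits a decoded value whenever a leaf is reached:

```python
# Illustrative bit-serial Huffman decoder.
# The codebook is a hypothetical prefix-free code, NOT a normative MPEG table.

def build_tree(codebook):
    """Build a binary tree: inner nodes are dicts keyed by "0"/"1",
    leaves hold the decoded value."""
    root = {}
    for code, value in codebook.items():
        node = root
        for bit in code[:-1]:
            node = node.setdefault(bit, {})
        node[code[-1]] = value  # leaf
    return root

def decode(bits, root):
    """Consume bits one at a time, emitting a value at each leaf."""
    out, node = [], root
    for bit in bits:
        node = node[bit]
        if not isinstance(node, dict):  # reached a leaf
            out.append(node)
            node = root  # restart the walk for the next codeword
    return out

codebook = {"1": "A", "01": "B", "001": "C", "000": "D"}
tree = build_tree(codebook)
print(decode("101001000", tree))  # ['A', 'B', 'C', 'D']
```

Unlike the full search method, each incoming bit advances the tree walk by exactly one step, so previously consumed bits are never re-examined.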
This VLD FU serves to decode the DCT coefficients (table B-16 in Annex B of the MPEG-4 standard [10]). The "parser" FU is generated automatically by the XSLT process (see section 5). The VLD FU is generated using the process described in section 3. The "BlockExpand" FU is part of the MPEG toolbox; it outputs the AC coefficients.

Fig.3. High level view of the CAL model of the MPEG-4 SP decoder

Fig.4. Connections of the parser to a VLD Functional Unit

When the parser encounters a variable length code, it consumes only one bit from the bitstream port and sends it to the VLD FU. If there is no entry in the table that corresponds to the input bit, the VLD FU sends back to the parser a token indicating that no match has been found. The parser then consumes an additional bit and sends it to the VLD FU, which checks whether the first bit and the newly received bit match an entry in the table. If not, it keeps sending tokens to the parser, saying that there is no match and that the parser must send an additional bit. If yes, the VLD FU sends a token to the parser saying that a match has been found and that the parser can parse the next element of the bitstream. The result of the parsing is then output by the VLD FU to the "BlockExpand" FU. The source code of the VLD FU for decoding the "mcbpc" variable length code is shown in Fig.5. The only part of the FU that is automatically generated is the list of numbers representing the VLD table. The rest of the code is the same for all VLD FUs; this extra code is needed to handle the optimized list of numbers representing the VLD table.
```
import all caltrop.lib.BitOps;

actor VLD_mcbpc_intra( int VLD_DATA_SZ, int VLD_ADDR_SZ )
    string Bits ==> int(size=2) finish, int(size=VLD_DATA_SZ) data:

    int START_INDEX = 0;
    int( size=VLD_ADDR_SZ ) vld_index;
    int( size=VLD_DATA_SZ ) vld_codeword := 1;

    // ********** automatically generated part *********
    list( type:int( size=VLD_DATA_SZ ), size=16 ) vld_table = [
        10, 12, 18, 58, 26, 76, 34, 16,
        42, 50, 1, 80, 144, 208, 140, 204 ];
    // *************************************************

    procedure start_vld_engine( int index )
    begin
        vld_index := index;
        vld_codeword := 2;
    end

    function vld_success()  --> bool: bitand(vld_codeword,3) = 0 end
    function vld_continue() --> bool: bitand(vld_codeword,3) = 2 end
    function vld_failure()  --> bool: bitand(vld_codeword,1) = 1 end
    function vld_result()   --> int( size=VLD_DATA_SZ ): rshift(vld_codeword,2) end

    start_VLD: action ==>
    do
        start_vld_engine( START_INDEX );
    [...]
```

Fig.5. Source code of the VLD FU for decoding the "mcbpc" variable length code

This section showed how the variable length decoding process is modeled in CAL. The next section shows how the parser handles the communication with the VLD FUs to decode these variable length codes.

### 5. FROM BITSTREAM SCHEMA TO PARSER

Video coding is used in various multimedia applications such as video conferencing, digital storage media, television broadcasting, and internet streaming. Due to the heterogeneity of modern networks and terminals, current multimedia technology has to deal with different user requirements. As such, the use of scalable video coding, which derives useful video from subsets of a bitstream, is a must. RVC is well suited to scalable video coding (SVC) and can implement it at the functional unit level. In this context, the MPEG-21 multimedia framework enables the transparent and augmented use of multimedia resources across a wide range of networks and devices used by different communities [4].
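The token negotiation between the parser and a VLD FU (section 4) can be sketched outside CAL as two cooperating routines: one standing in for the parser, which forwards one bit at a time, and one standing in for a VLD FU, which answers each bit with a finish token. In this Python sketch, the table is a hypothetical prefix code, not an actual RVC table, and the dataflow token exchange is modeled with plain function calls rather than ports:

```python
# Hypothetical prefix-code table standing in for a VLD FU's codebook.
TABLE = {"1": "A", "01": "B", "001": "C"}

class VLDUnit:
    """Mimics a VLD FU: accepts bits one at a time, replies with a finish token."""
    def __init__(self, table):
        self.table = table
        self.buffer = ""   # bits accumulated for the current codeword
        self.result = None

    def push_bit(self, bit):
        self.buffer += bit
        if self.buffer in self.table:   # a matching entry has been found
            self.result = self.table[self.buffer]
            self.buffer = ""
            return True                 # finish = true: parse the next element
        return False                    # finish = false: send one more bit

def parse(bitstream, vld):
    """Parser side: send bits one by one until the FU signals finish."""
    decoded = []
    for bit in bitstream:
        if vld.push_bit(bit):
            decoded.append(vld.result)
    return decoded

print(parse("101001", VLDUnit(TABLE)))  # ['A', 'B', 'C']
```

Because the code is prefix-free, the membership test can never fire early on a partial codeword, which mirrors why the one-bit-at-a-time handshake in the RVC parser terminates exactly at codeword boundaries.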
The BSDL parser is an essential Functional Unit in the RVC framework because it feeds the coding tool chain with the information contained in the bitstream to be decoded. As RVC is a framework for the rapid development of decoding solutions, the structure of the bitstream can be modified in order to explore the design space. To avoid requiring the designer to write the parser by hand (which would be very time-consuming and error prone), a method has been developed to generate a parser directly from the bitstream syntax [3]. Figure 6 shows the components of this transformation process. Each component is implemented in a separate XSLT stylesheet.

Fig.6. XSLT transformation process: from BSDL to CAL

Pre-processing is the first operation conducted by the top-level stylesheet. The pre-processing collects the individual schemata into a single intermediate tree, taking care to correctly manage the namespace of each component schema, and also performs a number of other tasks, including assigning names to anonymous types and structures.

The finite state machine (FSM) design is the major component of the parser actor. The FSM schedules the reading of bits from the input bitstream into the fields of the various output structures, along with all other components of the actor. The FSM is specified as a set of transitions, where each transition has an initial state, a final state, and an action. BSDL specifies that the order of options within a choice establishes their priority: the first option has priority over the second, and so on. These priorities are recorded in the actor as priorities between the test actions. Guard expressions are built from the control-flow constructs in the BSDL schema. The behaviour of each action is to complete tasks such as storing data in the appropriate location in the output structure.

Finally, the CAL component declares templates for each of the constructs in the language, such as an FSM schedule, a function call, or an assignment.
These templates are called by other components of the stylesheet when building the actor. Collecting all of the CAL syntax into a single stylesheet also means that an alternative stylesheet could be provided in place of the CAL sheet.

Figure 7 illustrates a part of the parser automatically generated from the bitstream schema. It shows the actions and the finite state machine generated for handling the communication between the parser and external VLD FUs. When the parser meets a variable length code, the actions shown in figure 8 are generated. First, the parser reads one bit from the bitstream input port (the DCT_Coeff.read action). The next step consists of sending the bit to the corresponding VLD table; this is done in the DCT_Coeff.output action. Then the parser waits for a token coming from the VLD FU. This token (finish) indicates whether a match has been found in the table. If yes, the value of finish is true, the action `DCT_Coeff.finish` is fired, and the number of bits to read for the next element is set. If not, the value of finish is false, the `DCT_Coeff.notFinished` action is fired, and one more bit must be read (`M4V_VLC_LENGTH = 1`). The finite state machine summarizes these transitions.

```
DCT_Coeff.read: action ==>
    guard readDone()
end

DCT_Coeff.output: action ==> B16: [current]
    do current := read_result_in_progress;
end

DCT_Coeff.finish: action B16_f: [finish] ==>
    guard finish
    do setRead(M4V_NEXT_ELEMENT_LENGTH);
end

DCT_Coeff.notFinished: action B16_f: [finish] ==>
    guard not finish
    do setRead(M4V_VLC_LENGTH);
end
[...]
```

Fig.7. Source code of the automatically generated parser for the negotiation between the parser and the VLD FU

This section showed how the variable length decoding process is handled by the generated parser to decode variable length codes.

### 6. HW AND SW IMPLEMENTATION

An important reason for which CAL has been adopted as the language specifying the reference software of the RVC toolbox is that CAL is suitable for the direct synthesis of efficient software and hardware by means of the CAL2SW and CAL2HW tools [7,8]. Furthermore, a very interesting aspect of this framework is that CAL models are used as inputs for both the hardware and the software code generators. Thus software and hardware implementations can be derived from a unique CAL model: the designer develops a single model and can seamlessly generate hardware and software implementations of the CAL actors. As the code of the VLD actors and the parser is very simple, the generation of efficient code is straightforward. In [8], it has been shown that the hardware implementation of the MPEG-4 SP decoder modeled in CAL is more efficient than one designed by hand in VHDL. Furthermore, in terms of coding effort, it took a designer half the time to write the CAL model compared to the VHDL model.

### 7. CONCLUSION

The Reconfigurable Video Coding framework has been introduced in this paper. An efficient VLD toolbox can be generated by the proposed design; it has been successfully implemented in CAL and validated by simulations. This paper shows that it is possible to dynamically generate an RVC parser using a BSDL description of the bitstream and assembling RVC decoding FUs from the standard RVC toolbox.

### 8. REFERENCES
Cloud Management In A Hybrid Cloud World

by Dave Bartoletti, July 30, 2013

KEY TAKEAWAYS

**Developers Want Cloud, But They Don't Want To Manage It**

Developers increasingly prefer to build and deploy applications in the cloud because it gives them fast and easy access to infrastructure and application services. But as their cloud use grows, they need to take on operational responsibility that distracts from coding and testing. What they can't get from central IT, they build or buy themselves.

**You Have To Earn The Right To Manage The Hybrid Cloud**

Before I&O pros can take control of enterprise cloud services on behalf of their business cloud users, they must prove their value and change the focus of IT ops from the traditional IT infrastructure life cycle to the cloud application life cycle. To do this they'll need tools and processes to remove the management burden from developers.

**Master Cloud Management Skills To Optimize Your Hybrid Cloud Portfolio**

Cloud management includes familiar IT capabilities that are updated and extended for the hybrid cloud world, one in which you won't own much of the underlying IT infrastructure. It's time to look at which IT operations processes will carry over to the hybrid cloud and which need to be supplemented by newer tools and technologies.

Cloud platforms are increasingly a viable option for a growing set of enterprise workloads. Business-aligned developers are aggressively leveraging public cloud platforms to build and deploy new elastic applications and to extend legacy capabilities. They have come to expect speed, choice, and cost transparency. Meanwhile, nearly half of enterprise IT shops claim to be building a private cloud in 2013. The future enterprise IT infrastructure is therefore a hybrid mix of public and private clouds, but who will manage this new IT portfolio? Today, cloud developers are often doing it themselves out of necessity, but they should be focused on coding and testing, not cloud service management.
Infrastructure and operations (I&O) professionals have the operations management skills, but they have not yet earned the right to take over cloud management. In this report, we explain how the I&O role changes in a hybrid cloud world, how I&O pros need to accelerate the cloud application delivery life cycle to exceed business expectations of cloud, and which cloud management capabilities I&O must master to take on the role of hybrid cloud manager.

Table Of Contents
- Who Will Deliver The Hybrid Cloud The Business Wants?
- Master Three Essential Cloud Management Capacities
- Recommendations: Manage The Disruption Or Plan To Be Disrupted
- What It Means: What Does IT Management Look Like In 2020?
- Supplemental Material

Notes & Resources: This report is based on vendor briefings, client inquiries, consulting engagements, and numerous end user interactions on the topic of cloud management.

Related Research Documents
- The Rise Of The New Cloud Admin (February 21, 2013)
- Cloud Keys An Era Of New IT Responsiveness And Efficiency (November 19, 2012)
- Improving The Ops In DevOps (July 21, 2011)

**WHO WILL DELIVER THE HYBRID CLOUD THE BUSINESS WANTS?**

The cloud is now a viable option for a broad range of enterprise workloads and is actually preferred for a growing class of new customer-facing web and mobile applications. Fifty-four percent of business-unit-aligned developers will adopt public cloud services by the end of 2013. While these business users are ahead of IT in the adoption of public cloud services, I&O pros are starting to get serious about cloud. Forrsights data shows that 85% of software buyers expect to have a formal cloud strategy or approach in place by the end of this year; only 62% had one at the end of 2012. However, only 20% expect to be executing on a formal cloud migration plan by the end of this year (see Figure 1). This means that cloud strategy and migration is top of mind, but still in its infancy.
The time is ripe for I&O pros to rise to the cloud challenge and chart a path to hybrid cloud success.

**Figure 1** Companies Are Getting Serious About Cloud Strategy And Migration Planning

> "Today and 12 months from now, which of the following describes your firm's strategy/approach for using public software-, infrastructure-, or business-process-as-a-service offerings?"

[Chart comparing 2012 and 2013 survey responses on formal cloud strategy and migration planning.]

**The Business Wants To Use The Cloud For Speed But Doesn't Want To Manage It**

Cloud is a great opportunity for I&O pros. Your business users want to *build and deploy* applications faster, and they need an IT foundation that can keep up with them. They don't want to *operate* the supporting IT environments (i.e., secure them, back them up, maintain them, or fix performance problems). This is where you come in. But you'll have to earn their trust and prove your value to meet their expectations of cloud, which are high based on their experience with public clouds. Before reaching out and offering the business what it wants from cloud computing, understand that:

- **Your cloud customer wants speed and choice above all.** The business wants cloud for two main reasons: speed and agility. It wants access to infrastructure on demand to support Agile and Lean development cycles, and it also wants a range of development tools, packaged application components, and middleware services for integration. These demands are typically at odds with the way IT delivers today: customized resources that are handcrafted and take weeks to provision.
If you can't get complete cloud application and infrastructure services to users in minutes, they will go around you.3 - **Developers manage their own clouds today out of opportunity and expediency.** In practice, someone is taking control of cloud, but it's usually not the I&O team. Developers, architects, application managers, and support teams aligned with business units go around IT to get to cloud faster, and they are creating new processes — and experimenting with new tools — to do so. But in the long run, these teams have neither the skills nor the inclination to operate their own hybrid cloud infrastructures.4 - **Developers want autonomy; don't get in the way.** Taking control of cloud doesn't mean standing between cloud developers and the cloud resources they need. Cloud management should be as frictionless to developers as possible. That means you can't require developers to go through a cumbersome request process to deploy services, nor should you limit their ability to configure infrastructure, middleware, database, or server components as needed. ### Don't Be Disrupted: Earn The Right To Manage Your Growing Cloud Portfolio In the cloud era, your new role is to establish guardrails to guide developers to the best cloud services, get out of the way while they deploy, test, and release them, and then take over ongoing operational support so they can get back to coding.5 But you have to earn the right to be their cloud manager. The operational skills you developed to manage traditional data center infrastructure need to be refocused on the application tier since this is where you'll add the most value. Build your cloud management capabilities with two primary objectives in mind: 1) how you can make developers more productive, and 2) how you can optimize the runtime performance, availability, and cost-efficiency of cloud applications wherever they are deployed. 
Keep in mind:

- **Shrinking the cloud application life cycle is the end goal.** Cloud managers get involved early in the application design phase and help guide developers to the cloud platforms best suited for each application. They maintain image templates and application blueprints and offer the automation tools to simplify deployment and configuration. Once apps are deployed, cloud managers monitor performance, availability, and compliance to optimize ongoing cloud spend and ensure that cloud provider service-level agreements (SLAs) meet enterprise requirements. And they report utilization and performance metrics back to the business to guide the next phase of development (see Figure 2).
- **I&O must improve the cloud "ops" in DevOps (development/operations).** Cloud-based development is a team effort and will require close collaboration with your peers in enterprise architecture and application development and delivery. Developers must take on some deployment responsibility, architects must establish an integration canvas, and IT operations must ensure that cloud applications are properly configured to maintain high availability and perform as required. DevOps are the processes, methods, and systems that support such collaboration, and you need to take an active role in defining and establishing them.6
- **Cloud management fills the operational gaps left by cloud service providers.** Cloud providers, internal or external, present abstracted and standardized services for consumption and offer basic metrics to monitor those services. Today's cloud consumers are left to make sure that the cloud services they use meet their company's operational requirements.
Cloud management as an I&O practice fills the operational gaps left by this uneven handshake, and I&O pros must understand which capabilities are required to fill these gaps to earn the right to be cloud managers (see Figure 3).7

**Figure 2 Cloud Managers Accelerate The Cloud Application Life Cycle**

Life-cycle phases: Design, Select, Deploy, Manage, Optimize

- Determine which apps are right for cloud.
- Decide which cloud is best for which app.
- Maintain standardized app templates.
- Automate deployment to the best cloud provider.
- Monitor and manage app performance, availability, and compliance.
- Track and optimize cloud costs and vendor SLAs.
- Report back to the business to guide further cloud app design.

Source: Forrester Research, Inc.

### Cloud Management Is Different From Traditional IT Management

As cloud manager, you will be a broker, orchestrator, and administrator of cloud services, some of which will run on infrastructure you own, but most of which won’t. In this new role, you’ll succeed if you focus on managing services and leave resource management to your cloud providers and cloud-building tools. Service management is the key IT operational discipline to ensure effective cloud operations. To lay the foundations, you’ll need to:

- **Know what you’ll manage in the cloud and what you won’t.** You won’t own the entire infrastructure as your cloud portfolio grows. You’ll likely have a mix of on-premises private clouds built on virtualized or converged infrastructures plus off-premises public cloud infrastructure that for the most part is managed for you. In either case, leave infrastructure management to these cloud providers (private or public) — they are responsible for multitenancy, resource pooling, and scale-out within each cloud domain, for instance. The providers expose the building blocks; your job is to integrate and standardize them to simplify and accelerate consumption.
- **Standardize and automate everything.** Traditional IT provisioning is a slow and manual process, while cloud provisioning is on demand, automated, and application programming interface (API)-driven. Developers don't want to fill out a help desk request, nor will they tolerate a lengthy approvals process to get access to cloud resources. Beyond provisioning, cloud consumers also expect automated compliance, availability, scalability, and performance management features. If you've built your problem diagnosis and remediation processes for specific infrastructure or applications you own, they will need to be generalized to handle new cloud-specific metrics and to enable automated remediation. As your cloud application life cycle compresses, there will be no time to craft custom management processes for each new application.

- **Understand why developers are turning to new cloud management tools.** Your cloud consumers are likely relying on newer cloud management tools already; reach out and understand why. Cloud developers turn to Dell's Enstratius, RightScale Cloud Management, Scalr, ServiceMesh's Agility Platform, and others and rely on Opscode's Chef and Puppet Labs' Puppet for automation because these tools aim squarely at simplifying their lives. From unified dashboards, they simplify application design via reusable templates and blueprints and offer push-button deployment to multiple clouds. They automate configuration, scaling, and recovery operations and consolidate multiple cloud accounts, users, and roles in a common framework that hides infrastructure complexities. In short, they bring order to the chaos of multicloud management without limiting developer productivity.
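To make the policy-driven provisioning idea concrete, the sketch below shows one way it could look in code. This is an illustrative example only, not any vendor's API: the catalog items, the spending cap, and the function name are all hypothetical.

```python
# Illustrative sketch of policy-driven provisioning (all names and the
# cap are hypothetical): standard catalog requests are auto-approved with
# no manual step, while special requests enter a constrained review queue
# instead of being blocked outright.

STANDARD_ITEMS = {"small-vm", "medium-vm", "lamp-stack"}
SPEND_CAP_PER_REQUEST = 500  # hypothetical monthly cap, in dollars

def review_request(item, est_monthly_cost):
    """Decide on a provisioning request without a help desk ticket."""
    if item in STANDARD_ITEMS and est_monthly_cost <= SPEND_CAP_PER_REQUEST:
        return "auto-approved"
    # Special requests are routed to policy review, not refused.
    return "queued-for-policy-review"

print(review_request("small-vm", 120))
print(review_request("gpu-cluster", 4000))
```

The point of the sketch is the shape of the workflow: the common case completes instantly, and only the exceptions consume human attention.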
**MASTER THREE ESSENTIAL CLOUD MANAGEMENT CAPABILITIES**

In the early stages of cloud adoption, developers manage their cloud applications and infrastructure themselves, *if they manage them at all.* Developers are typically responsible for the entire cloud application life cycle: They find a cloud service, open an account, select services and build templates, code, deploy, and then operate — if they have time. What they should be doing, instead, is coding, testing, and fixing bugs. So, as cloud services proliferate in your organization, and as your developers bring more mission-critical workloads and data to the cloud, the time is ripe for I&O pros to take over those skills and responsibilities.

Based on client inquiries and vendor briefings, Forrester has developed a reference model of the skills needed by today's hybrid cloud manager (see Figure 4). Use this model to evaluate your current skill levels, identify where you need to learn more, and start to explore market offerings. In our model, we've organized cloud management capabilities into three primary categories: cloud service delivery, cloud service operations, and cloud service governance.

### Cloud Service Delivery Includes Service Catalog, Provisioning, And Migration

The cloud service life cycle describes the management processes you follow to define, deliver, maintain, and decommission cloud infrastructure and application services. This life cycle is above and distinct from the underlying component life cycles (server, storage, and network devices). It’s the responsibility of your cloud platforms to expose these components as elastic services; it’s your responsibility to catalog and combine those services and ensure that they are available on demand from the most appropriate internal and external cloud providers. Cloud service delivery is important because:

- **Your service catalog connects user requirements with available cloud services.**
The cloud service catalog defines the cloud services that you are capable of delivering for the needs of various cloud end users. It’s where you align user requirements with your delivery capabilities and the first place to start implementing cloud standards. The catalog is the entry point for your cloud users and should not only make it easy for them to self-provision but guide them to cloud services approved by central IT. Your goal here is to create a way for business users to easily get what they want from both internal and external cloud providers, within the constraints of the services you are able to support.

- **Your target user communities want a range of packaged services.** Services might range from simple compute instances (virtual machine [VM] image types) for a basic infrastructure-as-a-service (IaaS) offering to multimachine templates prepopulated with application components and development tools (e.g., a LAMP stack, load balancer, or app server) for software developers. For each service, clearly define how much customization will be allowed, who will have access (based on role), what availability and performance service levels they can expect, and what it will cost. Look to the public cloud service catalogs such as the Amazon Web Services (AWS) Management Console for guidance — they have set the bar for simplicity and transparency. Cisco Systems’ Cloud Portal, BMC Software’s Cloud Lifecycle Management, and RightScale for Enterprise solutions all include multicloud-enabled service catalogs that have a range of packaged service templates out of the box.

- **Deployment automation shortens the application delivery life cycle.** Your cloud users must be able to deploy application or infrastructure services from the service catalog in minutes. Remove any manual approval processes and replace them with policy-driven workflows that place constraints on special requests but auto-approve standard ones.
Focus on the application services catalog and what your cloud users need to find there to easily customize multitiered and multicloud applications with a few clicks: preconfigured image templates, application stack blueprints, and configurable inputs like domain, database, compliance, security, and load balancing parameters.

- **Your image templates and application blueprints enable portability.** Reach out to your experienced cloud developers to understand the configuration tools and scripting languages they are already using in public clouds and make sure you can support them or offer something better. Your cloud developers are likely deploying applications via Chef recipes and Puppet, while you might be more comfortable with Perl or PowerShell, for example. As your development and operations teams will share responsibility for application deployment in the cloud, encourage them to work together (as a developer-operations or DevOps team). Cloud management solutions from Dell’s Enstratius, Microsoft, RightScale, Scalr, and VMware support a range of configuration tools and languages and aim to simplify template and blueprint design with visual design canvases.

- **Onboarding and migration facilitate portability between clouds.** You’ll also need a way to bring existing applications to your cloud and migrate applications between clouds. You can expose this capability directly to cloud users at deployment time, offering a choice of approved clouds for different application types or data sets, constrained by placement policies. You can also automate onboarding of your existing physical or virtualized workloads to your cloud using a physical- or virtual-to-cloud conversion tool. These tools can automatically discover the OS, network, and storage configurations of in-place multitiered applications, and then inject drivers, tools, and security components and package the app for deployment to a range of private and public cloud stacks.
CliQr Technologies, Racemi, Ravello Systems, and RiverMeadow Software offer dedicated migration tools, and most of the leading cloud management solutions include migration features as well.

### Cloud Operations Include Monitoring, Scaling, And Service-Level Management

What you monitor in your clouds determines what you can manage. I&O pros have traditionally monitored infrastructure components as first priority because they owned and had access to these server, storage, and network elements. CPU utilization, storage I/O, network throughput, and memory consumption are still essential infrastructure metrics to track, correlate, and threshold, but they’re not sufficient. In your clouds, application performance and user experience are paramount — that’s what your cloud users care about above all. Your cloud monitoring approach should start at the user and work back to the infrastructure, some of which you’ll own and much of which you won’t. Be sure to:

- **Instrument your cloud to monitor user experience.** For adequate visibility into cloud application performance, you need to collect and integrate low-level system metrics and end user app performance metrics and add in business-level metrics as well. If you’re supporting web/mobile apps, include web application metrics (response time, transaction throughput) and mobile app performance data (error rates, active users, client stack traces) to give both you and your developers real-time insight into customer experience. Cloud monitoring vendor AppFirst, for example, integrates a wide range of data sources (from Nagios Enterprises to StatsD to log file data from Splunk) into a unified dashboard with drag-and-drop correlations to speed incident resolution. CA Nimsoft Monitor for Public Cloud extends in-house infrastructure monitoring to include quality-of-service metrics from a range of public clouds, including AWS, Microsoft Windows Azure, Google App Engine, and Rackspace.
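Starting at the user and working back means the first metric checked is a user-facing one, such as response time. The sketch below is an illustrative example, not a vendor API; the nearest-rank percentile method, the metric names, and the 800 ms threshold are all assumptions.

```python
# Illustrative sketch (names and threshold hypothetical): threshold the
# user-facing metric first -- p95 response time -- so that a breach can
# then be correlated back to infrastructure metrics like CPU or I/O.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[min(rank, len(ordered)) - 1]

def check_user_experience(response_times_ms, p95_threshold_ms=800):
    """Return the p95 response time and whether it breaches the SLA."""
    p95 = percentile(response_times_ms, 95)
    return {"p95_ms": p95, "breach": p95 > p95_threshold_ms}

samples = [120, 150, 180, 200, 220, 250, 300, 400, 650, 950]
print(check_user_experience(samples))
```

A real deployment would feed this from collected telemetry rather than a literal list, but the ordering of concerns is the point: the user-experience check fires first, and infrastructure metrics explain it afterward.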
As the director of cloud architecture for a global online media giant put it, “There’s no one way to deploy or monitor apps in the cloud, but what’s revolutionary is knowing what we don’t have to monitor anymore. All our new sites are instrumented upfront — not as an afterthought — with user experience being the top metric we watch.”

- **Embed monitoring into your cloud applications early in the development cycle.** Application performance monitoring (APM) becomes inseparable from infrastructure performance management in the cloud, but legacy APM solutions (born to instrument and monitor app servers and Java EE code, mostly) can be complex, expensive, and time-consuming to install and configure. Newer software-as-a-service (SaaS)-based APM solutions such as New Relic aim to simplify the process of instrumenting both web and mobile apps deployed in the cloud (even down to line-of-code detail) with automated deployment from the cloud. ManageEngine’s Applications Manager automatically discovers your Amazon EC2 instances and EBS volumes, monitors for problems, and rolls up performance data into a set of unified dashboards. Leverage analytics tools to correlate metrics and detect emerging performance hotspots faster.¹⁰

- **Automate scaling and failover within and across cloud resource pools.** Use the monitoring data you collect from cloud resources to enable automatic scale-up and scale-down as well as automated recovery across cloud resource pools. Your cloud portfolio should look like an elastic resource pool. Expose scaling options to your cloud users at deployment time via service catalog configuration options. Remember, too, that saving money in the cloud depends on both using resources efficiently and not using them when they are no longer needed. You can offer users a choice of scaling based on triggers (memory utilization, CPU thresholds, etc.) and enforce availability policies by automatically scheduling backups across cloud availability zones, for example.
Solutions like Dell’s Enstratius, RightScale, ServiceMesh, and Scalr let developers define application architectures once, and then automate the tasks to deploy them across various cloud providers and availability zones.

- **Manage service levels across clouds.** Once your hybrid cloud is hosting development teams and production applications, ongoing service optimization is essential to maintain user satisfaction, ensure application performance, and identify service problems before they get out of hand. But with a mix of on-premises and off-premises cloud providers, each with its own performance and availability service levels, you are now responsible not only for tracking performance within clouds but across them as well — and hiding the details from cloud users. Cloud service management starts with a foundation of continuous and proactive performance and availability monitoring, then adds automated event detection, analysis, and remediation. Cloud operations solutions such as BMC’s Cloud Operations Management, CA’s Automation Suite for Clouds, Cisco’s Intelligent Automation for Cloud, Red Hat’s ManageIQ, and VMware’s vCenter Operations Management Suite extend traditional IT performance management solutions to include multiple clouds and help speed time-to-resolution for a range of performance problems.

### Cloud Governance Includes Access Control, Cost Management, And Integration

Cloud computing lets business users consume IT resources themselves. Self-service means that some degree of IT control is forfeited, along with the manual processes that IT has relied on to maintain access controls, limit use, enforce security, and maintain compliance. In the cloud, these governance controls are implemented via policy-based and automated workflows. Policies govern who has access to cloud services, what they can use, how they can modify and integrate services, where they can deploy workloads, and how much they can spend, to name a few.
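One way to picture such policies is as declarative rules evaluated before each deployment: who may access which cloud, where a workload may run, and how much a role may spend. The sketch below is a hypothetical illustration; the roles, cloud names, and caps are invented for the example.

```python
# Illustrative sketch (roles, clouds, and caps are hypothetical): the
# governance checks -- access, placement, and spend -- expressed as
# declarative rules checked before a deployment proceeds.

POLICIES = {
    "developer": {"clouds": {"private", "public-dev"}, "monthly_cap": 1000},
    "analyst":   {"clouds": {"private"},               "monthly_cap": 200},
}

def evaluate(role, target_cloud, projected_spend):
    """Return (allowed, reason); guide users, don't just block them."""
    policy = POLICIES.get(role)
    if policy is None:
        return False, "no policy defined for role"
    if target_cloud not in policy["clouds"]:
        return False, "cloud not approved for role; allowed: %s" % sorted(policy["clouds"])
    if projected_spend > policy["monthly_cap"]:
        return False, "projected spend exceeds role cap"
    return True, "approved"

print(evaluate("developer", "public-dev", 400))
print(evaluate("analyst", "public-dev", 50))
```

Note that each refusal carries a reason pointing to what the user *can* do, in keeping with the guardrails approach rather than a plain denial.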
Use governance policies to guide users on what they can do, not just tell them what they can’t do.¹¹ As you build out your cloud governance practice, be sure to:

- **Control who has access to your cloud and what they can do.** Security in the cloud must be balanced with speed: too many restrictions and users will find a way around them; too few restrictions and you’ve exposed yourself to unacceptable risk. Cloud security starts with identity verification, access controls, and permissions. Extend your existing identity services into the cloud by reaching out to take control of existing public cloud user accounts, security keys, and credentials. Define and enforce role-based access controls across clouds to restrict access to specific pools or types of resources by team or business unit — and expose these limitations clearly in your service catalog.

- **Review current regulatory and corporate compliance constraints.** These determine where you’ll let applications run and where data can reside (public cloud, private cloud, or a combination); you’re ultimately responsible for compliance, whatever the mix. In addition, determine your tolerance for shared multitenant environments either on- or off-premises. Integrate cloud security with your existing LDAP and/or Active Directory infrastructure and include spending caps by role to keep cloud costs under control. Finally, make sure that you actively track, log, and report on all security and compliance events, from configuration changes to placement of sensitive data.

- **Implement cost management to save money and encourage good behavior.** At the heart of cloud economics is a pay-per-use consumption model. Public cloud providers assign transparent prices to every cloud service, and you will need to do the same for your enterprise cloud services, wherever they live.
Clear pricing and ongoing cost management serve three purposes: 1) help users select the most cost-efficient cloud resources for each application or use case; 2) give real-time cost visibility to financial stakeholders in your company; and 3) identify opportunities to save more money over time by decommissioning resources or moving applications to more cost-effective providers. A range of solutions from public-cloud-focused vendors (including CliQr, Cloudability, Cloudyn, Newvem Insight, and RightScale’s PlanForCloud) and traditional IT management vendors can help track cloud usage, spot trends, alert you when utilization thresholds are breached, and suggest migrations.

- **Integrate your clouds into your software development and IT management processes.** Developers increasingly rely on RESTful APIs (built in a representational state transfer style) to build composite cloud-ready applications. Your cloud management processes should also be enabled with these web-friendly APIs. A rich API framework provides the abstraction from underlying cloud providers, hiding the details of any particular cloud API (AWS, OpenStack, CloudStack, or vCloud, for example) to simplify cross-cloud migrations and avoid lock-in. Leverage this API framework to not only connect to multiple cloud providers but to integrate your cloud management capabilities with existing enterprise IT management tools and software delivery life-cycle tool chains. A global sportswear manufacturer told us why it chose Dell’s Enstratius cloud management solution: “We don’t want to manage APIs to our various clouds, and we might want to build our own dashboard [catalog] at some point.
We want users going through us to get to AWS so we can create good habits and eventually offer internal cloud services that can compete with what they get today from AWS.”

**Recommendations**

**Manage the disruption or plan to be disrupted**

If developers continue to manage cloud themselves and don’t see value in central IT’s cloud management capabilities, they will continue to go around I&O teams. Not only will this cut into developer productivity, it will foster the notion that cloud is competition for I&O professionals, something they’d like to avoid or constrain. Don’t let that idea take hold. Reach out and embrace the hybrid cloud opportunity by proving that you can lower the burden on business cloud users wherever they use cloud services and that you have the right skills and tools to make sure that their apps deliver killer user experiences. To get started:

- **Find out what your developers are doing that keeps them from coding.** You will learn the most from your early cloud adopters. Spend time with them to understand how much infrastructure control they actually need and how much they are doing just because they have to. Ask them where they need more visibility and focus your monitoring efforts there. Find out why they chose a particular public cloud so you can establish a baseline for your own cloud operations — what do they like and where are the gaps?

- **Catalog your existing cloud services.** Before you can define your cloud management requirements, you need to pick a starting set of deployment models and determine what level of abstraction is available in each. Which integration APIs are available, what types of infrastructure, which development tools? And how much operational management is provided for you by the cloud platform itself, and what will you have to build or acquire to fill in the gaps?
- **Add cloud management capabilities in increments, looking for quick wins.** Are your cloud developers spinning up hundreds of images and then forgetting to deactivate them and seeing costs soar? Are they creating different templates for build, test, and production? If so, you should focus on service catalog and account management first to help bring some order to the app life cycle and rein in spending. Locate the most painful part of your cloud app life cycle and attack that first, and then make sure to advertise your successes early and often.

- **Recognize that vendor solutions are a work in progress — one size doesn’t fit all today.** Cloud management vendor solutions (at least 40 vendors currently claim to have one) are definitely a moving target. Some purposefully mix cloud building with cloud managing, some are virtualization management tools with a fresh coat of paint, and some work quite well but for only a small subset of clouds. You are unlikely to find one cloud management solution that will have everything you need today, so be ready to do some integration work to pull together best-of-breed features.

**WHAT IT MEANS**

**WHAT DOES IT MANAGEMENT LOOK LIKE IN 2020?**

It’s inevitable that enterprise IT in 2020 will be a hybrid mix of on- and off-premises services. While your particular mix of actual cloud services will vary, it’s unlikely that any enterprise IT shop will still be primarily focused on configuring server, storage, and network devices as a core competency. The shift to business technology and IT-as-a-service is well underway, so you can either ignore it, try to contain it, or embrace it. Start preparing now to manage an IT portfolio based on services that are deployed on demand, automatically, and from an elastic pool of infrastructure, most of which you won’t own.
That means you need to start thinking about applications first: how your developers can build and deploy them faster and link them more easily to a range of existing and new business services and data sources — in-house and outside your enterprise walls. Does that mean your current IT operations skills lose value? No, unless you stick to legacy thinking about your role. You will spend much less time configuring servers and installing management software in the future, but you’ll spend more time extracting meaningful insight from performance metrics and negotiating service provider agreements. The result will be better application performance and IT cost efficiency, two objectives you already have — you will just be achieving them in a different way.

**SUPPLEMENTAL MATERIAL**

**Methodology**

Forrester’s Forrsights Services Survey, Q2 2012 was fielded to 1,058 IT executives and technology decision-makers located in Canada, France, Germany, the UK, and the US from enterprise companies with 1,000 or more employees. This survey is part of Forrester’s Forrsights for Business Technology and was fielded during May and June 2012. LinkedIn Research Network fielded this survey online on behalf of Forrester. Survey respondent incentives include gift certificates and research reports. We have provided exact sample sizes in this report on a question-by-question basis. Each calendar year, Forrester’s Forrsights for Business Technology fields business-to-business technology studies in more than 17 countries spanning North America, Latin America, Europe, and developed and emerging Asia. For quality control, we carefully screen respondents according to job title and function. Forrester’s Forrsights for Business Technology ensures that the final survey population contains only those with significant involvement in the planning, funding, and purchasing of IT products and services.
Additionally, we set quotas for company size (number of employees) and industry as a means of controlling the data distribution and establishing alignment with IT spend calculated by Forrester analysts. Forrsights uses only superior data sources and advanced data-cleaning techniques to ensure the highest data quality. We have illustrated only a portion of survey results in this document. To inquire about receiving full data results for an additional fee, please contact Forrsights@forrester.com or your Forrester account manager.

**ENDNOTES**

1 Source: Forrsights Software Survey, Q4 2011.

2 Source: Forrsights Services Survey, Q2 2012.

3 Developers continue to demand both highly productive tools and transparency and control over the application servers, databases, and other platform layers when needed. The cloud services that succeed for enterprises must strike the right balance between abstraction and control. For a complete discussion of what developers want from the cloud, see the November 19, 2012, “Cloud Keys An Era Of New IT Responsiveness And Efficiency” report.

4 These new cloud administrators are starting fresh with new solutions that are designed to integrate with the public cloud first and the rest of the enterprise second. This means the new cloud admin isn't prioritizing integrating this solution with the help desk, server management tools, a configuration management database (CMDB), or existing ITIL processes. These emerging new administrators are a different animal. See the February 21, 2013, “The Rise Of The New Cloud Admin” report.

5 As you shift your focus from building infrastructure to managing cloud services, you'll need to both teach and learn from your enterprise architecture (EA) and application development and delivery (AD&D) peers. As a team, you'll all need to come up to speed on new cloud application architectures, APIs, and security policies. EA pros will be responsible for balancing the needs of business units against the capabilities of your cloud infrastructure and for laying out the integration canvas on which AD&D pros will implement specific application patterns. See the November 19, 2012, “Cloud Keys An Era Of New IT Responsiveness And Efficiency” report.

6 I&O is in this together with application development and delivery. Both parties must transform their behaviors, but the actions to be taken today are primarily the responsibility of I&O. I&O should start to formulate the road map of what has to be done to change the IT service life-cycle effectiveness and maturity. See the July 21, 2011, “Improving The Ops In DevOps” report.

7 While your business colleagues may think they can buy cloud services and meet all the IT requirements, you know better. Your operational and security requirements don't change, regardless of whether your applications and/or services are on-premises or in the cloud, and if the cloud provider doesn't fully meet your requirements, you have to fill the gap — what Forrester calls the uneven handshake. See the May 29, 2012, “Assess Your Cloud Maturity” report.

8 Test and monitor mobile app user experience continuously. You should not stop testing after you deploy the application. Apps change over time because of updates. Users' needs and expectations change over time as well, causing the performance of apps with older designs to degrade even if the apps haven't changed at all. To plan your mobile app development strategy with end user experience top of mind, see the detailed analysis in the August 7, 2012, “Design Mobile Apps From The Outside In” report.

9 Application performance management is similar to aspirin: It makes your headache disappear, but in some cases, it's only hiding the cause of this headache. By the same token, when I&O pros cure application performance problems and small failures in production, your APM solution may mask deeper problems. To build out a comprehensive APM practice to include cloud and mobility, see the February 27, 2013, “Realize Practical Application Performance Management” report.

10 Let analytics tools do the hard work of correlating and spotting patterns in performance data. You need machines to analyze conditions to invoke the appropriate actions. These actions themselves can be automated. To perform adaptive, full-service cloud automation, you need IT analytics. For a guide to understanding analytics and selecting a solution, see the December 5, 2012, “Turn Big Data Inward With IT Analytics” report.

11 The fastest way to lose a relationship with empowered employees today is to tell them what they cannot do. If you want to engage these leaders, you must first tell them what they can do. Only then will they listen to your guidance on what it takes to engage cloud services responsibly. See the May 18, 2012, “Put Guardrails In Place To Drive Cloud Success” report.

**About Forrester**

A global research and advisory firm, Forrester inspires leaders, informs better decisions, and helps the world’s top companies turn the complexity of change into business advantage. Our research-based insight and objective advice enable IT professionals to lead more successfully within IT and extend their impact beyond the traditional IT organization. Tailored to your individual role, our resources allow you to focus on important business issues — margin, speed, growth — first, technology second.

**FOR MORE INFORMATION**

To find out how Forrester Research can help you be successful every day, please contact the office nearest you, or visit us at www.forrester.com. For a complete list of worldwide locations, visit www.forrester.com/about.
**CLIENT SUPPORT**

For information on hard-copy or electronic reprints, please contact Client Support at +1 866.367.7378, +1 617.613.5730, or clientsupport@forrester.com. We offer quantity discounts and special pricing for academic and nonprofit institutions.

**Forrester Focuses On Infrastructure & Operations Professionals**

You are responsible for identifying — and justifying — which technologies and process changes will help you transform and industrialize your company’s infrastructure and create a more productive, resilient, and effective IT organization. Forrester’s subject-matter expertise and deep understanding of your role will help you create forward-thinking strategies; weigh opportunity against risk; justify decisions; and optimize your individual, team, and corporate performance.

**IAN OLIVER**, client persona representing Infrastructure & Operations Professionals
Autotuning Wavefront Applications for Multicore Multi-GPU Hybrid Architectures

DOI: 10.1145/2560683.2560689 (peer reviewed version)

Siddharth Mohanty, Institute for Computing Systems Architecture, University of Edinburgh, UK (s.mohanty@sms.ed.ac.uk)
Murray Cole, Institute for Computing Systems Architecture, University of Edinburgh, UK (mic@inf.ed.ac.uk)

ABSTRACT

Manual tuning of applications for heterogeneous parallel systems is tedious and complex. Optimizations are often not portable, and the whole process must be repeated when moving to a new system, or sometimes even to a different problem size. Pattern-based programming models provide structure which can assist in the creation of autotuners for such problems. We present a machine learning based auto-tuning framework which partitions the work created by applications which follow the wavefront pattern across systems comprising multicore CPUs and multiple GPU accelerators. The use of a pattern facilitates training on synthetically generated instances.
Exhaustive search space exploration on real applications indicates that correct setting of the tuning factors leads to a maximum of 20x speedup over an optimized sequential baseline, with an average of 7.8x. Our machine learned heuristics obtain 98% of this speed-up, averaged across a range of applications and architectures.

Categories and Subject Descriptors: C.4 [Performance of Systems]: Design Studies; D.1.3 [Programming Techniques]: Concurrent Programming - Parallel programming

Keywords: wavefront pattern, auto-tuning, multi-GPU

1. INTRODUCTION AND BACKGROUND

The advent of heterogeneous systems comprising multicore CPUs and manycore accelerators such as GPUs has increased the computational power available to everyday users, but has come at a price to the application developer and programming toolchains. The developer now has to navigate diverse languages and libraries, and integrate these within single applications. Performance tuning of such applications is more complicated than tuning essentially homogeneous systems. Finding a programming methodology and toolchain which can address these challenges is widely recognized as being of major importance, both academically and industrially [5].

Pattern-oriented parallel programming [12] offers a promising approach to the heterogeneous parallelism challenge, by encapsulating parallel decomposition and distribution behind an API which requires the programmer to code only application specific aspects. This approach not only simplifies the programmer's task but also presents the system with a constrained optimization challenge of choosing between and tuning parameters of a set of candidate, heterogeneous parallelizations. This can provide a basis for performance portability.

We present a case study in the application of this approach. Our selected pattern is the wavefront. Our implementation strategy distributes wavefront applications across systems which incorporate a multicore CPU and multiple GPU accelerators.
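As a concrete point of reference for the discussion that follows, a minimal sequential wavefront computation over an n x n grid can be written as a sweep over anti-diagonals. This is our own sketch, not the paper's library code, and the particular kernel (summing three neighbours) is an arbitrary example of the application-specific part:

```cpp
#include <algorithm>
#include <vector>

// Sketch of a sequential wavefront sweep over an n x n grid. Each
// cell (i, j) depends on its north, west and north-west neighbours,
// a typical instance of the recurrences the pattern abstracts.
// Cells on the same anti-diagonal i + j = d are mutually independent,
// which is the parallelism exploited in the rest of the paper.
std::vector<std::vector<int>> wavefront(int n) {
    std::vector<std::vector<int>> a(n, std::vector<int>(n, 0));
    a[0][0] = 1;  // seed value at the origin
    // Diagonals d = 1 .. 2n-2; every (i, j) with i + j == d could be
    // computed concurrently within one iteration of the outer loop.
    for (int d = 1; d <= 2 * (n - 1); ++d) {
        for (int i = std::max(0, d - (n - 1)); i <= std::min(d, n - 1); ++i) {
            int j = d - i;
            int north = (i > 0) ? a[i - 1][j] : 0;
            int west  = (j > 0) ? a[i][j - 1] : 0;
            int nw    = (i > 0 && j > 0) ? a[i - 1][j - 1] : 0;
            a[i][j] = north + west + nw;  // application-specific kernel
        }
    }
    return a;
}
```

The inner loop is the data-parallel unit: on a SIMT device, each of its iterations would become an independent work-item.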
In order to better understand the tuning tradeoffs, and to assist in the evaluation of our heuristics, we have performed an exhaustive exploration of an interesting fragment of the tuning space, across a collection of systems comprising a CPU and single or multiple GPUs. Since such an exhaustive search would be impractical in a production system, we have investigated the application of machine-learning strategies to reduce the search time. We have experimented across a range of wavefront applications and heterogeneous systems.

The wavefront pattern [6] abstracts computations which evaluate a class of multidimensional recurrence relations. Figure 1 gives a graphical representation of a two-dimensional wavefront. The values of the relation are computed into a multidimensional array. Computation starts at position (0,0) and propagates to neighboring elements in a series of diagonal bands, resulting from the dependencies inherent in the pattern. This wave-like sweep of computation gives the pattern its name.

For our purposes, the key characteristics of a wavefront instance are as described in table 1. $dim$ is the number of rows in the array. For simplicity we assume square arrays, but this restriction could be lifted straightforwardly. $tsize$ captures the granularity of the computation at each point in the array, which we assume to be regular, as is typically the case. $dsize$ refers to the number of floating point data items at each point in the array, providing a measure of data granularity. These characteristics will form the input parameters to our autotuning framework. Their experimental values will be discussed in section 3.1.1.

Figure 1: (a) Wavefront for a two-dimensional instance of size 4 x 6. (b) The number of concurrently computable elements increases from iteration 0 until maximum parallelism is achieved at iterations 3, 4 and 5. Part (b) of the figure is inspired by [1].

The remainder of this paper is organized as follows.
Section 2 presents our implementation strategy, its tuning points and the trade-offs these create. Section 3 discusses our experimental programme, covering the applications considered, implementation space and overall autotuning strategy. Section 4 discusses the results of our exhaustive evaluation of the tuning space. Section 4.2 evaluates our machine learning strategies used for autotuning. We review related work in section 5 and present our conclusions and future work in section 6.

2. IMPLEMENTATION STRATEGY

Our parallel wavefront execution strategy extends previous work [3] with support for GPU tiling and the use of multiple GPUs. In a wavefront, data point computation time is roughly homogeneous, so maximum parallelism occurs at the diagonal. Within a diagonal, computation of each data point is independent, hence overall diagonal computation is data parallel. The Single Instruction Multiple Thread (SIMT) constraints of the GPU architecture are thus satisfied by the diagonal major representation of data, and successive diagonals can be offloaded onto a GPU. However, it is intuitively clear that this is only beneficial for diagonals of sufficient size and/or computational granularity to amortize the costs of transferring data to and from the device and of initializing execution. Determining these diagonals is a machine and application dependent tuning criterion.

For the remaining data points, CPU computation is preferable, and it is a common optimization to partition this space into rectangular tiles, computing all points in a tile sequentially in order to benefit from cache re-use. Optimal selection of tile size is also machine and problem dependent [10, 13]. Tiling within a GPU [1] reduces global memory access within the GPU and leads to local cache reuse, besides invoking fewer kernel calls from the host CPU. GPU tiles map to work-groups in OpenCL and the elements within the tile map to work-items or GPU threads.
Within a work group, the work items have to be synchronized to follow the wavefront pattern. This introduces an overhead. The GPU tile size (our 'gpu-tile') tunable parameter is restricted by hardware and problem size.

Our single-GPU parallel implementation strategy therefore has three phases and three tunable parameters: the number of diagonals to offload onto a GPU (or 'band') and the tile sizes of the CPU and GPU ('cpu-tile' and 'gpu-tile'). In the first phase, tiled parallel computation proceeds using all cores of the CPU. In the second phase, execution switches to the GPU, where it proceeds, possibly tiled, diagonal by diagonal. In the third phase, computation reverts to the CPU and is completed in tiled parallel fashion. This implementation strategy is illustrated in figure 2. The second phase, or in principle the first and third phases, may be null. In the latter case, computation is carried out entirely within the GPU.

The presence of multiple GPUs introduces two further tuning parameters. We must decide how many GPUs to exploit (tuning parameter 'gpu-count'). Furthermore, partitioning data among multiple GPUs is non-trivial and communication among GPUs is expensive. Wavefront dependencies force data in the border regions (or 'halo') of partitioned diagonals to be shared among the GPUs. This is shown for two GPUs in figure 3. As successive partitioned diagonals within each GPU get computed, their border data becomes stale. This necessitates halo exchanges (or 'swaps') between the neighbouring GPUs, depending on the extent of overlap or halo size. Each time this happens, data elements have to be first transferred to the host (CPU) memory and then transferred to the respective destination GPUs. The overhead from data communication mandates minimising communication between GPUs. However, increasing halo size causes more redundant computation. Thus halo size is our fifth tunable parameter.
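The three-phase split just described can be sketched as follows. This is our own illustration of what the band parameter means for an n x n grid, not the authors' library code:

```cpp
#include <utility>

// For an n x n grid there are 2n-1 anti-diagonals, indexed 0 .. 2n-2,
// with the main (longest) diagonal at index n-1. A band of b offloads
// the main diagonal plus b diagonals on each side (2b+1 in total);
// band = -1 keeps the whole computation on the CPU.
// Returns the inclusive range [first, last] of GPU-phase diagonals,
// or {-1, -1} when the GPU phase is null.
std::pair<int, int> gpu_phase(int n, int band) {
    if (band < 0) return {-1, -1};   // CPU-only: phase two is null
    int mid = n - 1;                 // index of the main diagonal
    int lo = mid - band;
    int hi = mid + band;
    if (lo < 0) lo = 0;              // clamp: CPU phases become null
    if (hi > 2 * n - 2) hi = 2 * n - 2;
    return {lo, hi};
}
```

Phase one covers diagonals before the returned range, phase three the diagonals after it; when the band spans all 2n-1 diagonals, both CPU phases vanish.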
To summarise, the tunable parameters in our implementation strategy are as listed in table 2.

<table> <thead> <tr> <th>Parameter</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>cpu-tile</td> <td>side length of the square tiles for CPU tiling</td> </tr> <tr> <td>band</td> <td>number of diagonals on each side of the main diagonal, to be computed on the GPU</td> </tr> <tr> <td>gpu-count</td> <td>number of GPU devices to use</td> </tr> <tr> <td>gpu-tile</td> <td>the GPU equivalent of CPU tiling</td> </tr> <tr> <td>halo</td> <td>size of the halo for dual GPUs</td> </tr> </tbody> </table>

Table 2: Tunable Parameters

Figure 3: The partitioning of three diagonals among two GPUs with subsequent halo regions

These will be the targets of our autotuning framework. In the next subsection we discuss tuning trade-offs. The tunable three phase strategy itself is captured in our library code, using threads to control CPU phases and our own OpenCL harness to control communication with and execution upon the GPU.

2.1 Performance tuning trade-offs

For the wavefront pattern, GPU computation becomes feasible when there is enough parallelism to be exploited. Thus a) the problem size (dim) should be large enough, since smaller sized problems can be computed more quickly on the faster CPU cores, and b) the granularity of task (tsize) should be high, so that computation dominates over the cost of starting a GPU and the communication overhead of transferring data between GPU and CPU. This communication cost naturally increases when the data size (dsize) being transferred increases. Another factor that increases communication cost is the number of GPUs employed. While with a single GPU data is transferred from/to the CPU only twice, dual GPUs have the additional overhead of exchanging neighbouring data between themselves every few iterations (halo swapping). This overhead becomes more expensive if the data size is large, as more time is spent in swapping halos.
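The halo trade-off (frequent swaps when the halo is small, redundant overlap computation when it is large) can be caricatured with a toy cost model. The linear form and the unit costs here are our own assumptions for illustration, not taken from the paper:

```cpp
// Toy cost model for the halo trade-off (purely illustrative).
// A larger halo lets each GPU advance roughly `halo` diagonals
// between exchanges, so swaps become rarer, but the overlap region
// of `halo` border cells is recomputed redundantly on both devices.
double halo_cost(int gpuDiags, int halo,
                 double costPerSwap, double costPerRedundantCell) {
    int swaps = (halo > 0) ? gpuDiags / halo : gpuDiags;
    return swaps * costPerSwap + halo * costPerRedundantCell;
}
```

Even this caricature reproduces the qualitative behaviour: when swaps are expensive relative to the kernel, a larger halo lowers total cost, and as kernel granularity grows the redundant-computation term takes over.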
A reduction in halo swaps is obtained by increasing the halo size. The diagonal major structure of the problem grid in the GPU restricts this halo size to a maximum of the length of the start/end diagonal. Even at maximum size, the advantage gained from fewer swaps has to be traded against redundant computation, which starts affecting performance with increasing granularity of task. Communication cost is also affected by tiling the GPU (gpu-tile), since this reduces the number of kernel calls required but incurs the additional cost of synchronizing work items within each work group. If computation dominates over communication anyway, time spent in kernel calls no longer matters, and tiling would then prove to be counterproductive.

Finally, the type of system affects the performance: a fast GPU coupled to a slow CPU means data will mostly be offloaded to the GPU (unless bandwidth is the bottleneck), leading to higher values of band. In such a system, CPU tiling will have negligible effect, as most of the computation is carried out in the GPU. Likewise, in fast CPU-fast GPU systems, good band values will be correspondingly lower.

3. EXPERIMENTAL PROGRAMME

We now describe our experimental programme. Our overall strategy is presented in figure 4, and is in line with standard applications of machine learning to the tuning of computer systems [11]. Our goals are to understand the relationship between settings of the internally tunable implementation parameters and performance, and to use machine learning techniques to control the automatic setting of these parameters. The first phase of our experimental programme deals with training our model, using the synthetic wavefront application. The second phase applies the learned model to real, previously unseen wavefront applications.

3.1 Training Phase

Training is conducted with a synthetically generated wavefront application. This is parameterizable across a wide range of sizes and granularities.
It is a strength of the pattern-oriented approach that such an approach is feasible, removing the need to find real applications for the training phase.

3.1.1 Parameter Space

In order to gain insights into the shape of the performance space and trade-offs, we first conduct an exhaustive evaluation of our synthetic application, across a range of settings for the input and output parameters, as listed in table 3. dim is straightforward. tsize is measured in units of the execution time of a single iteration of the synthetic kernel function on a single CPU core. The data structure for each element in our synthetic application consists of two int variables and a varying number of floats, controlled by dsize. For example, dsize=5 means the size of each element is 8 + 5×8 = 48 bytes, and so on.

<table> <thead> <tr> <th>Parameter</th> <th>Range</th> </tr> </thead> <tbody> <tr> <td>dim</td> <td>500 to 3100</td> </tr> <tr> <td>tsize</td> <td>10 to 12000</td> </tr> <tr> <td>dsize</td> <td>1, 3, 5</td> </tr> <tr> <td>cpu-tile</td> <td>1, 2, 4, 8, 10</td> </tr> <tr> <td>band</td> <td>-1 to dim-1</td> </tr> <tr> <td>gpu-count</td> <td>0, 1, 2</td> </tr> <tr> <td>halo</td> <td>-1 to 0.5 × (length of first offloaded diagonal)</td> </tr> <tr> <td>gpu-tile</td> <td>1, 4, 8, 11, 16, 21, 25</td> </tr> </tbody> </table>

Table 3: Parameter Ranges

Values of parameters like dim, tsize, band and halo are spaced irregularly, to avoid any cyclic pattern and to incorporate a degree of randomness, since the best performing values are later used in training our learning models. To simplify modelling, we have overlaid the band and halo parameters to encode gpu-count. Thus, since a band of n means that 2n + 1 diagonals in total are assigned to the GPU, a band of -1 means that the GPU is not to be used.
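The element-size arithmetic and the band encoding above amount to two one-liners (our own sketch; the per-element floating point slots are 8 bytes each, which matches the quoted 16- and 48-byte figures for dsize=1 and dsize=5):

```cpp
// Element size in bytes: two 4-byte ints plus dsize 8-byte floating
// point slots, so dsize=1 gives 16 bytes and dsize=5 gives 48 bytes.
int element_bytes(int dsize) { return 2 * 4 + 8 * dsize; }

// Number of diagonals assigned to the GPU for a given band value;
// band = -1 encodes that the GPU is not used at all.
int gpu_diagonals(int band) { return band < 0 ? 0 : 2 * band + 1; }
```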
Larger band values mean that at least one GPU is used, with a non-negative halo size meaning that the gpu-count is 2.

To enable us to explore the parameter space within a reasonable time, we set a threshold limit of 90 seconds on the runtime (rtime) for any execution. This has no impact on our tuning, since any point that exceeds this threshold limit is always a very bad configuration which would not be selected as a training example. We removed the threshold in collecting points for our serial baseline, in order to correctly compute performance improvement.

3.1.2 Autotuning Strategies

We used decision trees to derive our learning model, using training data drawn from the synthetic application. Training sets are created by subsetting the exhaustive search data as follows: firstly, a subset of the problem instances (i.e., by dim, tsize and dsize) is selected by regular sampling; then the best five performance points for these instances (by tunable parameter values) are added to the training set. The intuition is that these should be representative of the good decisions we wish to embed in our models. Initial evaluation is done through cross-validation, meaning evaluation is conducted on instances of the synthetic application which were omitted from the training set at the first step, to avoid overfitting. We explored different configurations of the learning model to obtain test results that were at least 90% accurate. This model was then applied to the real applications. This procedure is repeated independently for each system, in line with a scenario which would see the software trained "in the factory".

During training, we first build a binary SVM based predictor to decide whether or not to exploit parallelism. For those cases in which parallelism is predicted to be beneficial, we then apply and evaluate two machine learning heuristics, based on the M5P Decision Tree and REP Tree [9].
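The training-set construction described above (regular sampling of instances, then the best five configurations per instance) can be sketched as follows; the Run record and the function names are our own, not from the paper:

```cpp
#include <algorithm>
#include <map>
#include <tuple>
#include <vector>

// One row of the exhaustive-search log: a problem instance, a
// configuration of the tunable parameters, and the measured runtime.
struct Run {
    int dim, tsize, dsize;    // instance (input parameters)
    int cpuTile, band, halo;  // configuration (tunable parameters)
    double rtime;             // measured runtime in seconds
};

// Keep every sampleStride-th instance, and for each kept instance
// the bestK fastest configurations, mirroring the "best five
// performance points" selection described in the text.
std::vector<Run> build_training_set(const std::vector<Run>& log,
                                    int sampleStride, int bestK) {
    std::map<std::tuple<int, int, int>, std::vector<Run>> byInstance;
    for (const Run& r : log)
        byInstance[{r.dim, r.tsize, r.dsize}].push_back(r);

    std::vector<Run> train;
    int idx = 0;
    for (auto& [instance, runs] : byInstance) {
        if (idx++ % sampleStride != 0) continue;  // regular sampling
        std::sort(runs.begin(), runs.end(),
                  [](const Run& a, const Run& b) { return a.rtime < b.rtime; });
        for (int k = 0; k < bestK && k < (int)runs.size(); ++k)
            train.push_back(runs[k]);             // best-performing points
    }
    return train;
}
```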
Previous work [3] found simple Linear Regression models lacking, and upon exploring different learning models we found the decision trees to be most accurate in predicting optimal values for our tunable parameters.

3.2 Evaluation Phase

We evaluated the performance of our learned model on two real world wavefront applications. These two applications are summarized below.

3.2.1 Evaluation Application Suite

**Nash Equilibrium** [15]: A game-theoretic problem in economics, characterized by small instances but a very computationally demanding kernel. The internal granularity parameter controls the iteration count of a nested loop.

**Biological Sequence Comparison** [2]: A string alignment problem from Bioinformatics, characterized by very large instances and very fine-grained kernels, varying with the detailed comparisons made.

The input parameter values of these real world applications map to our synthetic scale as follows: one iteration of Nash corresponds to tsize=750 with data granularity dsize=4, while the Biological Sequence Comparison application has tsize=0.5 and dsize=0.

3.3 Platforms

Our three experimental systems are described in table 4. 'HT' stands for hyper-threaded CPU cores and 'CU' refers to the GPU compute units.
<table> <thead> <tr> <th>System</th> <th>Freq (MHz)</th> <th>Mem (GB)</th> <th>GPU</th> <th>GPU Freq (MHz)</th> <th>CU</th> </tr> </thead> <tbody> <tr> <td>i3-540</td> <td>1200</td> <td>4</td> <td>GTX</td> <td>1401</td> <td>15</td> </tr> <tr> <td>i7-2600K</td> <td>1600</td> <td>8</td> <td>4x(GTX)</td> <td>1215</td> <td>16</td> </tr> <tr> <td>i7-3820</td> <td>3601</td> <td>16</td> <td>Tesla C2070 / C2075</td> <td>1147</td> <td>14</td> </tr> </tbody> </table>

Table 4: Experimental Systems

We measure the runtime of the whole program execution using wall clock timers in the host program, averaging across three runs (which exhibited low variance, of less than 0.01).

4. RESULTS AND ANALYSIS

In section 4.1 we investigate the characteristics of the search space created by our synthetic training application, and explore the resulting model. In section 4.2 we evaluate the model on real world applications.

4.1 Training: Exhaustive Search Results

We now present the results of our exhaustive search space exploration of the synthetic application across all three systems.

4.1.1 Optimal performance points

Figure 5 presents a set of four heatmaps for the two multiple GPU systems and two heatmaps for the single GPU system, with all maps having tsize and dim as axes, and plotting the values of band and halo (for multi GPU systems) that result in the fastest execution time. The upper half heat maps correspond to dsize=1 (element size=16 bytes) and the lower half to dsize=5 (element size=48 bytes). In all maps the x-axis is tsize, indicating kernel task granularity, and the y-axis is dim, indicating problem size.

Figure 5: Heatmaps illustrate the band and halo values at the best performing points from our exhaustive search across three systems and element sizes of 16 bytes (dsize=1; 1 float and 2 ints) and 48 bytes (dsize=5; 5 floats and 2 ints). The i3 system is a single GPU system, hence no halo heat map is shown.

From the maps it is clear that computing on the GPU becomes favourable (band>0) when task granularity exceeds a certain threshold, and that this threshold varies depending on the problem size, data size and the hardware. Consider the case of dsize=1 (element size=16 bytes) for the i7 systems with fast CPU cores, where the GPU is used from tsize≥500 and dim≥1900 onwards. This differs from the i3 system with its slower CPU cores, where GPU use becomes feasible at the lower threshold of tsize≥100 and dim≥1100. Apart from the hardware affecting performance parameters, the effect of dsize can be seen in all three systems, where the 48 byte sized elements make GPU use costly, as previously discussed, leading to higher threshold values of tsize≥2000 for dim≥1900 and tsize≥700 for dim≥1100 in the i7 and i3 systems respectively. Note that halo sizes for the multi-GPU systems are higher when tsize values are lower, owing to the trade-off between redundant computation cost and lesser communication cost, as discussed in section 2.1.

We conclude the heatmap observations by noting that GPU tiling was not beneficial in our search space. This was because the tiled GPU implementation performed better than the untiled GPU implementation only in cases where the communication costs dominated over computation costs, tsize < 50. However, in these situations, the CPU-only parallel implementation dominated over any GPU based implementation, due to the additional overhead incurred from starting the GPU.

4.1.2 Comparison with simple schemes

Next we investigate the quality of these heatmap points, by comparing the average speed-up obtained from using these optimal points against the three simple schemes of carrying out computation a) serially in the CPU, b) in parallel across all CPU cores with no GPU phase and c) entirely in the GPU (figure 6).
Figure 6: Bars illustrate the speedup of the heatmap points from figure 5 over the serial, parallel CPU and single GPU baselines.

Figure 7: Average case comparison for the Synthetic Application. The x-axis is dim-tsize, indicating groups of problem sizes whose kernel task granularity varies from 10 to 12K, and the y-axis is rtime, indicating actual runtime. Best is the best exhaustive rtime (ber), AVG is the average rtime from all configurations, S.D. is the standard deviation from the average. dsize refers to the number of floats in our synthetic data structure containing 2 int variables. Total element size = 16 bytes (dsize=1; 1 float and 2 ints) and 48 bytes (dsize=5; 5 floats and 2 ints).

We note that in the case of the i7 systems, on average, doing everything on the GPU is worse than doing everything on the CPU. This is because the fast CPU outperforms the GPU by a large margin for low task granularity points (up to 10x for tsize≤100, dim≤1100).

4.1.3 Average case comparison

The next comparison evaluates the optimal heatmap points against average behaviour. This is seen in detail in figure 7, which presents the best exhaustive runtime (abbreviated to ber) and the runtime (rtime) averaged across all possible combinations of tunable parameters. The figure includes corresponding standard deviations. The x-axis shows groups of dim-tsize, with dim varying from 500 to 2700 and, within each dim group, tsize varying from 10 to 12000. The y-axis is the rtime in seconds. The two halves show the performance across all three systems for element sizes of 16 bytes and 48 bytes respectively.

For dsize=1 (element size=16 bytes), the ber is 1.5-2 times faster than the average. The standard deviation steadily increases from dim=500 to dim=1900, due to the widening gap between the best performing and worst performing points. At dim=2700 there is a sharp drop, as the rtime values exceeded our 90 second threshold. These points were excluded from the average.
In the case of dsize=5 (element size=48 bytes), the gap between ber and average rtime for dim=2700 at tsize=8K, 10K and 12K narrows to just 20%. With higher dsize, the GPU overheads become larger and more points get excluded for exceeding the threshold.

4.1.4 Sensitivity analysis

We now explore how sensitive the best points are to changes in parameter values. Higher sensitivity would indicate that finding these points is challenging, whereas low sensitivity would indicate that simple random methods might suffice. Owing to space limitations, we restrict our discussion of the exact distribution of points to two samples of dim=700 and dim=2700 belonging to the i7-2600K system. Figure 8 shows violin plots (a combination of box-plots and kernel densities) for these examples. We picked these two samples for dsize={1,5} as they are close to the boundary cases in our search space, and they conclusively highlight how differences in problem size and data granularity (and corresponding variation in kernel task granularity within them) impact the search space.

For dim=700 we note that most of the points in tsize=100 to 1K are dispersed around the median value (represented as the white dot), with the best and worst points at the extreme ends. This is due to the best configuration in these cases being all-CPU (see the heatmap in figure 5 showing band=-1 for the i7-2600K where dim=700, tsize≤2K). In that case the tunable parameters are only cpu-tile and dsize, resulting in configurations numbering in the tens instead of thousands. Contrast this with tsize≥2K, and all points in dim=2700, where there are many points less than the median value, as seen from the flat base of each violin. These cases correspond to various combinations of the tunable parameters band, halo and gpu-tile, in addition to cpu-tile. We also observe that in the case of dim=2700, dsize=5, variations in the former three parameters do not affect performance as much as for dim=700.
This is also confirmed by the lower gap between average rtime and ber (see figure 7). However, selecting the worst points in these cases, such as computing on the CPU only with band=-1 when dim=2700, dsize=1 and tsize≥4K, is quite costly (up to 8 times slower). The worst points in these cases are the best points for dim=700, tsize≤2K. Thus, while variation of tunable parameter values away from the best values within one subset of input configurations may not affect performance, it can affect performance in other subsets. We note that the best points in some subsets were the worst ones in others and vice versa, meaning that any attempt to hand code heuristics for each case quickly becomes impractical. The exhaustive search results vindicate our choice to pursue auto-tuning strategies based on machine learning.

4.1.5 The learned model

A fragment of the learned model which predicts the optimum halo values for the i7-2600K system is shown in figure 9. The regression equation (LM1) shows that halo depends on other tunable parameters like band and cpu-tile. This agrees with our intuition, as halo values are a measure of the extent of overlap among partitioned diagonals offloaded onto GPUs. Hence, halo values depend on band values and cpu-tile values, apart from the input parameters of task granularity and data granularity.

Figure 9: i7-2600K system: The M5 pruned model tree for predicting halo values, with one linear model (out of 22) shown. As seen, halo depends on band and cpu-tile values, apart from the input parameters of task granularity and data granularity.

In the exhaustive search we found that gpu-tile values corresponded to either 1 or 0 (meaning a GPU was not employed), so this was a binary decision that was accurately predicted using the REP Tree. cpu-tile and band values, like halo values, were predicted using the M5 pruned tree model.
4.2 Evaluation: Autotuning Results

For the fine grained Smith-Waterman string comparison application, autotuning was trivial, as the band predictions were 100% accurate, i.e. do everything on the CPU. Our learning model had predicted band=-1 for all tsize < 100, across our search space of dim ≤ 3100. Thus, in the context of our search space, only the predicted cpu-tile values differed, and selecting the best points was trivial.

A summary of our auto-tuner's performance for the Nash application is shown in figure 10. This figure describes, for each system, the average optimal speed-up against a sequential baseline found during the exhaustive search of Nash, and the speed-up obtained by our auto-tuner. The super-optimal performance in the case of the i3-540 is explained by the fact that our regression model based tuner is free to select parameter values which lie outside the set of cases explored in the (necessarily finite) full search. The better quality predictions for the i3-540 can be explained by considering that a) it is a single GPU system with only two tunable parameters, band and cpu-tile, i.e. fewer parameter values to predict compared to the multi-GPU systems, and b) its four CPU cores are slow relative to its GPU, meaning most of the data is often offloaded onto the GPU, easing prediction compared to the i7 systems with fast CPU cores.

We conclude this section with a detailed visualization of how our auto-tuning fares against the best exhaustive runtime or 'ber' (figure 11). The rtime after autotuning is slightly lower than the ber for the i3-540 at many points (as discussed above), while it is slightly higher for the i7 systems, as prediction is harder.

5. RELATED WORK

CO2P3S [4] is a wavefront framework that generates parallel programs from user supplied methods and data. However, it is restricted to shared memory architectures and does not employ any optimization techniques for any combination of its application dependent properties.
The wavefront abstraction in [15] targets multicore and distributed systems. However, its tunable parameters are specific to distributed systems. It also employs processes instead of threads, as processes are more adaptable to distributed systems, but their overhead can impact performance. Stencils have similar issues but a different dependency pattern to wavefronts. Autotuning for the stencil pattern has been widely investigated (e.g. [10, 14, 16]). A multi-GPU framework to handle stencils is covered in [18]. A key difference with our implementation is the absence of dependence between elements in a stencil pattern, which means halo swapping is less frequent for stencils distributed over multiple GPUs than for wavefronts. Dynamic autotuning of multi-GPU/multicore CPU systems can also be based on analytical models [17]. However, the problem class considered there does not belong to the dynamic programming class of problems, and the auto-tuning is done without resorting to machine learning. Among dynamic auto-tuning frameworks, the Active Harmony framework [8] uses the greedy or Nelder-Mead algorithm to search a high-dimensional space, and the tuning results are then treated as new experience to update the data characteristics database for future reference. Performance models for wavefront applications on GPU-enhanced HPC systems are presented in [7]. Machine learning techniques have been successfully employed to efficiently explore the CPU-GPU optimization space in [11], though there the decision tree models were used to select either a multi-core CPU or a GPU implementation, not a hybrid CPU + multi-GPU setup. 6. CONCLUSIONS AND FUTURE WORK We have presented a framework that successfully encapsulates decomposition and distribution of wavefront computations across CPU cores and GPUs while automatically selecting high quality configurations with respect to problem size, data size and kernel task granularity. 
We demonstrated that well-chosen settings for the number of diagonals to be offloaded (band) and the length of overlap of computation between GPUs (halo) can produce significant improvements in performance, while tiling inside the GPUs (gpu-tile) did not affect performance within our simple search space. Correspondingly, poorly chosen settings resulted in performance far from optimal. Our decision-tree based auto-tuners were modelled on training data from instances of a synthetic application. They successfully predicted the optimal values for the various tunable parameters for the fine-grained Biological Sequence Comparison and coarse-grained Nash wavefront applications, across three different systems, finding an average of 98% of the performance achieved by an exhaustive search. In future work we plan to extend our framework to incorporate other dynamic programming problems, beyond simple wavefronts, such as the 0/1 knapsack problem [19]. We aim to enhance our tiled multi-GPU strategy by incorporating more than two GPUs and plan to upgrade our offline auto-tuner to tune at runtime. 7. REFERENCES
Soft computing based technique for accurate effort estimation: A survey Sangeeta Bhandari¹, Parveen Kakkar² ¹HMV, Jalandhar, ²DAVIET, Jalandhar ¹sangeetakapoorbhandari@gmail.com, ²parveenkakkar@rediffmail.com Abstract The global software market has grown exponentially over the past decade, and the cost of developing software has grown significantly. The main reason for this increasing trend in software costs is the labor-intensive nature of the software development process. To effectively manage software projects, it is important to have accurate estimates of the cost and effort involved in software development. The number of project failures and the cases of cost and schedule overrun have been a significant issue for software project managers. Poor estimates have not only led projects to exceed budgets and overrun schedules but, in many cases, to be terminated entirely. Software cost estimation is the set of techniques and procedures that organizations use to arrive at an estimate for proposal bidding, project planning and probability estimates. Accurate estimates mean better planning and efficient use of project resources such as cost, duration and effort for software projects. Efficient software development effort estimation is one of the most demanding tasks in the software industry. Unfortunately, the software industry suffers from inaccurate estimates for projects and, in many cases, an inability to set correct release dates, which leads to low quality of the delivered product. In this paper we explore and analyze different soft computing based techniques employed for software effort estimation based on the COCOMO model (Boehm B.). Keywords: COCOMO, Soft computing, Fuzzy logic, Software cost estimation, Neural network 1. Introduction Software effort estimation is the process of predicting the amount of time (effort) required to build a software system. 
In order to perform a cost-benefit analysis, cost estimation is performed by the client or the developer. Cost estimation is expressed in terms of person-months (PM), which can be translated into actual dollar cost. Estimation carries inherent risk, and this risk leads to obscurity; the different obscurity factors are project complexity, project size, etc. The practice of software cost estimation has been growing rapidly due to its practicality and the demand for it. Today people expect high-quality software at a low cost, which is the main objective of software engineering. Hence many popular cost estimation models such as COCOMO 81, COCOMO II, SLIM, FP, Delphi, the Halstead equation, Bailey-Basili, Doty, and the Anish Mittal model came into existence. These models were created by applying regression analysis and power regression analysis methods to historical data. In spite of the availability of software cost estimation models, accurate estimation of software development cost continues to challenge software engineering researchers. The software industry suffers from inaccurate estimates for projects and, in many cases, an inability to set correct release dates, which leads to low quality of the delivered product. In this paper we explore and analyze different soft computing based techniques employed for software effort estimation based on the COCOMO model (Boehm B.). 2. Software Cost Estimation Process In the early days of computing, software costs constituted a small percentage of the overall computer-based system cost, and an error in estimates of software cost had relatively little impact. Today software is the most expensive element of most computer-based systems. Historically, decomposition techniques that take a divide-and-conquer approach to software project estimation have been used. These techniques decompose the project into major functional units and related software development activities. 
The cost and effort estimation of these subunits is performed in a stepwise fashion. The models proposed for software cost estimation are based on historical data. These are of the form \[ D = f(v_i) \] where \( D \) is the parameter to be estimated (effort, cost, duration, etc.) and \( v_i \) are independent parameters like LOC or function points. Decomposition techniques may make the estimate either by dividing the problem into sub-problems, i.e. problem-based estimation, or by considering the steps of the software development process, i.e. process-based estimation. In problem-based estimation, LOC and function point data are used during software project estimation. The LOC and function point estimation techniques differ in the level of detail required for decomposition. When LOC is used as the estimation variable, decomposition to a considerable level of detail is essential: each and every function to be implemented has to be identified and studied in detail. For function point estimates, rather than focusing on functions, each of the information domain characteristics, i.e. inputs, outputs, data files, external interfaces, and the complexity adjustment values, is estimated. The resultant estimates can then be used to derive an FP value that can be compared with past data and used to generate an estimate. Using historical data, the planner estimates an optimistic, most likely and pessimistic size value for each function, or count for each information domain value. The expected value for the estimation variable \( S \) can then be computed as a weighted average of the optimistic, most likely and pessimistic values, i.e. \[ S = \frac{S_{OPT} + 4S_M + S_{PESS}}{6} \] In process-based estimation, the planner estimates the effort (e.g. person-months) required to accomplish each software development process activity, e.g. analysis, design, coding, etc. Of the two approaches discussed above, problem-based estimation has normally been used. 
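The three-point size estimate above is straightforward to compute. A minimal sketch, with invented sample LOC figures for one function:

```python
def expected_value(s_opt, s_m, s_pess):
    """Three-point estimate: weighted average of the optimistic, most
    likely and pessimistic values, S = (S_OPT + 4*S_M + S_PESS) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical LOC estimates for one function:
# optimistic 4600, most likely 6900, pessimistic 8600 lines
print(expected_value(4600, 6900, 8600))  # → 6800.0
```

The 4x weight on the most likely value follows the beta-distribution style of PERT estimation, pulling the expected value toward the mode.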
These two approaches give us an initial approximation for LOC or FPs. An estimation model for software uses an empirically derived formula to predict effort as a function of LOC or FP. The empirical data that support most estimation models are derived from a limited sample of projects; therefore no estimation model is appropriate for all classes of software and in all development environments. This is the basic reason for inaccurate cost estimation. Various LOC-oriented estimation models found in the literature are: - **Boehm simple model** \[ E = 3.2 \times (KDLOC)^{1.05} \] - **Bailey-Basili model** \[ E = 5.5 + 0.73 \times (KDLOC)^{1.16} \] - **Walston-Felix model** \[ E = 5.2 \times (KDLOC)^{0.91} \] - **Doty model (for KDLOC > 9)** \[ E = 5.288 \times (KDLOC)^{1.047} \] Various FP-oriented models have also been proposed, such as - **Kemerer model** \[ E = 60.62 \times 7.728 \times 10^{-8} \times FP^3 \] - **Matson, Barnett and Mellichamp** \[ E = 585.7 + 15.12 \times FP \] The general form of these models may be represented by \[ E = A + B \times (ev)^C \] As we may observe from these models, each gives a different result for the same number of delivered lines of code or the same number of function points, so these models must be calibrated for a particular context before application. Apart from the above-mentioned models, COCOMO, i.e. the Constructive Cost Model, has been the most widely applied and studied. Boehm described COCOMO as a collection of three variants: the basic model, the intermediate model and the detailed model. The basic COCOMO model computes effort as a function of program size, and it is the same as the single-variable method. \[ \text{Effort} = a \times \text{size}^b \] In the intermediate COCOMO model, effort is calculated using a function of program size and a set of cost drivers or effort multipliers, also called effort adjustment factors. 
\[ \text{Effort} = (a \times \text{size}^b) \times \text{EAF} \] In detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given for each phase of the software life cycle. The phases used in detailed COCOMO are requirements planning and product design, detailed design, code and unit test, and integration testing. \[ \text{Effort} = (a \times \text{size}^b) \times \text{EAF} \times \sum W_i \] for embedded systems: \( a=3.6, b=1.20 \); for organic systems: \( a=2.4, b=1.05 \); for semi-detached systems: \( a=3.0, b=1.12 \). 3. COCOMO II Model Boehm and his colleagues have refined and updated COCOMO into COCOMO II. It consists of the application composition model, the early design model and the post-architecture model. 1) The Early Design Model: It is used to evaluate alternative software system architectures, where unadjusted function points are used for sizing. \[ \text{Effort} = a \times \text{KLOC} \times \text{EAF} \] 2) The Post-Architecture Model: It is used during the actual development and maintenance of a product. The post-architecture model includes a set of 17 cost drivers and a set of 5 factors determining the project's scaling component. \[ \text{Effort} = (a \times \text{size}^b) \times \text{EAF} \times \sum W_i \] where \( a=2.55 \) and \( b=1.01+0.01 \times \sum W_i \), with \( W_i \) the weighted scale factors. Now, in spite of years of research and improvements in the models used for software cost estimation, software estimation remains a tough nut to crack; there are still cases of schedule and cost overruns. Moreover, today software size has become enormous, and so has the importance of cost estimation. Over the last three to four years, new techniques like artificial neural networks, genetic algorithms and fuzzy logic have been employed to deal with the uncertainties in the inputs to the models, since cost estimation has suffered because of these inherent input uncertainties. 
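Using the coefficient values quoted above, the basic and intermediate COCOMO formulas can be sketched as follows; the KLOC and EAF inputs are invented sample values, not data from any of the cited studies:

```python
# Basic COCOMO: Effort = a * size^b, with (a, b) per project mode as
# quoted in the text above.
COEFF = {"organic": (2.4, 1.05),
         "semi-detached": (3.0, 1.12),
         "embedded": (3.6, 1.20)}

def basic_cocomo(kloc, mode):
    a, b = COEFF[mode]
    return a * kloc ** b                    # effort in person-months

def intermediate_cocomo(kloc, mode, eaf):
    """Intermediate COCOMO: basic effort scaled by the effort adjustment
    factor (EAF), the product of the cost-driver effort multipliers."""
    return basic_cocomo(kloc, mode) * eaf

for mode in COEFF:
    print(f"{mode}: {basic_cocomo(32, mode):.1f} PM")
print(f"organic, EAF=1.17: {intermediate_cocomo(32, 'organic', 1.17):.1f} PM")
```

Running the three modes at the same size makes the point from the previous section concrete: the same 32 KLOC yields markedly different effort figures depending on the model and calibration chosen.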
The use of soft computing techniques has been demonstrated by many researchers, and surveying it is the main aim of this paper. The basic point of contention regarding the above-mentioned class of models concerns the knowledge of precise (and numeric) values of the size of code prior to the completion of the project itself. It is very likely that our knowledge regarding the anticipated size of the system, especially at an early stage of the project, will not be detailed and precise; any numeric estimate could be somewhat illusory. Venkatachalam A.R. (1993) was of the opinion that although algorithmic models provide an economical approach to estimating software costs, they suffer from some serious weaknesses. First, the cost and effort estimates derived from different models show significant variations (Saiedian, Band, and Barney, 1992). Such large variations may pose problems to managers in deciding the amount of resources to be committed. Second, the models are based on historical data and hence may not reflect recent developments in programming languages, hardware, and software engineering. So he proposed the use of artificial neural networks for accurate cost estimation. According to Venkatachalam, a back-propagation neural network is most appropriate in this context. A back-prop neural network is organized in layers, with each layer composed of neurons or processing elements and connections (Rumelhart et al. 1986). The first layer, called the input layer, contains neurons that represent the set of input variables. The output layer contains neurons that represent the output variables. When the relationship between the input and output variables is nonlinear, the hidden layer helps in extracting higher-level features and facilitates generalization. Connections between neurons have numerical weights associated with them; the weights are adjusted in the training process by repeatedly feeding examples from the training set. 
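The training loop just outlined, feed an example forward, compute the output error, propagate it back, adjust the weights, can be sketched in a few lines. This is a toy single-input, one-hidden-layer network on invented normalized data, not the 22-input, 2-output network of Venkatachalam's study:

```python
# Minimal back-propagation in pure Python: one input ("project size"),
# a small sigmoid hidden layer, one sigmoid output ("effort"),
# trained by gradient descent on toy normalized data.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# invented training pairs: normalized size -> normalized effort
data = [(0.1, 0.15), (0.4, 0.45), (0.7, 0.8), (0.9, 0.95)]

H, lr = 3, 0.5                              # hidden units, learning rate
w_ih = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b_h = [0.0] * H                                    # hidden biases
w_ho = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b_o = 0.0                                          # output bias

def forward(x):
    hidden = [sigmoid(w * x + b) for w, b in zip(w_ih, b_h)]
    out = sigmoid(sum(w * h for w, h in zip(w_ho, hidden)) + b_o)
    return hidden, out

for _ in range(5000):                       # repeated presentation of examples
    for x, target in data:
        hidden, out = forward(x)
        d_out = (out - target) * out * (1 - out)   # output-layer error signal
        for j in range(H):                  # propagate error, adjust weights
            d_hid = d_out * w_ho[j] * hidden[j] * (1 - hidden[j])
            w_ho[j] -= lr * d_out * hidden[j]
            w_ih[j] -= lr * d_hid * x
            b_h[j] -= lr * d_hid
        b_o -= lr * d_out
```

After training, outputs track the increasing size-effort relationship in the toy data; a real effort estimator would use the project feature vectors and tolerance-based stopping described in the text.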
Each neuron has an activation level, specified by continuous or discrete values. The value (called internal activation) coming into a neuron in the hidden or output layers is typically the sum of each incoming activation level times its respective connection weight. In the back-prop network paradigm, all the connection weights are assumed to be responsible for the output error. Error is defined to be the difference between the network's estimated output or predicted value and the corresponding observed output value. The error values are calculated at the output layer, propagated to previous layers, and used for adjusting the connection weights. The training process consists of repeatedly feeding input and output data from empirical observations, propagating the error values, and adjusting the connection weights until the error values fall below a user-specified tolerance level. In his work, Venkatachalam A.R. used a back-prop neural network constructed with 22 input nodes and 2 output nodes. The input nodes represent the distinguishing features of software projects, and the output nodes represent the effort required in terms of person-months and the development time required to complete the project. The projects considered for this research were taken from the COCOMO database (Boehm, 1981). The input nodes represent the various factors affecting the software cost, i.e. type of project (business, process control, scientific, etc.), programming language used in the project, required software reliability, database size, product complexity, etc. According to W. Pedrycz et al. (1999), it is more appropriate to view an estimate of the size of code as a fuzzy set. They have used a triangular fuzzy number, viz. a fuzzy set defined in $\mathbb{R}$ with a piecewise-linear membership function. 
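Such a triangular membership function, with a modal value m and lower/upper bounds a and b outside which membership is zero, is a simple piecewise-linear sketch; the KLOC figures below are invented sample values:

```python
def triangular_membership(x, a, m, b):
    """Piecewise-linear membership of a triangular fuzzy number:
    0 at/below the lower bound a, rising to 1 at the modal value m,
    falling back to 0 at/above the upper bound b."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

# hypothetical anticipated system size in KLOC:
# at least 20, most likely 35, at most 60
print(triangular_membership(35.0, 20, 35, 60))   # → 1.0
print(triangular_membership(27.5, 20, 35, 60))   # → 0.5
```

Each candidate size thus carries a degree of possibility rather than being a single crisp number, which is the representational shift the fuzzy-set view argues for.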
A triangular fuzzy number is described by three parameters: its modal value (m) and two bounds (lower and upper, “a” and “b”, respectively) beyond which the value of the membership function is assumed to be zero. The lower bound expresses a minimum possible size of the system, whereas the upper bound stands for the maximal anticipated size of the software system. The calculus of fuzzy numbers has been used to calculate effort for the given triangular fuzzy set. This representation makes it possible to envision each data item in the form of multiple possibilities rather than discrete values. Another problem with the existing cost estimation models, which are regression models, is that in spite of their nonlinear nature they are global: they attempt to capture all data within a single relationship (function). The idea behind granular models and granular computing is that data are captured through a series of local models constructed on the basis of individual information granules. This paper demonstrates the use of fuzzy clustering to deal with the heterogeneous data that naturally arise due to the nature of the problem (promoting a substantial diversity of the software projects); the models produce a granular form of the results that tends to be more informative and comprehensive than a single numeric value. It has been found that the concept of information granularity, and fuzzy sets in particular, plays an important role in making these models more user-friendly. Idri A., Khoshgoftaar T.M., Abran A. (2002) state that the main reason for skepticism about neural networks is their black-box nature. It implies that it is difficult to explain the reason for a particular output given by a neural network. This is a significant weakness, as without the ability to produce comprehensible decisions it is not possible to trust the reliability of neural networks that deal with real-world problems. Idri A., Khoshgoftaar T.M., Abran A. 
have proposed the use of methods that map the neural network to a fuzzy rule-based system. If these rules are comprehensible, then the neural network may also be easily interpreted. They considered a three-layer perceptron neural network with the sigmoid function for the hidden units and the identity function for the output unit. After training and testing the network with the COCOMO'81 dataset, they applied Benitez's method to extract the if-then fuzzy rules from the network. These rules express the information encoded in the architecture of the network. Attarzadeh I., Hock Ow S. (2010) have also proposed a new cost estimation model based on artificial neural networks. The proposed network structure is customized for the COCOMO II post-architecture model. The proposed model was evaluated both with original data from the COCOMO dataset and with an artificial dataset. The results showed that the proposed neural network model produced better software effort estimates. Fuzzy logic has also been used to remove vagueness in the inputs to the cost estimation problem. Swarup Kumar J.N.V.R. et al. (2011) used the concepts of fuzzification and defuzzification for handling ambiguity and impreciseness in the inputs to COCOMO II. They compared the values of effort calculated by eight models, which included the COCOMO Basic Model, COCOMO Inter (Nom), Detailed (Nom), Early Design Model (High), Post Arch Model (H - H), Doty Model, Mittal Model and Swarup Model. The results demonstrate that applying the fuzzy logic method to software effort estimation is an expedient approach to address the problem of obscurity and vagueness existing in software effort drivers. Also, the fuzzy logic model presents better estimation accuracy than the other models. The performance of the proposed software effort estimation model has been evaluated by comparing it against various software cost estimation models. All these models use Mean Relative Error (MRE) as the evaluation criterion. 
For each model, the impact on estimation accuracy was evaluated using the (MRE, MARE) evaluation criteria; the criterion for measuring software effort estimation model performance is the Mean Absolute Relative Error (MARE). Jin-Cherng Lin and Han-Yuan Tzeng (2010) have applied Particle Swarm Optimization (PSO) to estimate software effort by clustering software projects on multiple factors. When using PSO to optimize the parameters of COCOMO, each particle contains two dimensions, X and Y coordinates; X and Y are the A and B parameters of the COCOMO model equation, respectively. PSO helps in finding optimal values of the A and B parameters as the prediction parameters. First, 40 particles with X and Y coordinates in the range between 0 and 1 are randomly generated in a two-dimensional space, and each particle is given a random initial speed. The X and Y coordinates of the particles then act as predictor parameters, with MMRE as the fitness value. Each particle keeps track of the best value found on its path; this solution is called the local optimal solution (Pbest). Each particle also exhibits social behavior, so the swarm tracks the optimal solution found in the current search, called the global optimal solution (Gbest). Although this work was among the first to apply correlation and intelligent computing to software cost estimation, it was difficult to implement and no concrete results were reported. J.N.V.R. Swarup Kumar et al. (2011) have proposed a new model using fuzzy logic to estimate software effort. They used MATLAB to determine the parameters of various cost estimation models. Attarzadeh I., Hock Ow S. (2011) used an adaptive fuzzy logic model to improve the accuracy of software time and cost estimation. They used a two-dimensional Gaussian membership function in the fuzzy model to make software attributes smoother in terms of the range of values. The proposed model was applied to the COCOMO I and NASA98 datasets. 
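The PSO scheme described earlier in this section, particles carrying (A, B), MMRE as fitness, Pbest and Gbest updates, can be sketched as follows. The project data, swarm constants and search ranges below are all invented for illustration, and the project clustering step of the original work is omitted:

```python
# Toy PSO tuning COCOMO's (A, B) in Effort = A * KLOC^B, with MMRE
# (mean magnitude of relative error) over a small invented project set
# as the fitness function.
import random

random.seed(1)

projects = [(10, 26), (25, 70), (46, 135), (83, 260)]  # (KLOC, actual PM)

def mmre(a, b):
    return sum(abs(a * k ** b - act) / act for k, act in projects) / len(projects)

N, W, C1, C2 = 40, 0.7, 1.5, 1.5          # swarm size, inertia, pulls
pos = [[random.random() * 5, random.random() * 0.5 + 0.9] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]               # each particle's best-so-far
gbest = min(pbest, key=lambda p: mmre(*p))  # swarm's best-so-far

for _ in range(200):
    for i in range(N):
        for d in range(2):                # velocity: inertia + pulls
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if mmre(*pos[i]) < mmre(*pbest[i]):
            pbest[i] = pos[i][:]          # update local optimum (Pbest)
    gbest = min(pbest + [gbest], key=lambda p: mmre(*p))  # global (Gbest)

print(f"A={gbest[0]:.2f}, B={gbest[1]:.2f}, MMRE={mmre(*gbest):.3f}")
```

With the cognitive (C1) and social (C2) pulls balanced and inertia below 1, the swarm contracts around the best-fitting (A, B) pair for the given project history.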
The evaluation of the obtained results using the mean magnitude of relative error showed the superiority of the fl-COCOMO model over the original COCOMO. 4. Conclusion The most important issue in software project management is accurate and reliable estimation of software development effort and cost. This is especially important in the early phases of software development, so that the manager may commit resources for on-time delivery of the software. Software attributes like LOC and function points, on which traditional cost estimation models are based, have properties of uncertainty and vagueness. A software cost estimation model based on soft computing techniques can overcome the uncertainty and vagueness of software attributes. However, as various studies have shown, determining suitable fuzzy rule sets for the fuzzy inference system, and a suitable architecture in the case of a neural network, plays an important role in arriving at accurate and reliable software cost estimates. The accuracy of the proposed models is still an issue, but the direction of applying soft computing techniques to software cost estimation seems promising and must be carried further. References: 7. Jin-Cherng Lin, Han-Yuan Tzeng (2010), Dept. of Computer Science & Engineering, Tatung University, Taipei, Taiwan, *Applying Particle Swarm Optimization to Estimate Software Effort by Multiple Factors Software Project Clustering*, IEEE.
Using pattern languages in participatory design DEARDEN, Andy <http://orcid.org/0000-0002-5706-5978>, FINLAY, J., ALLGAR, E. and MCMANUS, B. Available from Sheffield Hallam University Research Archive (SHURA) at: http://shura.shu.ac.uk/3/ ABSTRACT In this paper, we examine the contribution that pattern languages could make to user participation in the design of interactive systems, and we report on our experiences of using pattern languages in this way. In recent years, there has been a growing interest in the use of patterns and pattern languages in the design of interactive systems. Pattern languages were originally developed by the architect, Christopher Alexander, both as a way of understanding the nature of building designs that promote a ‘humane’ or living built environment; and as a practical tool to aid in participatory design of buildings. Our experience suggests that pattern languages do have considerable potential to support participatory design in HCI, but that many pragmatic issues remain to be resolved. INTRODUCTION The pattern language concept was originally developed, by the architect Christopher Alexander and his colleagues, both as a theoretical account of the properties of a humane, or ‘living’, built environment [2, 3, 5] and as a practical tool to aid participatory design processes [1, 4]. Patterns and, to a lesser extent, pattern languages have been widely adopted within software engineering as a form for sharing knowledge about ‘good’ design solutions between professionals [15], but the approach to patterns adopted in software engineering has ignored the participatory aspects of Alexander’s original work. 
In recent years, there has been a growing interest in the use of patterns and pattern languages to support human-computer interaction (HCI) design [8, 9, 31]. Much of this work has been inspired by the perceived success of patterns in software engineering. Of course, the parallels between architectural and interaction design, with their common concern for the design of the human environment, are arguably closer than those between architecture and software engineering. This may suggest that the benefits of developing pattern languages in HCI may be even greater than in software engineering. However, the approach to pattern languages adopted within HCI has followed closely that of software engineering, with the emphasis on sharing knowledge between professionals rather than on processes to support user participation in design. For example, the definition of a pattern language generated at the Interact’99 patterns workshop states: “The goals of an HCI pattern language are to share successful HCI design solutions among HCI professionals…” (our emphasis, as quoted in [9, p39]).

In this paper, we report our experiences of developing and evaluating pattern languages as aids to participatory design of web-based systems. From our studies we have identified a number of important issues that require further examination. These issues may also be of interest in other contexts where externally produced design advice is being used within a participatory design process.

Structure of this paper

In the next section, we introduce the concept of patterns and pattern languages as used in architecture, software engineering and HCI. We then describe the approach we are developing for using pattern languages in practice and how it relates to Alexander’s approach. We then make a number of observations, derived from our investigations, both about the form of pattern languages and about practices using them.
Finally, we discuss relationships with other work, and issues we hope to address in the future.

PATTERNS AND PATTERN LANGUAGES

Pattern Languages in Architecture

Alexander introduces design patterns as follows: “Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice” [2, preface p. x].

Alexander’s pattern language includes patterns addressing different physical scales, ranging from the distribution of cities [2, pattern 1], through the organisation of communal space, e.g. ‘Access to Water’ and ‘Accessible Green’ [2, patterns 25 & 60], to patterns addressing detailed structure in individual rooms, e.g. ‘Windows which Open Wide’ and ‘Alcoves’ [2, patterns 236 & 179]. An intermediate level pattern is ‘Light on Two Sides of Every Room’, for which the problem and solution are stated as:

“When they have a choice, people will always gravitate to those rooms which have light on two sides, and leave the rooms which are lit only from one side unused and empty. ... Therefore: Locate each room so that it has outdoor space outside it on at least two sides, and then place windows in these outdoor walls so that natural light falls into every room from more than one direction.” [2, pattern 159, authors’ emphasis]

For “convenience and clarity” [2, preface, p. x], Alexander defined a specific textual and typographical format for the presentation of a pattern, consisting briefly of: a name and reference number; a picture showing an example of an instantiation of the pattern; a paragraph to set the context; three ‘diamonds’ marking the start of the problem; a concise problem statement (emboldened); the body of the problem, including the empirical background (the motivation for the pattern) and the ‘forces’ involved in the resolution of the problem; a solution (emboldened and preceded by the word ‘Therefore’); a diagram to illustrate the solution; another three ‘diamonds’ to mark the end of the problem; and a paragraph indicating how this pattern relates to other ‘lower’ patterns in the pattern language.

Important features of this format are:

- the combination within each pattern of both abstract descriptions of the solution (in text and graphics) and an illustration of a concrete realisation of the pattern;
- the inclusion of explicit advice recommending a specific built form, rather than simply stating desirable properties of a ‘good’ solution;
- the combination of both the problem–solution pair (emboldened) together with text providing a rationale for the particular solution recommended.

Patterns within the language are related in a hierarchy, with larger-scale patterns indexing patterns at smaller scales that can be used in their realisation. In [2 & 3] Alexander develops an explicit analogy between the concept of a generative grammar for natural human language and pattern languages in architecture: “both ordinary languages and pattern languages are finite combinatory systems which allow us to create an infinite variety of unique combinations, appropriate to different circumstances ..” [3, p187]. The parallels between natural languages and pattern languages also relate to the way that Alexander understood the evolution and development of pattern languages.
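The hierarchy just described, in which larger-scale patterns index the smaller-scale patterns used in their realisation, can be sketched as a simple data structure. This is our own illustrative Python sketch, not part of the paper: the field names (`number`, `name`, `context`, `problem`, `solution`, `lower_patterns`) are labels we have chosen for the elements of Alexander's format, and the abbreviated pattern text is placeholder wording only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    """One Alexandrian pattern; field names are illustrative only."""
    number: int
    name: str
    context: str
    problem: str        # the concise, emboldened problem statement
    solution: str       # the recommendation following 'Therefore'
    lower_patterns: List["Pattern"] = field(default_factory=list)

def patterns_in_scale_order(root: Pattern) -> List[Pattern]:
    """Walk from a large-scale pattern down through the smaller-scale
    patterns that can be used in its realisation (depth-first)."""
    ordered = [root]
    for lower in root.lower_patterns:
        ordered.extend(patterns_in_scale_order(lower))
    return ordered

# Two patterns named in the paper, with abbreviated placeholder text.
alcoves = Pattern(179, "Alcoves", "within a room",
                  "a single undifferentiated space",
                  "provide small alcoves off the main space")
light = Pattern(159, "Light on Two Sides of Every Room", "locating rooms",
                "rooms lit from only one side are left unused",
                "give each room outdoor space on at least two sides",
                lower_patterns=[alcoves])

print([p.name for p in patterns_in_scale_order(light)])
```

Traversing from a chosen starting pattern in this way mirrors how the language is used generatively: the larger pattern indexes the smaller ones relevant to its realisation.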
Alexander viewed pattern languages as shared cultural artefacts, reflecting the practices of the communities that developed them. He interpreted the development of design languages by professional communities, in ways that excluded the users of buildings, as part of what he viewed as the failure of modern architecture. One effect of this was that: “Specific patterns, like, for instance, the light on two sides pattern, vanish from people’s knowledge about building … And those few patterns which do remain within our languages becomes (sic.) degenerate and stupid.” [3, p235]. Thus he claims: “So long as the people of a society are separated from the language which is being used to shape their buildings, the buildings cannot be alive. If we want a language which is deep and powerful, we can only have it under conditions where thousands of people are using the same language, exploring it, making it deeper all the time. And this can only happen when the languages are shared.” [3, p241, 242].

For Alexander, pattern languages were, in part, a way of sharing knowledge about building throughout a society. The concept of local and culturally specific pattern languages can also be found in his work. For example, King [18] discusses the development of a specific pattern language to support the design of a school in Japan, which draws upon the earlier languages, but is specific to the particular community for whom the building is intended. There are parallels to be drawn between Alexander’s description of pattern languages and Ehn & Kyng’s [13] discussions of design as a language game, and the concept of speech communities discussed by Wynn & Novick [32].

**Design patterns in software engineering**

Early in the 1990s, many software engineers were seeking ways in which design knowledge could be represented and shared between practitioners [6].
This led to an interest in the works of Christopher Alexander and resulted in early workshops at OOPSLA [11, 7] and then to the Pattern Languages of Programming conference series [12]. Discussed at these conferences are patterns and pattern languages that address many topics, including the organisation of software projects and teams, design of user interaction, and software architectural design. Perhaps the best known work associated with this series of workshops and conferences is Gamma et al.’s book ‘Design Patterns: Elements of Reusable Object-Oriented Software’ [15]. Gamma et al. state that a pattern has four essential elements: a pattern name, the description of a problem, a solution, and a discussion of the consequences, i.e. costs and benefits, of applying the pattern. Examples of object-oriented design patterns include ‘Observer’ (a generalisation of the familiar ‘model-view-controller’ architecture for user interface construction), and ‘Command’ (a software design to implement undoability). Although Gamma et al.’s patterns do contain cross-references to each other, the patterns do not form a generative language. Rather, the authors refer to their collection as a “catalog”. Unlike Alexander’s pattern language, which has a specific starting point (a root node within a graph of patterns), finding a pattern in Gamma et al.’s catalogue assumes an initial search process. Coplien & Schmidt [12] discuss the differences between pattern languages and pattern catalogues in software engineering.

Patterns and Pattern Languages in HCI

HCI has seen examples both of pattern catalogues [16, 29] and of pattern languages [9, 27]. Whereas software engineering patterns generally describe the structure and execution of software, for example identifying classes and messages between objects, HCI patterns describe properties and behaviours of interactive systems that can be perceived by users.
For example, one pattern from Tidwell’s “common ground” language [27] is ‘Progress Indicator’, for which the context, problem and solution are stated as:

“Context: A time consuming process is going on, the results of which are of interest to the user.

Problem: How can the artifact show its current state to the user, so that the user can best understand what is going on and act on that knowledge?

Solution: Show the user a status display of some kind, indicating how far along the process is in real time. If the expected end time is known, or some other relevant quantity (such as the size of a file being downloaded), then always show what proportion of the process has been finished so far, so the user can estimate how much time is left. If no quantities are known – just that the process may take a while – then simply show some indicator that it’s still going on …”

The pattern is illustrated with a picture of a dialogue window, showing a progress indicator for a file transfer.

A natural question for HCI patterns is how they differ from guidelines or heuristics. There is, in one sense, nothing new in patterns [10]. Patterns are an attempt to record principles that are already known to ‘good’ designers. However, patterns combine abstract statements of design principles with: descriptions of the context where the pattern can be applied; concrete illustrations of how the pattern might be realized; discussions of the rationale for the solution chosen; and examination of relevant trade-offs that may need to be considered. Hence patterns represent a particular choice for a way of communicating design advice, and may be regarded as more closely related to the ‘Claims’ work of Carroll & Sutcliffe [25, 26] than they are to work on heuristic evaluation or style-guides. This issue of patterns as a communication medium has been explored by Erickson [14] and Borchers [9].
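As a concrete reading of the ‘Progress Indicator’ advice quoted above, the following minimal Python sketch (our own illustration, not taken from Tidwell's language; the function name and message wording are hypothetical) renders a status message, showing the proportion finished when a relevant quantity is known and a simple activity indicator when it is not:

```python
from typing import Optional

def progress_message(done: int, total: Optional[int]) -> str:
    """Status text following the 'Progress Indicator' advice: if a
    relevant quantity (e.g. the size of a file being downloaded) is
    known, show what proportion has been finished so far; otherwise
    just show that the process is still going on."""
    if total:
        percent = 100 * done // total
        return f"Transferred {done} of {total} bytes ({percent}%)"
    return "Working..."

print(progress_message(512, 2048))  # proportion known: user can estimate time left
print(progress_message(512, None))  # quantity unknown: simple activity indicator
```

The branch on whether the end quantity is known corresponds directly to the two cases distinguished in the pattern's solution text.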
Borchers discusses the use of pattern languages to support communication between three domains of expertise in developing multimedia exhibits. He presents a pattern language for the production of blues music, a pattern language for designing interaction with multimedia exhibits, and a language addressing software architecture issues relevant to such exhibits. By encouraging each of these separate disciplinary groups to utilize the pattern languages within design discussions, Borchers promotes patterns as a medium to improve communication across disciplinary boundaries. Erickson [14] takes this position a stage further, speculating on patterns as a possible ‘lingua franca’ (common language) for all design stakeholders. Erickson explicitly recognizes the importance of including users as participants in this conversation. However, Erickson’s work is primarily a speculative discussion of how patterns might contribute to such developments, and he explicitly stated that his ideas had not been applied in practice. Martin et al.’s [19] work presenting findings from ethnographic studies of co-operative work can also be understood as an attempt to exploit the pattern form to aid communication between professional disciplines.

A natural question is whether pattern languages can actually advance active user participation in design. We examine this question in the rest of this paper.

DEVELOPING A PROCESS

In this section, we review Alexander’s approach to using pattern languages, and describe the approach we have adopted for participatory design of websites.

Alexander’s process model

As we have noted, pattern languages in architecture were originally developed as tools to support participatory design. In a series of case-studies, Alexander et al. describe the participatory processes that they sought to develop [1, 4, 5]. Key elements of these processes were:

1. Removal of the separation of roles between designing a building and realizing it on site, which, in Alexander’s view, made it impossible to ensure that the building was sensitive to local contingencies. Instead, a new role of ‘architect

Alexander’s ‘gradual stiffening’ and relates well to work in HCI such as Shipman and McCall’s [24] notion of ‘incremental formalization’.

Using patterns in website design

In order to test whether pattern languages could be used effectively in participatory design of interactive systems, we have developed two pattern languages, each of which deals with a specific class of website.

The first language addresses the design of travel websites. The language was developed by selecting previously published patterns that address the general issue of interactive systems design, and adapting them to reflect the specific functions and needs of a travel website [23]. This language has been used in seven simulated design exercises, in which different users were asked to develop paper prototypes. The users ranged in experience from a retired teacher with no experience of using the web to a trainee web designer. At the start of the session, users were told that following the patterns was not compulsory, and that the illustrations were examples only and not definitive ‘best practice’. Design sessions varied between 1 and 2 hours. After each session, users were interviewed about their reactions to the exercise and to the pattern language.

The second language deals with the design of a web-based learning resource. This language addresses pedagogical, as well as interface design, issues. The pedagogical patterns examine appropriate active learning activities to include in a learning resource, for example collaborative learning, exploratory learning and learning by doing. The interface and web design patterns address issues of structure, layout, navigation and user actions.
This language was used in six simulated design exercises to develop paper prototypes, and in three further extended studies, in which these initial designs were further developed working through iterations of static HTML and then dynamic web designs. All users in this case were lecturers or students, or both, with some experience of web usage but from a range of academic disciplines. An example pattern from the on-line learning language is shown in the appendix.

In both cases, design work using the languages was videotaped to support analysis of the interaction between users, the designer-facilitator and the design artefacts. Based on a preliminary (informal) analysis of the data from these studies, we identified a number of important issues that require further examination. These issues involve questions of both the form of pattern languages, and processes that utilize such languages in participatory design. In the next section we present our observations on the use of pattern languages in design exercises.

ISSUES ARISING FROM THE STUDY

In practical design activities, a pattern language cannot be viewed solely as an abstract information source. We must recognize that pattern languages are instantiated by specific physical artefacts, and the structure of those artefacts may have a significant effect on design activity.

Wording the language

The writers of patterns in software engineering have long recognized the care that must be taken in producing a pattern. In software engineering, patterns are developed through successive processes of drafting and revision within ‘writers workshops’. Meszaros and Doble [20] present a ‘pattern language for pattern writing’ that offers guidance on clarity of expression.
Meszaros & Doble suggest that pattern writers should identify a clear target audience [pattern D1], and then tailor the terminology of the language to that audience [pattern D2], avoiding detailed explanations of terms that will be familiar to this well-defined group. A recognized consequence of this decision is that “The pattern or pattern language may not be understandable to those readers outside the target audience if the terminology is too specialized.” [20, p. 557].

We began with patterns developed for a target audience of other interaction designers, and then made modifications. However, our users were far more diverse in background than this. There were substantial differences in the time spent reading and studying each pattern. Some users appeared to look at the illustrations only, others spent about 20 seconds on each pattern, reading mainly the bold text and looking at the diagrams, whereas some spent as much as 90 seconds reading each pattern in detail. Writing clearly for such a diverse audience presents a significant challenge. It is clear that the ‘designer-facilitator’ has an important role in supporting users, helping them to interpret the patterns, and interpreting users’ statements. We should also be aware of a possible bias towards users who are more comfortable with large amounts of text.

Most of our users appeared to understand the patterns. The fact that the design domain (web pages) was familiar to most of our users was perhaps helpful in this respect. However, we did encounter some problems. One of the patterns we used included the word ‘frames’ in the context of laying out a web page. One user (a trainee web designer) challenged the pattern, arguing against the use of frames. Another user (a lecturer in a non-computing subject) did not recognize the term, and the facilitator had to repair this breakdown by explaining frames as an implementation technique to break up a page into sub-areas.
This problem could be more acute where patterns are used to design systems that are less familiar to users. For example, at the current time, many users will not be familiar with designs and styles for mobile or wearable systems. Writing patterns to support user participation in such design will present a greater challenge.

The layout of individual patterns

In presenting individual patterns we followed the typographic style adopted by Alexander [2] and by Borchers [9]. This style presents a motivating illustration first. In Alexander’s language, this motivating illustration is a photograph of some physical space or object that instantiates the pattern. In Borchers’s language, each pattern is illustrated either by a photograph of a user interacting with a system, or a screen shot of a system that exhibits the pattern. In our travel website language, each pattern was illustrated by a screen-shot of a web page that illustrated the use of the pattern. As with Borchers’s language, our illustrations were the very first element of the pattern, following immediately after the title.

In practice, we found that some users made extensive reference to the illustrations, often without referring to the accompanying text. The users’ heavy reliance on the illustrations has two potential disadvantages. Firstly, the illustrations may give rise to derivative designs, which simply copy “solutions” from the illustration. For example, the pattern “Step by Step” was illustrated by a screenshot from RyanAir.com that used a circle to represent each step of booking a ticket, and most of our users adopted a similar approach. One user even equated the pattern with the example picture, indicating that the ones she found useful were those with the illustration, the “pattern”, which she had incorporated into her design. Secondly, we observed users referring to multiple illustrations from different patterns when developing their designs.
This suggests that if an illustration contains elements that are peripheral to the pattern in which it resides, then users might interpret these elements as recommended practice, even though the pattern author might not wish to recommend these particular decisions. These disadvantages may be exacerbated by the fact that our illustrations were placed in a prominent position in the layout of the patterns.

Some users suggested alternative layouts. These included: placing the problem and solution first, with the explanatory text appearing later; placing screen shot(s) at the end; and using multiple illustrations. In the design sessions, users reported that they read the problem and solution text, and looked at the illustrations, but only a few of our users actually read the explanatory text. Even where users had not had the opportunity to read the patterns in advance, they typically spent less than 30 seconds reading the pattern before continuing with the design exercise, suggesting that they were not reading the explanatory text in depth. One user observed: "The style is ... quite wordy and could be put more succinctly" (Study 2h, User 1). There is clearly a need to reconsider the depth and wording of patterns as well as the layout.

The form of the language

We have experimented with a variety of different physical forms for the pattern language. In the first instance we presented the patterns on single-sided A4 paper. Each pattern was presented on one or two sides of paper, stapled together if necessary. In later experiments, we used double-sided paper, protective plastic wallets and a ring binder (with dividers) to organize the language. It appears to be important to be able to handle each pattern individually. This makes it easier for the designer-facilitator to introduce patterns into the design discussion, either individually or in small sets. It also enables the user to browse through patterns that they have already seen to find ideas that they feel are useful.
During design, users occasionally make reference to information they have previously seen in a pattern, and can indicate this by pointing to an individual pattern, or to a pile of patterns. During design sessions, we noticed that users progressively handled the patterns more and more, occasionally placing patterns that they had used in a pile away from the designer-facilitator’s seat. This may suggest an expression of ‘ownership’ of patterns, which would be a positive indication of user participation. Our results suggest that the physical affordances of the language are significant for participatory design and that, consequently, efforts to organize pattern languages in hypertext may lose important qualities.

USING PATTERN LANGUAGES

Handling the language

Our results indicate that the behaviour of the facilitator is critical to the effective use of the language. Without exception, users felt that the involvement of the facilitator was vital, the following comment being typical: “at first there was a lot of information and it was important to have you there for guidance and reassurance” (Study 2a, User 1). However, as the sessions progressed, the users were more able to navigate through the language and select patterns themselves. This allows the locus of control over the session gradually to shift from facilitator to user.

The results from our first study suggested that an effective approach is for a small number of patterns (typically between one and four) to be presented together. Users are able to read the problems and solutions quickly before continuing with design. This practice can be used to help the user focus on a small number of relevant usability issues whilst developing or reviewing some part of the design. We adopted this approach consistently for our second study. We also found it helpful to verify the user’s understanding after each set of patterns was introduced, asking questions such as ‘what does that pattern suggest to you?’.
In recommending this practice, we should include the proviso that the facilitator must be responsive to user interests. For example, during one session a designer-facilitator is heard to say “you’re jumping ahead, you’re good at this” (Study 1, to User 1) whilst looking for a pattern that was appropriate to the user’s current focus. In another session, the user indicates that they want their students to examine a series of alternative presentation styles in order. In response, the facilitator suggests looking at a group of patterns that deal with ‘step-by-step’ instructions (Study 2a, User 6).

The set of patterns can also be used as a “checklist”, to ensure that all the issues have been discussed. This can occur in two different ways: the list of patterns can be used at the end of a session to check whether all issues have been discussed, and/or the facilitator can use the list to monitor progress, noting when each pattern is used and constantly reflecting on which pattern to introduce next. In comparing the designs produced by users, we found that where the patterns were not explicitly managed and presented by the facilitator to the user, certain issues were overlooked. For example, one pattern for travel websites recommends providing feedback about delays that occur when queries are processed. This issue was only considered when the facilitator specifically introduced the pattern. The same result occurred for the idea of including links to other useful sites (e.g. car-hire & hotel booking).

Breakdowns and repair

During the design sessions, breakdowns in communication occurred on many occasions. In such situations, the facilitator is required to identify and repair the breakdown. We observed such breakdowns at three different levels. At the level of the pattern language artefact, breakdowns may occur where the user misinterprets the intention of a pattern, or of the language.
For example, one user reported that when she was told about the hierarchical organization of the patterns, she became concerned that this was a direction to make her website design hierarchical. Another user became confused about the intent of a pattern: “I’m not really sure what it is advising me to do” (Study 2a, User 1). Often the facilitator can avert such breakdowns by discussing ideas from patterns as they arise. Users should feel able to challenge the advice contained in a pattern. Alexander also encouraged this type of dialogue [4].

A second level of breakdown concerns the organization of the design process. Users may be familiar with other design practices such as brainstorming, use of checklists, or spending time studying a large selection of examples before beginning to produce design ideas. Facilitators need to be aware that users may have previous experiences of design processes that will influence their expectations of the design activity. These expectations can be a source of breakdowns in the design process, and our use of patterns to support participation must itself be negotiable.

Finally, breakdowns can occur at the level of the domain. Our first pattern language was intended to support the design of ‘travel’ websites. Most of the examples used in the language were drawn from rail and air travel sites (e.g. totaljourney.com, theTrainLine.com, RyanAir.com, EasyJet.com and SingaporeAirlines.com). In one design session, the user interprets ‘travel’ in terms of package holidays. During the design session she uses the phrase ‘holiday site’, requests options to select ‘hotel or self-catering’, and wants to see information on ‘transfer time’ from the airport to her hotel. These concerns are not well represented by the language, and the facilitator did not recognize this divergence of interests.
These events illustrate the important role of the facilitator in monitoring the progress of the design session for possible breakdowns, and repairing breakdowns when they occur. Whilst breakdowns and repairs are a natural part of any participatory design process, it may be that the use of a pattern language (or any other external advisory artefact) introduces new potential sources of confusion.

The authority of external design advice

The pattern language embeds design advice in a form that is separable from the facilitator, contrasting with the more typical situation in participatory design where advice is offered verbally by a single named individual. This externalization can have a variety of consequences. On the one hand, users may feel more able to challenge the advice offered by a single named individual. This was an unintentional consequence, but one which has important implications. The issue of how to introduce materials and practices at the start of participatory design sessions is recognized in the literature [21], but our results show that facilitators might influence users’ attitudes to patterns throughout design sessions.

Alexander’s work also highlights issues of the authority associated with patterns. The patterns in [2] are each rated with a number of stars reflecting the authors’ confidence in the correctness and universality of the pattern. In [4] Alexander reports on a conflict in which the users did not agree with a pattern (entrance transition) that he regarded as fundamental. In this case, Alexander insisted that the pattern was adopted in the design, but did so without disadvantaging the users (no family had to sacrifice any of their own choices in order to have this feature). In the end, all users agreed that the feature enhanced their homes. This is an interesting example of the resolution of conflict between user and designer.
In this case the authority of the pattern was high, and therefore the pattern was adopted even though users could not immediately see the benefit. We need to consider how patterns are validated, and how their ‘authority’ might be mediated, as well as developing our practice in encouraging users to challenge and interpret the pattern within their own context.

DISCUSSION

In our research, we are investigating ways of using HCI patterns within participatory design, an approach that we view as consistent with Alexander’s original writings. Our first investigation dealt with an artificial problem, developing paper prototypes for a travel website. On the basis of that initial investigation, we have developed the approach and applied it, with students and lecturers, to the design of on-line learning resources. Our results to date suggest that pattern languages might indeed be useful to support participatory design activities. Overall, users’ responses were positive, and, once they became familiar with the use of the patterns, they reported that they found the patterns helpful. Of course, we must question the extent to which these positive responses can be attributed to the use of the pattern language, as opposed to the experience of paper prototyping or factors relating to the facilitator.

Related work

The majority of previous work on pattern languages in HCI has focused on the problem of identifying and documenting patterns; see [9, 16, 19, 27, 29]. In our work we have explicitly sought to avoid writing new patterns, preferring to investigate the problems of applying patterns in practice. Other researchers are also beginning to investigate these issues [31]. This practical focus places our work in close relationship to work on ‘Tools for Working with Guidelines’ [28]. van Welie et al. [30] suggest that patterns could be superior to guidelines as tools to support design practice, but do not provide detailed evidence.
Henninger [17] discusses the application of patterns to multiple projects in an organizational learning framework. However, he does not examine participatory design, or present analysis of the processes of applying patterns within design. We are not aware of any other work, to date, investigating HCI pattern languages as aids to user participation.

Further work

If we are to realise the full potential of pattern languages to support active user participation in design, our work raises questions of both the pattern language form and facilitation methods that require further work. With regard to the form of patterns, we are concerned about wording, layout and physical affordances of pattern languages. Our results suggest that users did not find the Alexandrian layout particularly accessible. We are exploring different formats, including “cut down” versions and different orderings of text and illustration. We also need to refine the facilitation process to enable users to understand the process and the pattern language and to negotiate solutions suited to their own contexts.

Alexander viewed pattern languages as fluid and evolving through use. In our studies we saw how this might happen through negotiation and discussion with users. However, our evidence also suggests that some users rely (heavily) on the patterns as authoritative guidance. We need to examine ways of validating patterns, and facilitation practices that emphasise interpretation of patterns in the local context, and the possibility of challenging patterns.

Our ongoing work is to investigate these issues with more users and over longer time frames. We are revising the online learning language and will develop and use it with a broader range of educators and students in real design activities. We are also engaged in the development of a medical portal website in collaboration with a group of users.
We are evaluating a range of pattern language formats and hope to apply pattern languages in more realistic scenarios involving groups of stakeholders, rather than small numbers of individuals. Finally, we need to investigate the issue of the quality of the outcomes. Alexander was seeking the "Quality without a Name" [3]. Both the process itself and the products that are developed through it should contribute to improvements in the quality of life of participants. In our future work we hope to examine whether our pattern languages and processes can help to achieve this aim.

ACKNOWLEDGEMENTS

We acknowledge the support of our respective institutions for this research. We would like to thank the participants in our design studies for their cooperation and Kay Plowman for initiating the Travel Web Site Language.

REFERENCES

15. Gamma, E., Helm, R., Johnson, R., and Vlissides, J., 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, USA.

APPENDIX: AN EXAMPLE PATTERN FROM THE ON-LINE LEARNING LANGUAGE

Note: the formatting and typography of this pattern have been adapted from those actually used in our studies for reasons of space and consistency.

CONTROL PANEL (21)

Adapted: Tidwell (1999)

...the user can take actions that affect the existence or state of the whole artifact. A control panel can be used to assist in NAVIGABLE SPACES (16). How should the artifact present these actions? The user should know exactly how to stop or leave this artifact at any time. The user should know what other actions are available. The user may already know what they have to do, but they need to find the corresponding action. The user may need to perform these actions in a hurry, or under stress. Doing these actions accidentally may be disastrous.
Examples: - OK / Apply / Cancel buttons on dialogs - Minimize / Maximize / Quit buttons on Windows application frames Therefore: Group these actions together, label them with words or pictures whose meanings are unmistakable, and put them where the user can easily find them regardless of the current state of the artifact. Use their design and location to make them impossible to confuse with anything else. When using a control panel you may need to consider USING COLOUR (30), VISUAL SYMBOLS (27), and USING GRAPHICS (29). You may want to consider SMALL GROUP OF RELATED THINGS (36). When thinking about the controls to use you may want to consider navigation actions such as: CONTINUE TO NEXT STEP (31), GO BACK ONE STEP (33), and GO BACK TO A SAFE PLACE (32).
Benchmarking Motion Planning Algorithms: An Extensible Infrastructure for Analysis and Visualization

Mark Moll, Ioan A. Șucan, and Lydia E. Kavraki

I. INTRODUCTION

Motion planning is a key problem in robotics concerned with finding a path that satisfies a goal specification subject to constraints. In its simplest form, the solution to this problem consists of finding a path connecting two states, and the only constraint is to avoid collisions. Even for this version of the motion planning problem there is no efficient solution for the general case [1]. The addition of differential constraints on robot motion or more general goal specifications makes motion planning even harder. Given its complexity, most planning algorithms forego completeness and optimality for slightly weaker notions such as resolution completeness or probabilistic completeness [2] and asymptotic optimality. Sampling-based planning algorithms are the most common probabilistically complete algorithms and are widely used on many robot platforms. Within this class of algorithms, many variants have been proposed over the last 20 years, yet there is still no characterization of which algorithms are well-suited for which classes of problems. Below, we will present a benchmarking infrastructure for motion planning algorithms that can be a useful component of such a characterization. The infrastructure is aimed both at end users who want to select a motion planning algorithm that performs best on some problems of interest and at motion planning researchers who want to compare the performance of a new algorithm relative to many other state-of-the-art algorithms. The benchmarking infrastructure consists of three main components (see Figure 1). First, we have created an extensive benchmarking software framework that is included with the Open Motion Planning Library (OMPL, http://ompl.kavrakilab.org), a C++ library that contains implementations of many sampling-based algorithms [3].
One can immediately compare any new planning algorithm to the 29 other planning algorithms that currently exist within OMPL. There is also much flexibility in the types of motion planning problems that can be benchmarked, as discussed in Section II-A. Second, we have defined extensible formats for storing benchmark results. The formats are fairly straightforward, so that other planning libraries could easily produce compatible output. Finally, we have created an interactive, versatile visualization tool for compact presentation of collected benchmark data (see http://plannerarena.org). The tool and underlying database facilitate the analysis of performance across benchmark problems and planners. While the three components described above emphasize generality, we have also created, as an example, a simple command line tool specifically for rigid body motion planning that takes as input a plain text description of a motion planning problem. Benchmarking sampling-based planners is non-trivial for several reasons. First, since these planners rely on sampling, performance cannot be judged from a single run. Instead, benchmarks need to be run repeatedly to obtain a distribution of some performance metric of interest, and simply comparing the means of such distributions may not always be the correct way to assess performance. Second, it is well known that the different sampling strategies employed by sampling-based algorithms typically perform well only for certain classes of problems, but it is difficult to define such classes exactly. Finally, different applications require optimization for different metrics (e.g., path quality versus time of computation), and there is no universal metric to assess performance of planning algorithms across all benchmarks. There have been some attempts in the past to come up with a general infrastructure for comparing different planning algorithms (see, e.g., [4], [5]).
Our work is in the same spirit, but includes an extended and extensible set of metrics, offers higher levels of abstraction, and at the same time provides concrete entry points for end users. Furthermore, we also introduce an extensible logging format that other software can use, as well as a visualization tool. To the best of the authors' knowledge, none of the prior work offered the ability to interactively explore and visualize benchmark results. The MPK software system described in [4] has design goals similar to OMPL's, in that both aim to provide a generic, extensible motion planning library, but MPK appears to be no longer maintained or developed. There has been much work on metrics used for comparing different planning algorithms (see, e.g., [6], [7]). Our benchmarking infrastructure includes many of these metrics.

---
Mark Moll and Lydia E. Kavraki are with the Department of Computer Science at Rice University, Houston, TX 77005, USA. Email: mmoll,kavraki@rice.edu. Ioan A. Șucan is with Google[X], Mountain View, CA, USA. Email: isucan@gmail.com.
---

The contribution of this paper lies not so much in any particular benchmark problem, metric, or planner, but in providing a generic, extensible benchmarking infrastructure that facilitates easy analysis and visualization of replicable benchmark results. Since it is integrated with the widely used and actively developed Open Motion Planning Library, it becomes straightforward to compare any new motion planning algorithm to many other state-of-the-art motion planning algorithms. All relevant information pertaining to how a benchmark was run is stored in a database to enable replicability of results.

II. BENCHMARKING INFRASTRUCTURE

OMPL provides a high level of abstraction for defining motion planning problems. The planning algorithms in OMPL are to a large extent agnostic with respect to the space they are planning in.
Similarly, the benchmarking infrastructure within OMPL allows the user to collect various statistics for different types of motion planning problems. The basic workflow is as follows:

1) The user defines a motion planning problem. This involves defining the state space of the robot, a function that determines which states are valid (e.g., collision-free), the start state of the robot, and the goal. The complete definition of a motion planning problem is contained within a C++ object, which is used to construct a benchmark object.
2) The user specifies which planning algorithms should be used to solve the problem, time and memory limits for each run, and the number of runs for each planner.
3) The benchmark is run. Upon completion, the collected results are saved to a log file. A script is used to add the results in the log file to an SQL database. The results can be queried directly in the database, or explored and visualized interactively through a web site set up for this purpose (http://plannerarena.org).

Below we will discuss these steps in more detail.

A. Defining motion planning problems

The most common benchmark motion planning problems are those where the robot is modeled as a rigid body, due to their simplicity (it is easy for users to intuitively assess performance). We have developed a simple plain-text file format that describes such problems with a number of key-value pairs. Robots and environments are specified by mesh files. The state validity function is in this case hard-coded to be a collision checker. Besides the start and goal positions of the robot, the user can also specify an optimization objective: path length, minimum clearance along the path, or mechanical work. There are several planning algorithms in OMPL that optimize a path with respect to a specified objective. (Others that do not support optimization simply ignore this objective.) It is also possible to specify simple kinodynamic motion planning problems.
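The three-step workflow above can be sketched generically in a few lines. The sketch below is our own stand-in, not OMPL's C++ API: the "planners" are placeholder functions, and each run is recorded as one result row, mirroring how one run corresponds to one entry in the benchmark log.

```python
import random

# Generic sketch of the benchmark loop from steps 1-3 above: each planner
# is run several times on the same problem, and every run is recorded as
# one result row. The planner functions are illustrative stand-ins.
def fake_rrt(seed):
    random.seed(seed)
    # A real run would plan and report solve time, status, path length, etc.
    return {"time": random.uniform(0.1, 2.0), "solved": True}

def benchmark(planners, run_count):
    rows = []
    for name, plan in planners.items():
        for run in range(run_count):
            result = plan(seed=run)
            rows.append({"planner": name, "run": run, **result})
    return rows

rows = benchmark({"RRT": fake_rrt, "RRT-Connect": fake_rrt}, run_count=5)
print(len(rows))  # 2 planners x 5 runs = 10 result rows
```

In a real setup, the rows would be written to a log file and then imported into the SQL database described below.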
OMPL.app, the application layer on top of the core OMPL library, predefines the following systems that can be used: a first-order car, a second-order car, a blimp, and a quadrotor. We have not developed controllers or steering functions for these systems, and kinodynamic planners in OMPL fall back in such cases on sampling random controls. This makes planning for these systems extremely challenging. (If controllers are available, then OMPL can use them.) With a few lines of code, the command line tool can be modified to allow new planning algorithms or new types of planning problems to be specified in the configuration files. The benchmark configuration files can be created with the GUI included with OMPL.app. A user can load meshes in a large variety of formats, define start and goal states, try to solve the problem with different planners, and save the configuration file. The user can also visualize the tree/graph produced by a planning algorithm to get a sense of how hard a particular problem is. In the configuration file, the user can specify whether solution paths (all or just the best one) should be saved during benchmarking. Saved paths can be "played back" with the GUI. When defining motion planning problems in code, many of the limitations of the command line tool go away. Arbitrary state spaces and kinodynamic systems can be used, different notions of state validity can be defined, and different optimization objectives can be specified. Additionally, any user-defined planning algorithm can be used. The OMPL application programmer interface (API) imposes only minimal requirements on new planning algorithms. In particular, the API is not limited to sampling-based algorithms (in [8], for example, several non-sampling-based planners are integrated into OMPL). The low barrier to entry has led to numerous contributions of planning algorithms from other groups: OMPL 1.0 includes 29 planning algorithms.
Since all these algorithms use the same low-level functionality for, e.g., collision checking, benchmarking highlights the differences in the motion planning algorithms themselves. The benchmarking facilities in MoveIt! [9] are based on and compatible with those in OMPL. The problem setup is somewhat similar to the OMPL command line tool. In MoveIt!, robots are specified by URDF files, which describe a robot's geometry and kinematics. Motion planning problems to be benchmarked are stored in a database.

B. Specifying planning algorithms

Once a motion planning problem has been specified, the next step is to select one or more planners that are appropriate for the given problem. Within OMPL, planners are divided into two categories: geometric/kinematic planners and kinodynamic planners. The first category can be further divided into two subcategories: planners that terminate when any solution is found and planners that attempt to compute an optimized solution (with respect to a user-specified optimization objective). For optimizing planners, a threshold on optimality can be set to control how close to optimal the solution needs to be. At one extreme, when this threshold is set to 0, planners will run until time runs out. At the other extreme, when the threshold is set to infinity, planners act like the non-optimizing planners and terminate as soon as any solution is found. Typically, a user specifies multiple planners, and by default OMPL will try to make reasonable parameter choices for each.

With the help of a script, the benchmark results stored in the log files can be added to an SQLite3 database, and multiple benchmark log files can be added to the same database. Extracting information directly from the log files or some other custom storage format would require significantly more effort to perform the types of analysis and visualization that the database enables. The database also facilitates distribution of all relevant benchmark data: users can simply transfer one single file. Each row in the runs table corresponds to one run of a particular planner trying to solve a particular motion planning problem. After a run is completed, several attributes are collected, such as the number of generated states (graph_states), the duration of the run (time), the length of the solution path (solution_length), and the clearance along the solution path (solution_clearance). By default solutions are simplified (through a combination of shortcutting and smoothing [10]), which usually significantly improves the solution quality at minimal time cost. Runs can terminate for a variety of reasons: a solution was found, the planner timed out (without any solution or with an approximate solution), or the planner crashed. We use an enum type for this attribute (stored in status), and the labels for each value are stored in the enums table (not shown in Figure 2). The progress table stores information periodically collected during a run. This is done in a separate thread so as to minimize the effect on the run itself. Progress information is currently only available for optimizing planners; it is used to store the cost of the solution found at a particular time. By aggregating progress information from many runs for each planner, we can compare rates of convergence to optimality (see next section). The database schema has been designed with extensibility in mind. Large parts of the schema are optional and other columns can easily be added. This does not require new parsers or additional code. Instead, the log files contain enough structure to allow planners to define their own run and progress properties. Thus, when new log files are added to a database, new columns are automatically added to runs and progress. Planners that do not report on certain properties will just store "N/A" values in the corresponding columns.
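As a rough illustration of this extensibility, the sketch below mimics the behaviour described above with Python's stdlib sqlite3 module: a toy `runs` table gains a new column whenever a log reports a previously unseen property, and planners that do not report the property simply get NULL in that column. The schema and the `add_run` helper are our own simplification, not OMPL's actual code.

```python
import sqlite3

# Minimal stand-in for the benchmark database: a runs table whose schema
# grows as planners report new run properties.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs (plannerid INTEGER, time REAL, status INTEGER)")

def add_run(con, props):
    # Add any columns the table does not have yet, as the log parser would.
    existing = {row[1] for row in con.execute("PRAGMA table_info(runs)")}
    for key in props:
        if key not in existing:
            con.execute(f"ALTER TABLE runs ADD COLUMN {key}")
    cols = ", ".join(props)
    qs = ", ".join("?" for _ in props)
    con.execute(f"INSERT INTO runs ({cols}) VALUES ({qs})", list(props.values()))

add_run(con, {"plannerid": 1, "time": 0.42, "status": 1, "graph_states": 950})
add_run(con, {"plannerid": 2, "time": 1.10, "status": 1})  # no graph_states

for row in con.execute("SELECT plannerid, graph_states FROM runs ORDER BY plannerid"):
    print(row)  # the second planner gets NULL for the unreported property
```

The same idea extends to the progress table: columns appear on demand, so adding a new planner-specific metric requires no parser or schema changes.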
Additional run properties for a new type of planner are easily defined by storing key-value pairs in a dictionary of planner data which is obtained after each run. Additional progress properties are defined by adding a function to a list of callback functions. Log files have a fairly straightforward plain text format that is easy to generate and parse\(^1\). This makes it easy for other motion planning libraries to generate compatible log files, which can then be added to the same type of benchmark database. For example, MoveIt!'s benchmarking capabilities do not directly build on OMPL's benchmark capabilities, yet it can produce compatible benchmark log files. This makes it possible to see how a planning algorithm's performance changes when moving from abstract benchmark problems in OMPL to elaborate real-world settings created with MoveIt! (possibly from experimental data).

---
\(^1\)The complete syntax is specified at http://ompl.kavrakilab.org/benchmark.html.
---

Fig. 3: Sample output produced from a benchmark database by the Planner Arena server for various motion planning problems (but not the ones shown in Fig. 4).

III. INTERACTIVE ANALYSIS OF RESULTS

There are many different ways to visualize benchmark performance. It is nearly impossible to create a tool that can automatically select the "right" visualizations for a given benchmark database. We have therefore created a web site called Planner Arena (http://plannerarena.org), where benchmark data can be uploaded and selected results can be visualized. The web site interface is dynamically constructed based on the contents of the benchmark database: selection widgets are created automatically for the benchmark problems, the performance attributes, the planning algorithms, etc. The code that powers Planner Arena is included in the OMPL distribution and can be run locally to evaluate one's own results privately, or be modified to create custom visualizations.
There are currently three types of plots included on the Planner Arena site: overall performance plots, progress plots, and regression plots. We will describe these plots in more detail below.

Plots of overall performance: The overall performance plots can show how different planners compare on various measures. The most common performance measure is the time it took a planner to find a feasible solution. By default, integer- and real-valued performance metrics (such as solution time) are plotted as box plots, which provide useful summary statistics for each planner: median, confidence intervals, and outliers. However, in some cases visualizing the cumulative distribution function can reveal additional useful information. For instance, from Figure 3(a) one can easily read off the probability that a given planner can solve a particular benchmark within a specified amount of time. For very hard problems where most planners time out without finding a solution, it might be informative to look at solution difference: the gap between the best found solution and the goal (Figure 3(b)). For optimizing planners, it is often more interesting to look at the best solution found within some time limit. The overall performance page allows the user to select a motion planning problem that was benchmarked, a particular benchmark attribute to plot, the OMPL version (in case the database contains data for multiple versions), and the planners to compare. Missing data is ignored. This is very important to keep in mind: if a planner failed to solve a problem 99 times out of 100 runs, then the average solution length is determined by a single run! To make missing data more apparent, a table below the plot shows how many data points there were for each planner and how many of those were missing values. Performance is often hard to judge by one metric alone. Depending on the application, a combination of metrics is often necessary to be able to choose an appropriate planner.
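The summary statistics and the cumulative-distribution reading described above can be sketched in a few lines; the solve times below are invented numbers, with timeouts recorded as None:

```python
import statistics

# Hypothetical solve times (seconds) for one planner over 10 runs;
# None marks runs that timed out without finding a solution.
solve_times = [0.8, 1.2, None, 0.5, 2.5, None, 1.9, 0.7, 3.2, 1.1]

def prob_solved_within(times, budget):
    # Empirical probability of a solution within the budget: the quantity
    # one reads off a cumulative-distribution plot.
    return sum(1 for t in times if t is not None and t <= budget) / len(times)

print(prob_solved_within(solve_times, 1.0))  # 3 of 10 runs solved within 1 s
print(prob_solved_within(solve_times, 3.0))  # 7 of 10 runs solved within 3 s

# Box-plot style summary computed over the successful runs only.
solved = [t for t in solve_times if t is not None]
print(statistics.median(solved))
```

Note that the median of the successful runs alone says nothing about the two timed-out runs, which is exactly why Planner Arena's table of missing values matters.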
For example, in our experience LBKPIECE [11] (one of the planning algorithms in OMPL) tends to be among the fastest planners, but it also tends to produce longer paths. For time-critical applications this may be acceptable, but for applications that place greater importance on short paths another planner might be more appropriate. There will also be exceptions to general trends. As another example, bidirectional planners (such as RRT-Connect [12]) tend to be faster than unidirectional planners (such as RRT), but Figure 3(a) shows that this is not always the case. This underscores the need for a good set of benchmark problems that are representative of different applications.

Progress plots: Some planners in OMPL are not limited to reporting information after a run is completed, but can also periodically report information during a run. In particular, for asymptotically optimal planners it is interesting to look at the convergence rate of the best path cost (e.g., path length). By default, Planner Arena will plot the smoothed mean as well as a 95% confidence interval for the mean (Figure 3(c)). Optionally, individual measurements can be shown as semi-transparent dots, which can be useful to get a better idea of the overall distribution. Analogous to the performance plots, missing data is ignored. During the first couple of seconds of a run, a planner may not yet have found a solution path. Below the progress plot, we therefore plot the number of data points available for a particular planner at each one-second time interval.

Regression plots: Regression plots show how the performance of the same planners changes over different versions of OMPL (Figure 3(d)). This is mostly a tool for developers using OMPL that can help in the identification of changes with unintended side effects on performance. However, it also allows a user to easily compare the performance of the user's modifications to the planners in OMPL with the latest official release.
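The aggregation behind such a progress plot can be sketched as follows; the per-run (time, best-cost) samples and the one-second binning are illustrative assumptions, not OMPL data:

```python
from collections import defaultdict

# Each run periodically reports (elapsed time, best cost found so far).
runs = [
    [(0.5, 10.0), (1.5, 8.0), (2.5, 7.5)],   # progress samples of run 1
    [(0.8, 12.0), (1.2, 9.0), (2.9, 8.5)],   # progress samples of run 2
]

# Group samples into one-second time bins across all runs.
bins = defaultdict(list)
for run in runs:
    for t, cost in run:
        bins[int(t)].append(cost)

# Per-bin mean cost and sample count: the curve and the data-availability
# counts shown below a progress plot.
for second in sorted(bins):
    costs = bins[second]
    print(second, sum(costs) / len(costs), len(costs))
```

A real implementation would also smooth the mean and compute a confidence interval per bin, as Planner Arena does.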
In regression plots, the results are shown as a bar plot with error bars. Any of the plots can be downloaded in two formats: PDF and RData. The PDF format is useful if the plot is more or less "camera-ready" and might just need some touch-ups. The RData file contains both the plot and all the data shown in the plot, and can be loaded into R. The plot can then be completely customized, further analysis can be applied to the data, or the data can be plotted in an entirely different way. The default benchmark database stored on the server currently contains results for nine different benchmark problems. They include simple rigid body problems, but also hard problems specifically designed for optimizing planners (problems that contain several suboptimal decoy homotopy classes), kinodynamic problems, and a multi-robot problem (see Figure 4).

IV. DISCUSSION

We expect that with input from leaders in the motion planning community, as well as extensive simulations and experiments, we can create a suite of motion planning benchmarks. We plan to develop benchmarks along two different directions. First, there are "toy problems" that isolate one of a number of common difficulties that could trip up a motion planning algorithm (such as a very narrow passage or the existence of many false leads). Such benchmarks may provide some insights that lead to algorithmic improvements. Second, we would like to develop a benchmark suite where performance (by some measure) is predictive of performance in more complex real-world scenarios. Other planning libraries can use the same set of benchmark problems. While OMPL could be extended with other planning algorithms, we recognize that for community-wide adoption of benchmarks it is important to adopt standard input and output file formats. The log file format and database schema for storing benchmark results described in this paper are general enough that they can be adopted by other motion planning software.
This would allow for a direct comparison of different implementations of planning algorithms. The Planner Arena web site makes it easy to interactively explore benchmark results. At this point, we do not claim that the benchmarks included in the default database on Planner Arena form some sort of "standard" benchmark set, although they are representative of the types of problems that have been used in prior work [13]. Furthermore, the set of problems we present results for will increase over time.

ACKNOWLEDGMENTS

MM and LEK are supported in part by NSF NRI 1317849. The authors wish to thank Luis Torres for his contributions to the benchmarking capabilities in OMPL.

REFERENCES
ABSTRACT

We present JOG, a framework for developing peephole optimizations and accompanying tests for Java compilers. JOG allows developers to write a peephole optimization as a pattern in Java itself. Such a pattern contains the code before and after the desired transformation defined by the peephole optimization, together with any necessary preconditions, and the pattern can be written in the same way that tests for the optimization are already written in OpenJDK. JOG automatically translates each pattern into C/C++ code as a JIT optimization pass, and generates tests for the optimization. JOG also automatically analyzes the shadow relation between a pair of optimizations, where the effect of the shadowed optimization is overridden by the other. We used JOG to write 162 patterns, including many patterns found in OpenJDK and LLVM, as well as some that we proposed. We opened ten pull requests (PRs) for OpenJDK, introducing new optimizations, removing shadowed optimizations, and adding generated tests for optimizations; nine of the PRs have already been integrated into the master branch of OpenJDK. The demo video for JOG can be found at https://youtu.be/z2q6dhOiogw.

KEYWORDS

Just-in-time compilers, code generation, peephole optimizations

1 INTRODUCTION

Peephole optimizations [11, 13] belong to an essential class of compiler optimizations that examine a few adjacent code instructions or a basic block, known as a window, and make targeted changes to improve performance or reduce the code's size; e.g., \( A + A \) is transformed into \( A \ll 1 \). Peephole optimizations are widely used in popular compilers such as GCC, LLVM, and Java Just-in-Time compilers (Java JIT for short) [2, 9, 16]. Peephole optimizations are typically implemented as compiler passes, such that each pass detects a window and replaces it with an optimized form.
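As a toy illustration of this kind of window rewrite (not JOG's implementation), the sketch below applies the \( A + A \rightarrow A \ll 1 \) transformation from the example above to a small tuple-encoded expression tree of our own devising:

```python
# Toy peephole rewrite on a tiny expression "IR": operators are tuples
# like ("+", lhs, rhs), leaves are plain strings. The encoding and the
# rewrite() helper are illustrative, not JOG's actual pass.
def rewrite(expr):
    if isinstance(expr, tuple):
        op, *args = expr
        args = [rewrite(a) for a in args]       # rewrite subexpressions first
        if op == "+" and len(args) == 2 and args[0] == args[1]:
            return ("<<", args[0], 1)           # A + A  ->  A << 1
        return (op, *args)
    return expr

print(rewrite(("+", "a", "a")))
print(rewrite(("*", ("+", "b", "b"), "c")))  # rewrites the inner window only
```

A real JIT pass performs the same match-and-replace, but on the compiler's internal graph representation rather than on source-level tuples.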
Implementation of an optimization is commonly done in the language in which the compiler itself is implemented (e.g., C/C++ for Java JIT), using the compiler infrastructure, e.g., its internal data structure representation, to manipulate windows. This low-level internal representation is quite different from the actual code (written in Java) being optimized. The mismatch hinders developers from effectively reasoning about windows of interest, because they have to repeatedly map between high-level code (e.g., Java) and low-level code (e.g., C/C++) and data. The mismatch also makes implementation error-prone [7, 8, 19, 24, 26–28]. Alive [10] improves on the traditional approach by introducing patterns, written in a domain specific language (DSL), that manipulate LLVM bitcode. Developers write patterns in the DSL, which are then translated into compiler passes. However, Alive's DSL remains significantly detached from the language in which the compiler is implemented (C++), leading to a steep learning curve, and it lacks support from standard software tools, e.g., syntax highlighting in IDEs. Our key insight is that many peephole optimizations can be expressed within the programming language being optimized, thus avoiding complex patterns that manipulate low-level code representations. In OpenJDK, a significant portion of JIT optimization tests (known as IR tests) are written in Java and incorporate specific code patterns to trigger the optimizations being evaluated [15]. We propose to extend this concept: patterns are used not only to write IR tests but to describe the entire optimization, encompassing both the code before and after the transformation, which in turn implicitly describes the IR tests. We present JOG [25], which enables developers to write peephole optimizations for Java JIT as high-level Java statements. These patterns undergo Java compiler type-checking and are automatically translated into compiler passes (in C/C++) by JOG.
Furthermore, JOG can automatically generate IR tests (in Java) from these patterns. By writing patterns in Java for Java JIT, we ensure the meaningfulness of statement sequences within programs, i.e., that windows can indeed appear in programs (a guarantee not always achieved when working with IRs or compiler abstractions). Our approach also simplifies the rationale behind each peephole optimization, transforming what was once extensive comments or test cases into self-explanatory patterns. Moreover, developers can leverage software engineering tools like IDEs and linters while creating patterns in JOG. Having patterns written in Java also opens the door for future program equivalence checkers [1] compatible with both Java code and bytecode, readily obtained by compiling JOG patterns. The brevity of patterns eases the analysis of relations between optimizations. Java JIT compilers contain a large number of peephole optimizations, and maintenance becomes difficult as new optimizations are added. When developers want to add a new optimization, they have to be careful that its effect is not overridden by some existing optimization. For instance, consider two optimizations, \( X \) and \( Y \): \( X \) transforms \( (a - b) + (c - d) \) into \( (a + c) - (b + d) \), and \( Y \) transforms \( (a - b) + (b - c) \) into \( a - c \), with variables a, b, c, and d. Notably, any expression matching \( (a - b) + (b - c) \) (Y) also matches \( (a - b) + (c - d) \) (X). If X is always applied before Y in a compiler pass, the effect of X will shadow Y. JOG can automatically report this shadow relation.

Figure 1: An example IR test available in OpenJDK (SHA fd910f7) [18].

Using JOG, we wrote 162 optimization patterns: 68 from OpenJDK, 92 adapted from LLVM, and two entirely new. Most OpenJDK patterns were taken from existing tests or hand-written examples in C/C++ comments.
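The X and Y rewrites above can be sanity-checked outside the compiler. The standalone snippet below (illustrative only, not JOG output) verifies over random ints that both rewrites are semantics-preserving, and that on the overlap, expressions of the shape (a - b) + (b - c), applying X (with c := b, d := c) yields the same value as applying Y, so X shadowing Y loses no correctness.

```java
// Illustrative check, not JOG output: verify the rewrites X and Y
// from the text over sampled ints. Java int arithmetic is exact
// modulo 2^32, so these ring identities hold even under overflow.
public class ShadowDemo {
    // X: (a - b) + (c - d) -> (a + c) - (b + d)
    static boolean xAgrees(int a, int b, int c, int d) {
        return ((a - b) + (c - d)) == ((a + c) - (b + d));
    }

    // Y: (a - b) + (b - c) -> a - c
    static boolean yAgrees(int a, int b, int c) {
        return ((a - b) + (b - c)) == (a - c);
    }

    public static void main(String[] args) {
        java.util.Random rnd = new java.util.Random(42);
        for (int i = 0; i < 1000; i++) {
            int a = rnd.nextInt(), b = rnd.nextInt(), c = rnd.nextInt(), d = rnd.nextInt();
            if (!xAgrees(a, b, c, d)) throw new AssertionError("X mismatch");
            if (!yAgrees(a, b, c)) throw new AssertionError("Y mismatch");
            // Overlap: an expression matching Y also matches X with c := b,
            // d := c; X then produces (a + b) - (b + c), which equals a - c.
            if (((a + b) - (b + c)) != (a - c)) throw new AssertionError("overlap mismatch");
        }
        System.out.println("X and Y verified on 1000 random samples");
    }
}
```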
Our most complex pattern is just 115 characters, compared to its 462-character C/C++ counterpart that manipulates the IR. JOG is open source and publicly available at https://github.com/EngineeringSoftware/jog.

2 EXAMPLE

Figure 1 shows a test written using the IR test framework [17], which is the recommended approach to testing JIT peephole optimizations in OpenJDK. The test is expected to compile the annotated (@Test) method test8 and optimize \((a - b) + (c - a)\) to \(c - b\); the expected transformation is written as a comment. The IR shape of the compiled method is checked against certain rules specified using the @IR annotation (lines 2–3). The rules validate that the compiled method does not contain an ADD node (line 2) and contains exactly one SUB node (line 3). Using JOG, developers can write an optimization, i.e., \((a - b) + (c - a) \rightarrow c - b\), in a way that mirrors the existing IR test. In Figure 2a, a pattern written in JOG is a Java method annotated with @Pattern. The method’s parameters (line 2 in Figure 2a) declare variables (a, b, and c), specifying the data type of each as long. Inside the method, two API calls, before((a - b) + (c - a)) (line 3 in Figure 2a) and after(c - b) (line 4 in Figure 2a), define the expressions before and after the optimization. Both calls follow the format of existing IR tests: before((a - b) + (c - a)) matches the code from the existing test, return (a - b) + (c - a); (line 6 in Figure 1), and after(c - b) is taken from the comment // Check (a - b) + (c - a) => (c - b) (line 4 in Figure 1). Moreover, since the pattern and the test follow the same structure, not only does JOG enable developers to write patterns, but it can also automatically generate IR tests from patterns. JOG automatically translates a pattern into C/C++ code for direct inclusion in a JIT optimization pass (Figure 2b).
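The shape of such a generated pass can be sketched on a toy expression tree. Everything below (the Node and Op types, the matcher, the identity comparison standing in for IR node identity) is an invented simplification for illustration; the real generated code manipulates OpenJDK's IR nodes.

```java
// Hypothetical sketch of what a generated peephole pass does:
// match Add(Sub(a, b), Sub(c, a)) and rewrite it to Sub(c, b).
public class ToyPass {
    enum Op { ADD, SUB, LEAF }

    static final class Node {
        final Op op; final Node left, right; final String name;
        Node(Op op, Node l, Node r, String name) { this.op = op; left = l; right = r; this.name = name; }
        static Node leaf(String n) { return new Node(Op.LEAF, null, null, n); }
        static Node add(Node l, Node r) { return new Node(Op.ADD, l, r, null); }
        static Node sub(Node l, Node r) { return new Node(Op.SUB, l, r, null); }
    }

    // Returns the optimized node, or the input unchanged if no match.
    static Node optimize(Node n) {
        if (n.op == Op.ADD                               // (1) addition node
                && n.left.op == Op.SUB                   // (2) left is a - b
                && n.right.op == Op.SUB                  // (3) right is c - a
                && n.left.left == n.right.right) {       // (4) shared operand a
            return Node.sub(n.right.left, n.left.right); // build c - b
        }
        return n;
    }

    public static void main(String[] args) {
        Node a = Node.leaf("a"), b = Node.leaf("b"), c = Node.leaf("c");
        Node opt = optimize(Node.add(Node.sub(a, b), Node.sub(c, a)));
        System.out.println(opt.op + "(" + opt.left.name + ", " + opt.right.name + ")");
        // prints: SUB(c, b)
    }
}
```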
Figure 2c displays hand-written code extracted from OpenJDK, achieving the same JIT peephole optimization to transform \((a - b) + (c - a)\) into \(c - b\). The implementation matches expressions of interest and then returns a new, optimized, equivalent expression. In this example, the matched expression must meet four conditions: (1) it is an addition expression (implicit in line 1 in Figure 2b, because the method belongs to AddINode); (2) its left operand is a subtraction expression \((a - b)\) (line 12 in Figure 2b); (3) its right operand is a subtraction expression \((c - a)\) (line 13 in Figure 2b); and (4) both subtraction expressions share the same operand \((a)\) (line 14 in Figure 2b). Once a match is found, the code constructs the new subtraction expression \((c - b)\) using b and c (line 15 in Figure 2b), reducing the evaluation cost from two subtractions and one addition to a single subtraction. Notably, a bug existed in the OpenJDK code due to incorrect access to the right operand of the right sub-expression (line 8 in Figure 2c), and it took 13 years to discover [19]. If JOG had been used for implementing the optimization, this bug could have been avoided. JOG analyzes the before and after API calls to infer matching conditions and construct new expressions, eventually generating C/C++ code as compiler passes. Figure 2b shows code generated from the pattern in Figure 2a, preserving functionality and avoiding the bug found in the hand-written code shown in Figure 2c.

### 3 TECHNIQUE AND IMPLEMENTATION

Figure 3 shows a high-level overview of the workflow of the JOG framework. In this section, we briefly describe the design and implementation of patterns, translation details, test generation, and shadow relation detection [25].

**Design and implementation of patterns.** As the example in Figure 2a shows, we define the syntax of patterns using a subset of the Java programming language, where each optimization is represented as a Java method annotated with @Pattern.
The parameters of these methods declare the variables used in patterns, of two kinds: constant variables (representing literals, annotated with @Constant) and free variables (representing any expression). We also provide two API methods: void before(int expression), which specifies the expression to match in the pattern, and void after(int expression), which specifies the optimized expression (int can also be long). A valid pattern must contain both a before and an after call in the method body, which may also feature if statements for preconditions and assignments to local variables.

**Translation.** JOG translates patterns into C/C++ code that implements compiler passes for JIT optimizations. JOG starts translation by parsing the expression provided in the before API and constructing an extended abstract syntax tree (eAST) for it. The eAST represents the structure of the IR that matches the expression, which is essentially a directed acyclic graph (DAG). JOG maps identifiers in the pattern to eAST nodes. The same identifiers are reused to construct the eAST for the after API. Figure 4 shows the eASTs constructed from the pattern ADD8 (Figure 2a). Next, JOG creates an if statement whose condition represents the necessary conditions for expression matching. These conditions may check operators, constants, identical identifiers, etc., along with any preconditions specified in the pattern. The “then” branch of the if statement ends with a return statement providing the optimized expression. Finally, JOG prepends the if statement with the proper variable declarations, concluding translation of the pattern. When handling multiple patterns, JOG follows the order specified in the provided file.

**Test generation.** We use the example in Figure 1 to describe how JOG generates an IR test from the pattern in Figure 2a.
The @Test method first declares exactly the same free variables as the pattern (long a, long b, long c), and returns exactly the expression inside the before API of the pattern (return (a - b) + (c - a)). One exception is that when the pattern has a constant variable, JOG substitutes a random number for it. Next, JOG analyzes before and after in the pattern. JOG searches after’s eAST (c - b) to count the number of operators (one SUB), and compares before’s and after’s eASTs to obtain the operators that exist in before but not in after (ADD). JOG then maps these operators to the corresponding IR node types used in IR tests and creates @IR annotations (@IR(counts = IRNode.SUB, "1") and @IR(failOn = IRNode.ADD)).

**Shadowing optimizations.** Consider two optimizations X and Y in an optimization pass, placed sequentially, i.e., X followed by Y. If the set of instructions that Y matches is a subset of the set of instructions that X matches, then Y will never be invoked, because X is always invoked before Y for any matched instructions. In this case, we say X shadows Y, or Y is shadowed by X; e.g., X transforms (a - b) + (c - d) into (a + c) - (b + d), and Y transforms (a - b) + (b - c) into a - c, with variables a, b, c, and d. Given a pair of optimizations expressed as patterns X and Y, JOG phrases the question of whether X shadows Y formally as follows: is every expression E matched by Y also matched by X? JOG then encodes this question as an SMT formula and leverages a constraint solver (Z3 [5]) to decide the shadow relation between the given pair of patterns [25].

### 4 TOOL INSTALLATION AND USAGE

JOG requires JDK 11 or a later version. We describe the installation steps and usage instructions using a Linux system (Ubuntu 20.04) with GNU Bash (version 5.0) as an example. We also provide a docker image that contains a built OpenJDK and the cloned JOG repository, which can be obtained with docker pull zzqut/jog:latest.
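The operator-diffing step of test generation described above can be sketched as follows. This is a simplification that tokenizes small expression strings rather than walking eASTs, and the helper names and brace-style @IR strings are invented for illustration.

```java
// Simplified sketch of JOG's test-generation analysis: count
// operators in the "after" expression and diff against "before"
// to derive @IR-style rules for the generated test.
import java.util.*;

public class IrRuleSketch {
    static Map<String, Integer> countOps(String expr) {
        Map<String, Integer> counts = new HashMap<>();
        for (char ch : expr.toCharArray()) {
            if (ch == '+') counts.merge("ADD", 1, Integer::sum);
            if (ch == '-') counts.merge("SUB", 1, Integer::sum);
        }
        return counts;
    }

    static List<String> irRules(String before, String after) {
        Map<String, Integer> b = countOps(before), a = countOps(after);
        List<String> rules = new ArrayList<>();
        // Operators present in "after": assert their exact count.
        for (Map.Entry<String, Integer> e : a.entrySet())
            rules.add("@IR(counts = {IRNode." + e.getKey() + ", \"" + e.getValue() + "\"})");
        // Operators eliminated by the rewrite: assert they are gone.
        for (String op : b.keySet())
            if (!a.containsKey(op)) rules.add("@IR(failOn = {IRNode." + op + "})");
        return rules;
    }

    public static void main(String[] args) {
        System.out.println(irRules("(a - b) + (c - a)", "c - b"));
    }
}
```

For the running example, the before expression contains one ADD and two SUBs while the after expression contains one SUB, so the sketch emits a count rule for SUB and a failOn rule for ADD, mirroring the annotations in Figure 1.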
**4.1 Installation**

The first step is to clone the JOG repository¹.

```
$ git clone https://github.com/EngineeringSoftware/jog
$ cd jog
```

To install JOG, one can execute the installation script:

```
$ ./tool/install.sh
```

¹We provide the icse24-demo tag for archival purposes.

This script builds the JOG jar. If it completes normally, an executable jar will appear in the tool directory, i.e., ./tool/jog.jar.

Table 1: Summary of patterns that we wrote in JOG.

<table> <thead> <tr> <th>#Patterns</th> <th>#OpenJDK</th> <th>#LLVM</th> <th>#Original</th> <th>#PRs</th> </tr> </thead> <tbody> <tr> <td>162</td> <td>68</td> <td>92</td> <td>2</td> <td>10</td> </tr> </tbody> </table>

**4.2 Usage**

After installation, one can run JOG through the executable jar ./tool/jog.jar. We provide an example file Example.java in the repository, which contains two patterns. To run JOG:

```
$ java -jar tool/jog.jar Example.java
```

This command (a) generates C/C++ code as compiler passes, (b) generates IR tests for the optimizations, and (c) reports any shadow relation between the pair of patterns (optimizations) provided. Figure 5 shows a screenshot of running the command. JOG saves the generated C/C++ code in cpp files with names matching the top-level operator of the before API, e.g., generated code for pattern ADD2 with before((a-b)+(b-c)) is saved into addnode.cpp. To integrate these compiler passes into OpenJDK, one can simply copy the contents of these cpp files into the corresponding files in OpenJDK with identical names.
JOG also generates IR tests as Java files, which can be run directly with the OpenJDK IR testing framework.

5 EVALUATION

We wrote 162 patterns using JOG, as detailed in Table 1, spanning three categories. First, we studied the Ideal methods in OpenJDK, and we identified and rewrote as patterns 68 optimizations found in addnode.cpp, subnode.cpp, and mulnode.cpp within src/hotspot/share/opto/. Second, we studied LLVM's InstCombine pass, inspired by the Alive approach [10], and translated 92 patterns from InstCombineAddSub.cpp and InstCombineAndOrXor.cpp in llvm/lib/Transforms/InstCombine/. Additionally, we proposed two optimizations and wrote them as patterns. We evaluated the code size and complexity [4] of the 68 patterns rewritten from OpenJDK using JOG. Compared to hand-written optimizations, using JOG to write patterns reduced the total character count from 11,000 to 3,987 (by 63.75%) and the total number of identifiers from 1,462 to 692 (by 52.67%). We also evaluated the performance of JOG-generated C/C++ code in comparison to hand-written code as compiler passes using the Renaissance benchmark suite [20]. Overall, we found no significant difference in execution time (averaged over 5 runs) between hand-written code and JOG-generated code. Furthermore, we used JOG to generate tests from the patterns we wrote. We discovered that 10 tests were missing in OpenJDK, indicating that the corresponding optimizations had not been tested. Thus, we submitted a pull request to include these 10 tests in OpenJDK's existing test suites. This pull request has been integrated into the master branch of OpenJDK (SHA fd910f7).

6 RELATED WORK

Notable research explores implementing compiler optimizations using domain specific languages (DSLs).
While prior works [6, 10, 21, 23] have introduced DSLs operating at the intermediate representation level of GCC or LLVM, JOG takes a different approach: JOG prioritizes developer productivity, allowing optimizations to be written in a high-level language (Java) using an approach very similar to the one already used for writing tests for optimizations. Researchers have also explored relations between optimizations, such as detecting non-termination bugs due to repeated application of peephole optimizations [12, 14], and automatic discovery of new optimizations [3, 22].

7 CONCLUSION

Writing peephole optimizations is labor-intensive and error-prone. We introduced JOG, a framework that simplifies development by allowing patterns to be written in Java and then automatically translating them into C/C++ that can be integrated as a JIT optimization pass. JOG can also generate IR optimization tests in Java from the patterns and uncover shadow relations between optimizations. We wrote 162 patterns, including many found in OpenJDK, patterns adapted from LLVM, and some we proposed ourselves.

Table 2: Pull requests we submitted to OpenJDK.

<table> <thead> <tr> <th>Type</th> <th>Id</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td rowspan="5">New optimizations</td> <td>7395</td> <td>Submitted</td> </tr> <tr> <td>7396</td> <td>Merged</td> </tr> <tr> <td>7795</td> <td>Merged</td> </tr> <tr> <td>16333</td> <td>Merged</td> </tr> <tr> <td>16334</td> <td>Merged</td> </tr> <tr> <td>Shadowing optimizations</td> <td>6752</td> <td>Merged</td> </tr> <tr> <td>New tests</td> <td>11049</td> <td>Merged</td> </tr> </tbody> </table>

In total, we submitted ten pull requests (PRs) to OpenJDK (see Table 2).
In addition to the aforementioned PR for missing tests, we identified two shadow relations, where one optimization was found to override the effect of another, and we reported the issue through a PR, which has been confirmed and resolved. Also, eight other PRs introduced new JIT optimizations based on patterns we adapted from LLVM or proposed ourselves. One PR is currently under review, and the remaining seven PRs have been accepted and integrated into the master branch of OpenJDK. In the future, we plan to prepare more PRs for the patterns we already wrote.

ACKNOWLEDGMENTS

We thank Nader Al Awar, Yu Liu, Pengyu Nie, August Shi, Fu-Yao Yu, Jiyang Zhang, and the anonymous reviewers for their comments and feedback. This work is partially supported by a Google Faculty Research Award, a grant from the Army Research Office, and the US National Science Foundation under Grant Nos. CCF-2107291, CCF-2217696, and CCF-2313027.

REFERENCES
The IBM High Performance Computing Toolkit

David Klepacki
Advanced Computing Technology
IBM T.J. Watson Research Center
March 2004

What’s New?

- One consolidated package: “IBM High Performance Computing Toolkit”
- New software:
  - Watson Sparse Matrix Library (WSMP): http://www-users.cs.umn.edu/~agupta/wsmp.html
  - Modular I/O Performance Tool (MIO)
  - MPI Tracer
  - SHMEM Profiler
  - GUI integration tool with source code traceback (PeekPerf)
- New installation method: RPM modules allow queries for existence and version

IBM High Performance Toolkit Software (IBM pSeries / AIX)

- **Hardware Performance**: HPM Toolkit (Catch, hpmcount, libhpm, hpmstat)
- **Shared Memory Performance**: DPOMP (pomprof)
- **Message-Passing Performance**: MP_Profiler, MP_Tracer, TurboSHMEM, TurboMPI
- **Memory Performance**: Sigma
- **Performance Visualization**: PeekPerf
- **I/O Performance**: MIO (modular I/O)
- **Math Performance**: WSMP (Watson sparse matrix)

Unified GUI with Source Code Traceback: PeekPerf integrates HPM, MP_Profiler/MP_Tracer, PompProf, SiGMA, and MIO (work in progress).

Simultaneous Visualization (HPM + MPI + OpenMP + MIO)

```
Label        | Count | ExcSec | IncSec
-------------|-------|--------|--------
BRNCHX       | 1     | 62.664 | 62.66
Games Main   | 1     | 0.037  | 62.71
LMOINF       | 1     | 0      | 0
LMOX         | 1     | 0      | 0
TRUNC        | 1     | 0.013  | 0.013
VA7FM (END)  | 1     | 0      | 0
VA7FM (INIT) | 1     | 0      | 0
```

Instrumented Fortran source with libhpm start/stop calls:

```
      call f_hpmstop(13)
      ENDIF
      IF(RPAC .AND. EXETYP.NE.CHECK) THEN
         call f_hpmstart(14, "RPAC")
         CALL RPAC
         call f_hpmstop(14)
      ENDIF
      IF(FRIEND.NE.BLANK) CALL ZEALX
```

```
      call f_hpmstart(9, "VA7FM (END)")
      CALL VA7FM(LASTFM)
      call f_hpmstop(9)
      IF(LASTFM.NE.INITFM .AND. MASWRK) WRITE(IW,S50)
```

The IBM High Performance Computing Toolkit © 2004 IBM Corporation

HPM Visualization Using PeekPerf

The image shows a graphical user interface for visualizing high-performance computing metrics.
The interface includes tabs and a main window displaying performance data for a particular loop (Loop 300), with entries for source files such as swim_comp.f, calc1.f, calc2.f, and calc3.f, plus a metric browser and an options menu exposing various hardware performance counters.

Message-Passing Performance: MP_Profiler and MP_Tracer Libraries

- **MP_Profiler**
  - Captures “summary” data for MPI and SHMEM calls with source code traceback
  - No changes to source code, but MUST compile with `-g`
  - ~1.7 microsecond overhead per MPI / SHMEM call
  - Libraries: mpiprof, smaprof, turbo1prof, turbo2prof
- **MP_Tracer**
  - Captures “timestamped” data for MPI and SHMEM calls with source traceback
  - ~1.4 microsecond overhead per MPI / SHMEM call
  - Libraries: mpitrace, smatrace, turbo1trace, turbo2trace

### MP_Profiler Summary Output

<table> <thead> <tr> <th>MPI Routine</th> <th>#calls</th> <th>avg. bytes</th> <th>time(sec)</th> </tr> </thead> <tbody> <tr> <td>MPI_Comm_size</td> <td>3</td> <td>0.0</td> <td>0.000</td> </tr> <tr> <td>MPI_Comm_rank</td> <td>12994</td> <td>0.0</td> <td>0.016</td> </tr> <tr> <td>MPI_Send</td> <td>19575</td> <td>11166.9</td> <td>13.490</td> </tr> <tr> <td>MPI_Isend</td> <td>910791</td> <td>5804.2</td> <td>9.216</td> </tr> <tr> <td>MPI_Recv</td> <td>138173</td> <td>2767.9</td> <td>73.835</td> </tr> <tr> <td>MPI_Irecv</td> <td>784936</td> <td>15891.6</td> <td>2.407</td> </tr> <tr> <td>MPI_Sendrecv</td> <td>894809</td> <td>352.0</td> <td>88.705</td> </tr> <tr> <td>MPI_Wait</td> <td>1537375</td> <td>0.0</td> <td>288.049</td> </tr> <tr> <td>MPI_Waitall</td> <td>44042</td> <td>0.0</td> <td>25.312</td> </tr> <tr> <td>MPI_Bcast</td> <td>464</td> <td>41936.8</td> <td>3.272</td> </tr> <tr> <td>MPI_BARRIER</td> <td>1312</td> <td>0.0</td> <td>34.206</td> </tr> <tr> <td>MPI_GATHER</td> <td>68</td> <td>16399.1</td> <td>2.680</td> </tr> <tr> <td>MPI_SCATTER</td> <td>6</td> <td>17237.3</td> <td>0.532</td>
</tr> </tbody> </table>

total communication time = 770.424 seconds.
total elapsed time = 1168.662 seconds.
user cpu time = 1160.960 seconds.
system time = 0.620 seconds.
maximum memory size = 68364 KBytes.

To check load balance: grep "total comm" mpi_profile.*

MP_Profiler Visualization Using PeekPerf

The view shows call counts for individual MPI call sites; for example, `MPI_Allreduce_807` has a call count of 16, and `MPI_Bcast_924` has a call count of 285. The associated source pane shows the instrumented Fortran code:

```
C     PROCESSES ONLY, NOT THE TOTAL NUMBER OF PROCESSES.
C
      IMPLICIT NONE
      INTEGER DDI_NP, DDI_ME
C
      INCLUDE 'mpif.h'
```

SHMEM Profiling Capability

<table> <thead> <tr> <th>Label</th> <th>Call Count</th> </tr> </thead> <tbody> <tr> <td>my_pe_945</td> <td>2</td> </tr> <tr> <td>num_pes_944</td> <td>2</td> </tr> <tr> <td>-shmem_barrier_all_788</td> <td>431</td> </tr> <tr> <td>-shmem_barrier_all_799</td> <td>431</td> </tr> <tr> <td>-shmem_barrier_all_859</td> <td>186</td> </tr> <tr> <td>-shmem_barrier_all_871</td> <td>186</td> </tr> <tr> <td>-shmem_barrier_all_915</td> <td>114</td> </tr> <tr> <td>-shmem_barrier_all_927</td> <td>114</td> </tr> <tr> <td>shmem_broadcast8_796</td> <td>273</td> </tr> <tr> <td>shmem_int8_sum_to_all_923</td> <td>114</td> </tr> </tbody> </table>

```
      WPR_START = PE_START
      WLOG_STRIDE = LOG_STRIDE
      WPR_SIZE = PR_SIZE
      CALL SHMEM_BROADCAST8 (TARGET, SOURCE, WLENGTH)
```

**Metric Browser: shmem_broadcast8_796**

<table> <thead> <tr> <th>Task</th> <th>Message Size</th> <th>Call Count [Max]</th> <th>WallClock [Max]</th> <th>Transferred Bytes</th> <th>Count</th> <th>WallClock</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>(2) 5 ... 16</td> <td>49</td> <td>0.040212</td> <td>392</td> <td>49</td> <td>0.040212</td> </tr> <tr> <td>0</td> <td>(3) 17 ... 
64</td> <td>7</td> <td>0.002289</td> <td>168</td> <td>7</td> <td>0.002289</td> </tr> <tr> <td>0</td> <td>(4) 65 ... 256</td> <td>48</td> <td>0.015485</td> <td>3192</td> <td>48</td> <td>0.015485</td> </tr> <tr> <td>0</td> <td>(5) 257 ... 1K</td> <td>273</td> <td>0.105899</td> <td>76440</td> <td>273</td> <td>0.105899</td> </tr> <tr> <td>0</td> <td>(6) 1K ... 4K</td> <td>29</td> <td>0.019601</td> <td>20888</td> <td>29</td> <td>0.019601</td> </tr> <tr> <td>0</td> <td>(7) 4K ... 16K</td> <td>25</td> <td>0.008123</td> <td>162816</td> <td>25</td> <td>0.008123</td> </tr> <tr> <td>1</td> <td>(2) 5 ... 16</td> <td>49</td> <td>0.020304</td> <td>392</td> <td>49</td> <td>0.020304</td> </tr> <tr> <td>1</td> <td>(3) 17 ... 64</td> <td>7</td> <td>0.003646</td> <td>168</td> <td>7</td> <td>0.003646</td> </tr> <tr> <td>1</td> <td>(4) 65 ... 256</td> <td>48</td> <td>0.029487</td> <td>3192</td> <td>48</td> <td>0.029487</td> </tr> <tr> <td>1</td> <td>(5) 257 ... 1K</td> <td>273</td> <td>0.142042</td> <td>76440</td> <td>273</td> <td>0.142042</td> </tr> <tr> <td>1</td> <td>(6) 1K ... 4K</td> <td>29</td> <td>0.184179</td> <td>20888</td> <td>29</td> <td>0.184179</td> </tr> <tr> <td>1</td> <td>(7) 4K ... 
16K</td> <td>25</td> <td>0.015682</td> <td>162816</td> <td>25</td> <td>0.015682</td> </tr> </tbody> </table>

MP_Tracer Visualization Using PeekPerf

Identifiers shown in the trace view:
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Bcast
- MPI_Barrier
- MPI_Recv
- MPI_Send
- MPI_Allreduce

Performance of OpenMP: POMP

- Portable cross-platform/cross-language API to simplify the design and implementation of OpenMP tools
- POMP was motivated by the MPI profiling interface (PMPI):
  - PMPI allows selective replacement of MPI routines at link time
  - Used by most MPI performance tools (including MP_Profiler)

POMP Proposal

Three groups of events:
- **OpenMP constructs and directives/pragmas**
  - Enter/Exit around each OpenMP construct
  - Begin/End around associated body
  - Special case for parallel loops: ChunkBegin/End, IterBegin/End, or IterEvent instead of Begin/End
  - “Single” events for small constructs like atomic or flush
- **OpenMP API calls**
  - Enter/Exit events around omp_set_*_lock() functions
  - “Single” events for all API functions
- **User functions and regions**
  - Allows application programmers to specify and control the amount of instrumentation

POMP Profiler (PompProf)

- Generates a detailed profile describing overheads and time spent by each thread in three key regions of the parallel application:
  - Parallel regions
  - OpenMP loops inside a parallel region
  - User defined functions
- Profile data is presented in the form of an XML file that can be visualized with PeekPerf

# PompProf Visualization Using PeekPerf

### Metrics

<table> <thead> <tr> <th>Task</th> <th>Thread ID</th> <th>Master Time</th> <th>Thread Time</th> <th>Computation Time</th> <th>Idle Time</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>0.109860</td> <td>0.013279</td> <td>0.003797</td> <td></td> </tr> <tr> <td>0</td> <td>1</td> <td>0.063225</td> <td>0.012801</td> <td>0.015644</td> <td></td> </tr> <tr> <td>0</td> <td>2</td> <td>0.08116</td> <td>0.023186</td> <td>0.028004</td> 
<td></td> </tr> <tr> <td>0</td> <td>3</td> <td>0.023728</td> <td>0.013003</td> <td>0.039373</td> <td></td> </tr> <tr> <td>1</td> <td>0</td> <td>0.054689</td> <td>0.013006</td> <td>0.031863</td> <td></td> </tr> <tr> <td>1</td> <td>1</td> <td>0.023176</td> <td>0.013514</td> <td>0.015503</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>0.069733</td> <td>0.012593</td> <td>0.03670</td> <td></td> </tr> <tr> <td>1</td> <td>3</td> <td>0.07246</td> <td>0.013033</td> <td>0.039173</td> <td></td> </tr> </tbody> </table>

### Code Snippet

(Instrumented Fortran loop from the screenshot; the code is not legible in the source.)

TurboMP Libraries (latest version: 3.0.2)

- **TurboMPI-1 Libraries**
  - Collective communications enhanced for shared memory: AllReduce, Alltoall, Alltoallv, Bcast, Barrier, Reduce
  - No LAPI dependency (to accommodate pre-Federation switched systems)
  - 32-bit and 64-bit support; Fortran/C/C++
  - Syntax: -lturbo1
- **TurboMPI-2 Libraries**
  - All of MPI-1 above, plus MPI-2 RMA operations: Put, Get, Accumulate, Fence, Lock/Unlock, Start/Post/Wait/Complete, Test
  - Syntax: -lturbo2
- **TurboSHMEM Libraries**
  - Complete implementation (400+ routines)
  - 32-bit and 64-bit support; Fortran/C/C++
  - Syntax: -lsma (requires shmem_init() / shmem_finalize() in source)

Turbo MPI_Put Comparison Synchronization (SMP) (Power 3 Data On-Node) Bandwidth Vs. 
Message-Size <table> <thead> <tr> <th>Bytes</th> <th>Turbo Fence</th> <th>Turbo Lock Exclusive</th> <th>Turbo Lock Shared</th> <th>Turbo Start…Wait</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>0.224</td> <td>0.114</td> <td>0.271</td> <td>0.404</td> </tr> <tr> <td>8</td> <td>0.439</td> <td>0.227</td> <td>0.547</td> <td>0.768</td> </tr> <tr> <td>16</td> <td>0.882</td> <td>0.453</td> <td>1.097</td> <td>1.534</td> </tr> <tr> <td>32</td> <td>1.74</td> <td>0.904</td> <td>2.187</td> <td>3.242</td> </tr> <tr> <td>64</td> <td>3.652</td> <td>1.81</td> <td>4.357</td> <td>6.084</td> </tr> <tr> <td>128</td> <td>7.064</td> <td>3.615</td> <td>8.667</td> <td>12.44</td> </tr> <tr> <td>256</td> <td>14.33</td> <td>7.226</td> <td>17.26</td> <td>24.21</td> </tr> <tr> <td>512</td> <td>28.77</td> <td>14.39</td> <td>34</td> <td>45</td> </tr> <tr> <td>1K</td> <td>51.9</td> <td>28.39</td> <td>66.2</td> <td>90.14</td> </tr> </tbody> </table> Turbo MPI_Put Comparison Synchronization (SMP) (Power 3 Data On-Node) Bandwidth Vs. 
Message-Size MB / sec <table> <thead> <tr> <th>Bytes</th> <th>1K</th> <th>2K</th> <th>4K</th> <th>8K</th> <th>16K</th> <th>32K</th> <th>64K</th> <th>128 K</th> <th>256 K</th> <th>512 K</th> <th>1M</th> <th>2M</th> </tr> </thead> <tbody> <tr> <td>Turbo Fence</td> <td>51.9</td> <td>89.2</td> <td>171</td> <td>263</td> <td>232</td> <td>432</td> <td>727</td> <td>1086</td> <td>1424</td> <td>1361</td> <td>1377</td> <td>1183</td> </tr> <tr> <td>Turbo Lock Exclusive</td> <td>28.4</td> <td>56.1</td> <td>108</td> <td>201</td> <td>212</td> <td>406</td> <td>747</td> <td>1261</td> <td>1918</td> <td>2059</td> <td>1308</td> <td>1210</td> </tr> <tr> <td>Turbo Lock Shared</td> <td>66.2</td> <td>128</td> <td>234</td> <td>397</td> <td>294</td> <td>553</td> <td>988</td> <td>1623</td> <td>2292</td> <td>2352</td> <td>1350</td> <td>1230</td> </tr> <tr> <td>Turbo Start…Wait</td> <td>90.1</td> <td>186</td> <td>336</td> <td>540</td> <td>285</td> <td>507</td> <td>858</td> <td>1238</td> <td>1586</td> <td>1564</td> <td>1217</td> <td>1277</td> </tr> </tbody> </table> ### ACTC Turbo MPI_Put Latency Comparison (Power 3 Data On-Node) <table> <thead> <tr> <th></th> <th>IBM MPI_Put (usec)</th> <th>Turbo MPI_Put (usec)</th> </tr> </thead> <tbody> <tr> <td>Fence</td> <td>62</td> <td>18</td> </tr> <tr> <td>Lock - Exclusive</td> <td>179</td> <td>35</td> </tr> <tr> <td>Lock Shared</td> <td>174</td> <td>15</td> </tr> <tr> <td>Start/Post/Wait/Complete</td> <td>99</td> <td>21</td> </tr> <tr> <td>TurboSHMEM Put</td> <td>N/A</td> <td>5</td> </tr> <tr> <td>ISend/IRecv</td> <td>11</td> <td>N/A</td> </tr> </tbody> </table> Modular I/O Performance Tool (MIO) - **I/O Analysis** - Trace module - Summary of File I/O Activity + Binary Events File - Low CPU overhead - **I/O Performance Enhancement Library** - Prefetch module (optimizes asynchronous prefetch and write-behind) - System Buffer Bypass capability - User controlled pages (size and number) - **Recoverable Error Handling** - Recover module (monitors return 
values and errors, and reissues failed requests)
- **Remote Data Server**
  - Remote module (simple socket protocol for moving data)
- **Shared object library for AIX**

# MIO User Code Interface

```c
#define open64(a,b,c) MIO_open64(a,b,c,0)
#define read          MIO_read
#define write         MIO_write
#define close         MIO_close
#define lseek64       MIO_lseek64
#define fcntl         MIO_fcntl
#define ftruncate64   MIO_ftruncate64
#define fstat64       MIO_fstat64
```

# MIO Trace Module (sample partial output)

```text
close : program <-> pf : /bmwfs/cdh108.T20536_13.SCR300 : (281946/2162.61)=130.37 mbytes/s
current size=0  max_size=16277  mode=0777  sector size=4096
oflags=0x302=RDWR CREAT TRUNC
open        1     0.01
write  478193   462.10   59774   59774  131072  131072
read  1777376  1700.48  222172  222172  131072  131072
seek   911572     2.83
fcntl       3     0.00
trunc      16     0.40
close       1     0.03
size   127787
```

# Performance Visualization (work in progress)

(Figure: JFS performance under `vmtune -p20 -P80 -f120 -F128 -r2 -R8`; file position in bytes versus time in seconds for reads and writes, with an observed slope of roughly 115.6e+6 bytes/s.)

MSC.Nastran V2001 benchmark: SOL 111, 1.7M DOF, 1578 modes, 146 frequencies, residual flexibility and acoustics; 120 GB of disk space. Machine: 4-way 1.3 GHz p655, 32 GB with 16 GB large pages, JFS striped on 16 SCSI disks. MSC.Nastran V2001.0.9 with large pages, dmp=2 parallel=2 mem=700mb; the run with MIO used mio=1000mb. 6.8 TB of I/O in 26666 seconds is an average of about 250 MB/sec.

# ABAQUS Standard v6.3-4

- Engine models, parallel direct solver, 16 POWER4 processors
- Elapsed time (seconds) compared with and without MIO for:
  - 5M DOF, 36 GB fct file
  - 11.5M DOF, 80 GB fct file

# How to Obtain the IBM HPC Toolkit?

- **Current possibilities:**
  1. Acquire as part of a new procurement.
  2. Acquire as part of an ACTC Performance Tuning Workshop.
  3. Purchase a license directly from IBM Research.
- **Future possibility:** purchase from a third-party vendor (e.g., Absoft)

Why a Fee?
- In previous years, the cost of the ACTC tools was factored into the pricing of pSeries systems. Today, Research software is no longer accounted for in system pricing, so these costs must be recovered directly through licensing. As a result, the cost of these tools is now exposed directly rather than hidden in system pricing.

# IBM HPC Toolkit Software Maintenance Cycle

- Major releases at SciComp meetings (twice a year)
- Minor releases (bug fixes) as needed
- "New Technology" (e.g., DPOMP) is placed on the AlphaWorks server:
  - http://alphaworks.ibm.com
  - 6-month lifetime
  - 90-day evaluation licenses
# Quality Estimation for Machine Translation: different users, different needs

Lucia Specia
L.Specia@wlv.ac.uk
http://pers-www.wlv.ac.uk/~in1316/
JEC Workshop, October 14th 2011

(Alternate title slide: "Quality Estimation for Machine Translation: different translators, same needs")

# Why are you not (yet) using MT?

- Why do you use translation memories?
- Perfect translations?

# Outline

- Quality Estimation (QE) for Machine Translation (MT)
- Applications
- General approach
- What aspect of quality we want to estimate and how to represent it
- How we assess quality estimation systems

# Goal

Given the output of an MT system for a given input, provide an estimate of its quality.

Motivations: manually assessing the quality of translations is
- time consuming, tedious, and often not worth it
- not always possible

An example MT output (English→French):

> Une interdiction gouvernementale sur la non-UE conjoints étrangers de moins de 21 à venir au Royaume-Uni, qui a été introduit par le Labour en 2008 et vise un partenaire étranger de l'extérieur de l'UE ne pouvait pas se joindre à leurs partenaires au Royaume-Uni si elles étaient moins de 21 ans, est illégale, disent les juges haut.

# Main applications

- Is it worth providing this translation to a professional translator for post-editing?
- Should this translation be highlighted as "not reliable" to a reader?
- Given multiple translation options for a given input, can we select the best one?
- Is this sentence good enough for publishing as is?

Different from MT evaluation (BLEU, NIST, etc.):
- MT system in use, translating unseen text
- Translation unit: the sentence → not about average quality
- Independent from the MT system (post-MT)

# General approach

1. Decide **which aspect of quality** to estimate
2. Decide **how to represent** this aspect of quality
3. Collect **examples** of translations with different levels of quality
4. Identify and extract **indicators** that represent this quality
5. Apply an algorithm to induce a **model** to predict quality scores for new translations
6. Evaluate this model on new translations

Here, the aspect of quality chosen in step 1 is "**post-editing effort**".

# How is quality defined?

1. **Good vs. bad translations**: good for what? (Blatz et al., 2003)
2. **MT1 vs. MT2**: is MT1 better than MT2? Yes, but is MT1 good enough? (Blatz et al., 2003; He et al., 2010)
3. **Perfect vs. not perfect translations**: can we publish this translation as is? (Soricut and Echihabi, 2010)
4. Which translations are **good enough** for post-editing? Define "quality" in terms of post-editing effort.

What levels of quality can we expect from an MT system?
1. **Perfect**: no post-editing needed at all
2. **Good**: some post-editing needed, but faster/easier than translating from scratch
3. **Bad**: too much post-editing needed; faster/easier to translate from scratch

We expect the machine to estimate this well, but can humans do it well? Examples:

The court said that the rule was unjustified.
→ La cour a déclaré que la règle était injustifiée.

"I basically felt like I'd been exiled from my country and in forcing him to leave they'd also forced me to leave," she said.
→ "J'ai essentiellement ressenti si j'avais été exilé de mon pays et dans le forçant à quitter leur avais m'a aussi forcé de partir", dit-elle.

> Tomorrow, and tomorrow, and tomorrow,
> Creeps in this petty pace from day to day,
> To the last syllable of recorded time;
> And all our yesterdays have lighted fools
> The way to dusty death. Out, out, brief candle! ...

# How do humans perform?

Humans are good at identifying perfect translations, as well as terribly bad ones. But medium-quality translations are more difficult: "good enough" depends on the translator.
- Very experienced translators may prefer only close-to-perfect translations
- Less experienced translators may benefit from medium-quality translations as well

# How do QE systems perform?

- **Humans**: agreement on **en-es Europarl**: 85% (professional translators, 2 annotators)
- **Humans**: agreement on **en-pt subtitles** of TV series: 850 sentences (non-professionals, 3 annotators)
  - 351 cases (41%) have **full** agreement
  - 445 cases (52%) have **partial** agreement
  - 54 cases (7%) have **null** agreement
- Agreement by score:

<table> <thead> <tr> <th>Score</th> <th>Full</th> <th>Partial</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>59%</td> <td>41%</td> </tr> <tr> <td>3</td> <td>35%</td> <td>65%</td> </tr> <tr> <td>2</td> <td>23%</td> <td>77%</td> </tr> <tr> <td>1</td> <td>50%</td> <td>50%</td> </tr> </tbody> </table>

Simplify the task if we know how experienced the translator is: a binary problem, **good** vs. **bad**:

<table> <thead> <tr> <th>Languages</th> <th>MT system</th> <th>Accuracy</th> <th>Most frequent score</th> <th>Sentence length</th> </tr> </thead> <tbody> <tr> <td>en-es</td> <td>MT1</td> <td>70%</td> <td>52%</td> <td>36%</td> </tr> <tr> <td>en-es</td> <td>MT2</td> <td>77%</td> <td>74%</td> <td>21%</td> </tr> <tr> <td>en-es</td> <td>MT3</td> <td>66%</td> <td>57%</td> <td>30%</td> </tr> <tr> <td>en-es</td> <td>MT4</td> <td>94%</td> <td>94%</td> <td>70%</td> </tr> </tbody> </table>

- Evaluation in terms of classification accuracy is clear:
  - Upper bound = 100%
  - 50% = we are selecting 50% of the bad cases as good / of the good cases as bad
  - Is ~70% accuracy enough?
- A different perspective: **precision/recall** by category:
  - How many bad translations the system says are good (**false rate**)
  - How many good translations the system says are bad (**miss rate**)

# How do QE systems perform?
- Selecting only good translations ([3-4], en-es): number of good translations in the top $n$ translations.

(Figure: number of good translations among the top $n$ translations, comparing rankings by Human judgement, CE, Aborted N, and Length.)

# Are 2/4 discrete scores enough?

- We want to estimate: 1, 2 or 1, 2, 3, 4
- It's like saying you can get, from a TM:
  - only a 0% match or a 100% match, or
  - the following (fuzzy) match levels: 0%, 50%, 75%, 100%
- Isn't there anything in between? Estimate a continuum: a real number in [1, 4].

# Estimating a continuous score

- **English-Spanish** Europarl data
- 4 SMT systems, 4 sets of 4,000 translations
- **Quality score**: 1-4

<table> <thead> <tr> <th>Languages</th> <th>MT System</th> <th>Error</th> </tr> </thead> <tbody> <tr> <td>en-es</td> <td>MT1</td> <td>0.653</td> </tr> <tr> <td>en-es</td> <td>MT2</td> <td>0.718</td> </tr> <tr> <td>en-es</td> <td>MT3</td> <td>0.706</td> </tr> <tr> <td>en-es</td> <td>MT4</td> <td>0.603</td> </tr> </tbody> </table>

# Is a number in $[1, 4]$ informative?

Can we see this number as a fuzzy match level? Not really: how much work is there to do on a "3.2" translation?

Try more objective ways of representing quality. **Edit distance (HTER)**: the distance, in $[0, 1]$, between the original MT output and its post-edited version, i.e. the proportion of edits (words) that will have to be performed to correct the translation:

\[ \text{HTER} = \frac{\#\text{ edits}}{\#\text{ words in post-edited version}} \]

**Time**: how many seconds will it take to post-edit this sentence? Time varies considerably from annotator to annotator, but this annotation is **cheap and easy** to obtain if translators already post-edit MT.
# Other ways of representing quality

- **English-Spanish, French-English** news articles
- 1,500-2,500 translations
- Quality scores:
  - Score1 = HTER
  - Score2 = [1-4]
  - Score3 = time
- An annotation tool was used to collect the data from translators

Results, with each model trained on examples from a single translator:

<table> <thead> <tr> <th>Dataset</th> <th>Error</th> </tr> </thead> <tbody> <tr> <td>fr-en</td> <td></td> </tr> <tr> <td>Distance</td> <td>0.16</td> </tr> <tr> <td>[1-4]</td> <td>0.66</td> </tr> <tr> <td>Time</td> <td>0.65</td> </tr> <tr> <td>en-es</td> <td></td> </tr> <tr> <td>Distance</td> <td>0.18</td> </tr> <tr> <td>[1-4]</td> <td>0.55</td> </tr> <tr> <td>Time</td> <td>1.97</td> </tr> </tbody> </table>

So we are almost happy:
- We can estimate an aspect of quality that is clear and objective (time, distance)
- But do these error metrics say anything about how good the QE model is, or which model is better?

# Evaluation by ranking

Rank translations by their QE scores (best first). Based on the quality of the MT system on a small development set, find the percentage of "good enough" translations, using any annotation scheme. E.g.
30% of the translations are good. Then measure the improvement of the top 30% selected according to QE scores, compared against the average quality of the full dataset:

<table> <thead> <tr> <th>Languages</th> <th>Delta [1-4]</th> <th>Delta Distance [0, 1]</th> <th>Delta Time (sec/word)</th> </tr> </thead> <tbody> <tr> <td>fr-en (70% good)</td> <td>0.07</td> <td>-0.02</td> <td>-0.11</td> </tr> <tr> <td>en-es (40% good)</td> <td>0.20</td> <td>-0.06</td> <td>-0.19</td> </tr> </tbody> </table>

The same comparison over the top 25%, 50% and 75%:

<table> <thead> <tr> <th>Languages</th> <th>Delta [1-4]</th> <th>Delta Distance [0, 1]</th> <th>Delta Time (sec/word)</th> </tr> </thead> <tbody> <tr> <td>fr-en</td> <td>0.16</td> <td>-0.04</td> <td>-0.20</td> </tr> <tr> <td>en-es</td> <td>0.15</td> <td>-0.04</td> <td>-0.26</td> </tr> </tbody> </table>

# Extrinsic evaluation by ranking

- Measure the **post-editing time** needed to correct the top 30% of translations selected according to QE scores.
- Compare it against the post-editing time of a randomly selected 30% of translations.
- If we can't decide on a percentage, measure the **number of words that can be post-edited in a fixed amount of time**, going from best to worst translations as ranked by the QE model.
- Compare it against the number of words post-edited in the same time on randomly ordered translations.

Evaluation setup: Model 1 (HTER), Model 2 ([1-4] scores) and Model 3 (time) each sort one set of 600 translations out of 2.4K new translations (plus one unsorted set), and the number of words post-edited in each set is counted.
# Extrinsic evaluation by ranking

- **Post-editing in 1 hour:**

<table> <thead> <tr> <th>MT System / Dataset</th> <th>Words/second</th> </tr> </thead> <tbody> <tr> <td>S6 fr-en HTER (0-1)</td> <td>0.96</td> </tr> <tr> <td>S6 fr-en [1-4]</td> <td>0.91</td> </tr> <tr> <td>S6 fr-en time (sec/word)</td> <td>1.09</td> </tr> <tr> <td>S7 en-es HTER (0-1)</td> <td>0.41</td> </tr> <tr> <td>S7 en-es [1-4]</td> <td>0.43</td> </tr> <tr> <td>S7 en-es time (sec/word)</td> <td>0.57</td> </tr> <tr> <td>S7 en-es no CE</td> <td>0.32</td> </tr> </tbody> </table>

Summing up:
- The aspect of quality we estimate is clear (time, distance)
- Ranking-based (especially extrinsic) evaluation says something about how good a model is

# How about other users?

Post-editing time, distance and [1-4] scores have a good (Pearson) correlation:
- **Distance** and [1-4] = 0.75 - 0.82
- **Time** and [1-4] = 0.50 - 0.60

(the smaller values occur when scores are given by different translators)

If we instead correlate post-editing time/distance with [1-4] scores reflecting adequacy (not post-editing effort), the correlations are lower:
- **Distance** and [1-4] adequacy = 0.55
- **Time** and [1-4] adequacy = 0.40

# Is this enough?

- Is an accurate QE system at the sentence level enough?
- QE should also indicate, for sentences that are not perfect, what the bad parts are.
- Sub-sentence-level QE (error detection in translations) has been explored, e.g. using link grammar features, mostly at the word level (Xiong et al., 2010).

# Conclusions

- It is possible to estimate the quality of MT output with respect to post-editing needs.
- Measuring and estimating post-editing time seems to be the best way to build and evaluate QE systems.
- It is a **translator-dependent** measure: build a model per translator, or project the time differences.
- **Extrinsic** evaluation using time is expensive and not feasible for comparing many QE systems.
- Alternative: intrinsic ranking-based measures on a pre-annotated test set: $\Delta X$, where $X$ = [1-4], Time, BLEU, etc.
# Conclusions (cont.)

- QE is a relatively new area.
- It has great potential to make MT more useful to end-users:
  - Translation: minimize post-editing time, allow for fair pricing models
  - Localization: keep the "brand" of the product/company
  - Gisting: avoid misunderstandings
  - Dissemination of large amounts of content, e.g. user reviews

# Advertisement

- **Shared task on QE**, most likely with WMT at NAACL, June 2012
- **Sentence-level**: classification, regression and ranking
- We will provide:
  - Training sets annotated for quality
  - Baseline feature sets
  - Baseline systems to extract features
  - Test sets annotated for quality

Questions? Lucia Specia, L.Specia@wlv.ac.uk

Regression + Confidence Machines can be used to define the splitting point according to the expected confidence level.

QE score x MT metrics: Pearson's correlation across datasets produced by different MT systems:

<table> <thead> <tr> <th>Test set</th> <th>Training set</th> <th>Pearson QE and human</th> </tr> </thead> <tbody> <tr> <td>S3 en-es</td> <td>S1 en-es</td> <td>0.478</td> </tr> <tr> <td></td> <td>S2 en-es</td> <td>0.517</td> </tr> <tr> <td></td> <td>S3 en-es</td> <td><strong>0.542</strong></td> </tr> <tr> <td></td> <td>S4 en-es</td> <td>0.423</td> </tr> <tr> <td>S2 en-es</td> <td>S1 en-es</td> <td>0.531</td> </tr> <tr> <td></td> <td><strong>S2 en-es</strong></td> <td><strong>0.562</strong></td> </tr> <tr> <td></td> <td>S3 en-es</td> <td>0.547</td> </tr> <tr> <td></td> <td>S4 en-es</td> <td>0.442</td> </tr> </tbody> </table>

(Journal of MT 2010)

# Features

Indicators can be extracted from several sources:
- The source text: complexity indicators
- The translation itself: fluency indicators
- The MT system: confidence indicators
- Source and translation together: adequacy indicators

They can be shallow vs. linguistically motivated, and MT-system dependent vs. independent.

# Source features

- Source sentence length
- Language model score of the source
- Average number of possible translations per source word
- % of n-grams belonging to different frequency quartiles of the source side of the parallel corpus
- Average source word length
- ...

# Target features

- Target sentence length
- Language model score of the target
- Proportion of untranslated words
- Grammar checking
- Mismatching opening/closing brackets, quotation symbols
- Coherence of the target sentence
- ...

# MT features (confidence)

- SMT model global score and internal features
  - Distortion count, phrase probability, ...
- % of search nodes aborted, pruned, recombined ...
- Language model using the n-best list as corpus
- Distance to the centre hypothesis in the n-best list
- Relative frequency of the words in the translation in the n-best list
- Ratio of the SMT model score of the top translation to the sum of the scores of all hypotheses in the n-best list
- ...

# Source-target features

- Ratio between source and target sentence lengths
- Punctuation checking (target vs. source)
- Correct translation of pronouns
- Matching of phrase/POS tags
- Matching of dependency relations
- Matching of named entities
- Matching of semantic role labels
- Alignment of these and other linguistic markers
- ...

# MT system selection

Approach:
- **Train** QE models for each MT system (individually)
- Use all MT systems to **translate** each input segment
- **Estimate** the QE score for each alternative translation
- **Select** the translation with the highest CE score

Experiments:
- **En-Es Europarl [1-4] datasets**, 4 MT systems

Results: average human scores in the top N translations, selecting only good translations ([3-4], en-es):

![Average scores x TOP N graph](image)
General guidelines for biomedical software development [version 1; peer review: 2 approved]

Luis Bastiao Silva, Rafael C. Jiménez, Niklas Blomberg, José Luis Oliveira

Affiliations:
1. BMD Software, Aveiro, Portugal
2. ELIXIR Hub, Wellcome Trust Genome Campus, Hinxton, UK
3. Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, Aveiro, Portugal

Abstract

Most bioinformatics tools available today were not written by professional software developers, but by people who wanted to solve their own problems using computational solutions, spending the minimum time and effort possible, since these were just the means to an end. Consequently, a vast number of software applications are currently available, hindering the task of identifying the utility and quality of each. At the same time, this situation has hindered regular adoption of these tools in clinical practice: typically, they are not sufficiently developed to be used by most clinical researchers and practitioners. To address these issues, it is necessary to re-think how biomedical applications are built and to adopt new strategies that ensure quality, efficiency, robustness, correctness and reusability of software components. We also need to engage end-users during the development process to ensure that applications fit their needs. In this review, we present a set of guidelines to support biomedical software development, with an explanation of how they can be implemented and what kind of open-source tools can be used for each specific topic.

Keywords: biomedical software, guidelines, software development, bioinformatics, Agile

This article is included in the International Society for Computational Biology Community Journal gateway.

This article is included in the Bioinformatics Education and Training Collection.
Introduction

As an increasing number of scientific results are being generated from omics studies, new translational medicine applications and bioinformatics tools are needed to promote the flow of these results into clinical practice, i.e. the knowledge needs to be translated from the bench to the bedside, to foster the development of new biotechnological products and improve patients’ health. Biomedical informatics intends to support the integration and transfer of knowledge across all major subject areas of translational medicine – from the study of individual molecules to the study of whole populations. Translational medicine brings together many areas of informatics, including bioinformatics, imaging informatics, clinical informatics and public health informatics. Bioinformaticians, translational researchers and computational biologists identify the molecular and cellular components that can be targeted for specific clinical interventions and treatments for specific diseases. Imaging informatics also plays a significant role in understanding pathogenesis and identifying treatments at the molecular, cellular, tissue and organ level. Richer methods to visualize and analyse imaging data are already being investigated and developed. Other techniques, such as text and data mining, have been applied to clinical reports. Additionally, translational research teams need to focus on decision support, natural language processing (NLP), standards, information retrieval and electronic health records. The biomedical informatics landscape is pushing for the development of more professional and easy-to-use software applications, in order to address the pressing need to translate research outcomes into clinical practice. To accomplish this, solid software engineering approaches must be adopted. Despite being a relatively young discipline, biomedical informatics has evolved at an impressive rate, constantly creating new software solutions and tools.
However, due to their multidisciplinary nature, it is often difficult for individual studies to gather solid knowledge in their various fields. This problem has been flagged by several authors, who have proposed general competences that undergraduate students should acquire. These competences can be obtained by introducing complementary courses, such as software programming, into existing curricula, or by creating new academic degree courses. While these strategies have produced many new and successful graduates, the right balance between strong expertise in a single topic and medium expertise in many topics is not always easy to find. Many researchers without training in software engineering have found themselves faced with the intricate task of building their own software solutions. Moreover, researchers and clinicians typically perceive software development as an auxiliary task to serve science, rather than a central goal. The result is sometimes code that is difficult and costly to maintain and re-use. This software dependency is indeed a problem across all of science, where concerns about the reproducibility of research have raised the need for robust, open access and open source software. The development of software projects requires effective collaboration between users and software developers, and also among the users themselves. Another common drawback of current bioinformatics applications is the lack of user-friendly interfaces, making them difficult to use and navigate. User-centered design has been proposed as a way to minimize this problem. The development of open source solutions has promoted software quality in the field, since it encourages public review, reuse, correction and continuous extension. Most bioinformatics software is written by researchers who use it for their own individual purposes, a process long identified as end-user programming.
However, these “new” programmers face many software engineering challenges, such as making decisions about design, reuse, integration, testing and debugging. Several authors have tried to introduce software engineering approaches into bioinformatics programming to address this problem. Hastings et al. compiled several recommendations that should be used to ensure the usability and sustainability of research software. Most of these suggestions are fundamental programming principles, e.g. keep it simple, avoid repetition, avoid spaghetti code. By examining a group of software projects, Rother et al. also identified a set of techniques that facilitate the introduction of software engineering approaches in academic projects. This work, which came from the authors’ own experience in conducting software projects, provided readers with a toolbox consisting of several steps, starting with traditional ones such as user stories and CRC cards. In a more specific study, Kamali et al. discussed several software testing methodologies that can be used in bioinformatics, such as simulators, testing in an operational environment and cloud-based software testing. Artaza et al. proposed 10 metrics for life science software development, identified as the most relevant by a group of experts. They include topics such as version control, software discoverability and automatic building. In a similar approach, Wilson et al. described a set of “good enough” principles that should be followed to better organize scientific computing projects, starting at the data gathering phase and continuing up to the writing of the manuscript. In this paper, leveraging the experience of the MedBioinformatics project, we present a set of recommendations for biomedical software development, with an explanation of how they can be implemented and of what kinds of open-source tools can be used for each specific topic.

Why should we care about software development recommendations?
Many research organizations and teams can create biomedical software but, far too often, the results are not sufficiently developed to be used by most clinical researchers and practitioners, because they are incomplete, lack user-friendly interfaces, and software maintenance is not guaranteed after project completion. So, the main question we asked ourselves was how to ensure that the biomedical software development process in research institutes remains reliable and repeatable, without requiring major organizational changes. Developing high-quality biomedical software that meets end-users’ expectations implies following a minimal set of software engineering guidelines. We propose the following:
- Team and project management
- Tracking the development process
- Software integration and interoperability
- Test-Driven Development (TDD) and continuous integration (CI)
- Documentation
- Software distribution
- Licensing

Figure 1 presents a software development process that follows this general set of key steps. The first, team and project management, allows team members to keep track of group tasks and schedules, and to be involved in development decisions. This encourages the involvement of users other than developers, who can point out missing features, give feedback and report bugs, improving communication across the whole team. Tracking the software development process consists of a combination of technologies and practices mostly used for source code management, but applicable to other collaborative tasks such as writing papers, product documentation, web site content, internal guidelines, and many more. Next, we have a cyclical pipeline between software integration and interoperability, which starts with the software specification phase and proceeds to the distribution phase, consisting of development, validation and deployment stages.
The licensing of the software is one step that should be defined as early as possible, because during the development process it is often necessary to include third-party dependency libraries, and the licenses must be compatible. The test-driven development process can be used throughout the entire workflow, so that each unit is tested and the integration of components is validated. Moreover, the documentation of each software module is important, and should be updated during all development phases. Finally, after the software application is distributed, appropriate maintenance and support are needed to ensure that end-users can rely on someone to handle their requests and help solve any problems.

To help the reader navigate through each of the following guidelines, we have divided each one into three sub-sections:
1) A summary that describes what the guideline is intended for;
2) A process description that explains what benefits it provides;
3) Examples of tools and services that help to implement it.

Team and project management
Summary: Team and/or project management tools are essential for many organizations, helping to plan and organize teams, tasks and schedules. Using them during software development allows teams to stay synchronized about task scheduling and milestones, and helps track individual and overall progress, identifying difficulties early on so that the necessary adjustments can be made. There are various software applications available to manage the development process; they typically include a variety of features for planning, scheduling, controlling costs, managing budgets, allocating resources, collaborating, and making decisions.

Process description: Tracking and organizing the development process typically involves the following main features:
- Task management – To prioritize what functionality is developed over the different phases of the project.
It is often provided as a graphical user interface that uses drag-and-drop functionality to facilitate project management, such as Kanban boards – a method to visualize and manage the workflow;
- Code reviewing – This important practice is often used to support teams of multiple developers, although it is also very useful for tracking the progress of a single developer. These tools allow the code to be audited by providing differential views of code changes, normally through web-based interfaces where reviewers/auditors inspect the code independently, from their own machines, as opposed to synchronous review sessions where authors and reviewers meet to discuss changes;
- Source code repositories – A source code repository is a web hosting facility to store and manage source code, which normally supports version control;
- Bug tracking – Keeps track of all defects and problems in the source code, using a predefined nomenclature to describe each issue.

The process typically also includes document repositories, wikis, discussion forums, time tracking, Gantt charts, file storage, calendars and version control. The principles behind team and project management tools have been implemented in several software development methodologies, such as Lean and Agile, and are important aspects of the Scrum methodology, Kanban and extreme programming (XP). Here, team management relies on several types of meetings, such as sprint planning meetings, daily Scrum meetings, sprint review meetings, sprint retrospective meetings and backlog refinement meetings. The Scrum Master is responsible for planning what will be discussed. Developers also need to be prepared to analyse their development process, and to negotiate future plans and potential deadlines. While Agile methodologies can lead to too many meetings, it is highly recommended to meet periodically to coordinate the development process.
Examples: Depending on the financial resources available, free or open source management applications can be adopted, installed locally or used as a service in the cloud. Some examples of management applications are Phabricator, Redmine or JIRA, Github and Bitbucket.

Tracking the development process
Summary: A source control management system (SCM) provides coordination and management services between members of a software development team. It can be implemented in many different ways; at the most basic level, it could be a shared folder in which only the newest versions of files are kept. In software programming, when there are several team members, the concept of branches is very important. Quite often, projects are supported by only a single researcher, but branching is still very valuable in these small projects. To correctly support branches, more complex software is required.

Process description: More recent SCMs allow developers to work simultaneously on the same file, merge their changes with other developers’ changes, and track and audit changes submitted as pull requests. Nowadays, SCMs often include components to assist code review and also to manage software milestones and roadmaps. The development process includes two branches: master and dev. Master is the most stable branch; only bug fixes are allowed on it, and they should always reach master through a pull request. Dev contains the new features and their unstable branches; this is where developers create the next releases of the software. Figure 2 shows an example of the bug-fixing flow, in which a new branch is created from master. The process usually starts with an issue being reported; after a decision has been made, it is assigned to a developer. Before going to production, the fix needs to pass internal tests overseen by an internal testing team.
If the bug has been fixed according to the requests, the case is closed; otherwise a report is sent back to the developer with a new set of issues. New features are developed according to users’ feedback. This is a complex task that often involves re-engineering the applications, and the process may break other features already in place. Thus, new features are implemented in a development branch, passing through several analysis, test and user-feedback stages.

Finally, release management is also performed within the SCM. Generally, it uses an incremental numbering scheme to tag each version. In this way, it is always possible to track older versions and roll back to a previous version, which is mainly required to compare the behaviour of different versions. The following best practices should be applied to software version control:
- Before committing, check for possible changes in the repository;
- When committing a change to the repository, make sure the change reflects a single purpose (e.g. fixing a bug, adding a new feature);
- If possible, try to create change sets linked to the issue tracker, using the issue ID in the commit message;
- After merging, run the unit tests to ensure that the merge was successful;
- After creating a tag, do not commit to it any more; treat the tag as read-only. If it is necessary to resolve an issue in that specific version, create a branch from that tag and commit the changes there;
- Try not to merge a large number of changes between the trunk and the branches; use atomic commits;
- Make at least one commit per day with all the day’s work.

Figure 2. Example of a strategy for SCM workflow based on Git.

Examples: Several version control systems (VCS) can manage code development, such as Git or Mercurial. Github or Bitbucket are examples of ready-to-use SCM services.
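The master/dev branching and release-tagging strategy described above can be sketched in a few Git commands. This is a minimal illustration in a throwaway repository; the branch, issue and author names are made up for the example, and `git init -b` requires Git 2.28 or later.

```shell
# Create a throwaway repository (names below are illustrative).
mkdir scm-demo && cd scm-demo
git init -q -b master
git config user.email "dev@example.org"   # identity is required for committing
git config user.name "Example Developer"
git commit -q --allow-empty -m "Initial commit"

# New features live on the unstable dev branch.
git checkout -q -b dev
git commit -q --allow-empty -m "Add new feature (unstable)"

# A bug is reported against the stable version: branch from master,
# fix it, and merge the fix back (a pull request in hosted SCMs).
git checkout -q master
git checkout -q -b bugfix/issue-42
git commit -q --allow-empty -m "Fix #42: reference the issue ID in the commit"
git checkout -q master
git merge -q --no-ff -m "Merge bugfix/issue-42" bugfix/issue-42

# Tag the release and treat the tag as read-only from now on.
git tag v1.0.1
git log --oneline -n 3
```

Hosted services such as Github or Bitbucket layer pull requests, code review and issue tracking on top of exactly this kind of workflow.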
Software integration and interoperability
Summary: Software integration and interoperability with external systems is a very important requirement in the biomedical domain, due to the reusability of existing repositories, services, algorithms, components and even applications. Designing an application programming interface (API) is crucial in distributed system development, so that the final solution can interconnect and interoperate with other systems.

Process description: A programming interface exposes part of a system’s behaviour, and it is sometimes difficult to implement when different platforms and programming languages are involved. Since creating a new interface for each specific service could be tiresome and error-prone, it is often preferable to take a generic interface and express application-specific semantics on top of it. This is a trade-off between the performance, extensibility and stability of the API. To help specify new semantics and develop systems that comply with such interfaces, Interface Description Languages (IDLs) emerged as formal languages for describing software interfaces, often coupled with facilities for documenting the API and generating consumer and provider code stubs for multiple platforms or programming languages. Two of the most used types of API are SOAP and REST:
- The Simple Object Access Protocol (SOAP) is an Internet protocol for messaging and remote procedure calls, using the Extensible Markup Language (XML) as the base message format and usually (although not necessarily) HTTP as the transport protocol. The Web Services Description Language (WSDL) is a commonly used IDL for describing a web service using SOAP. This protocol was very popular in its conception but is nowadays being replaced by other solutions such as REST.
- Representational State Transfer (REST) is an architectural style that defines an interface as a means of accessing and manipulating well-identified resources, using HTTP as the transport protocol and a set of methods for reading and writing resource state. REST is praised for its simplicity, performance, scalability and reliability. In the scope of web applications, client modules for consuming RESTful services can easily be implemented without the need for complex external libraries.

Defining an API is very important for software reusability, ensuring that developers allow their services to be integrated into third-party applications. In the biomedical domain, besides the existence of REST web services, the use of well-defined standards and vocabularies is also crucial.

Examples: Web service facilities are generally included in software development toolkits for several programming languages.

Test-Driven Development (TDD) and Continuous Integration (CI)
Summary: Test-Driven Development is a software development technique based on short cycles: the developer writes a set of test cases covering a specific use case before writing the code itself. A set of assertions should be established in each test, helping developers to better identify the requirements of each component of the software. As a complement to TDD, Continuous Integration (CI) is a development practice that automates the build, allowing teams to detect problems early.

Process description: During software development there are often several strategies for bug fixing, and changing the behaviour of one module may introduce problems in other parts of the software. Three practices can be used to tackle this issue:
- Unit and integration tests – Tests written by the programmer to verify that a particular part of the code respects its contract, i.e. what the input and the output are.
Integration tests are often built to verify that the different pieces of the system work together.
- Continuous integration – A practice that incorporates automatic builds, allowing teams to detect problems earlier.
- TDD – The practice of writing the tests before writing the code. TDD can be applied not only to unit tests but also to interfaces.

How unit tests are developed for the core of an application depends on the programming language. The methodology is simple, but applying it may be more complex. There is always a trade-off between the overhead it introduces and its benefits, so it can be adapted to specific needs, e.g. the validation of critical processes, as is common in the biomedical domain. TDD allows writing code that automatically verifies that the output of an algorithm is as expected. These tests can be run at any time, making it easier to deal with future changes in the code and saving time in future updates. TDD and CI make the development process smoother, more predictable and less risky, even in advanced stages of the software lifecycle. Additionally, bugs can be traced and solved sooner, as changes are continuously introduced into the project code. CI proposes the following set of development guidelines:
- Do not check in on a broken build;
- Always run all commit tests locally before committing;
- Commit your changes frequently (at least once a day);
- Never go home with changes left to commit;
- Never go home on a broken build;
- Always be prepared to revert to the previous revision;
- Take responsibility for all breakages that result from your changes;
- Fix broken builds immediately.

Examples: An example of a tool that can be used for TDD is JUnit for Java. To test web interfaces, there is the nightwatch.js tool, amongst others. For CI there are tools such as Jenkins, Travis-CI or TeamCity.

Documentation
Summary: Documentation is one of the most important aspects of long-term software development.
Building comprehensive documentation is very important for software reusability and maintenance, and it helps to mitigate the arrival and departure of team members. Nevertheless, biomedical research software is often born from experiments and scripts, and researchers are often not willing to document all processes and source code.

Process description: High-level requirements are intended to depict what the system “will be”, rather than what it “will do”. The emphasis is therefore on non-functional or business requirements. As the project evolves, these requirements become progressively more detailed and eventually converge with the low-level requirements. Use case analysis is important for any development project, and it is a task usually shared with end-users. It is important to choose a simple and comprehensive use case template; sometimes a first iteration with a key user can help refine it before distributing the template among all users. Other technical documentation needs are mostly related to the project set-up, where a wiki system can be used to store dispersed information in a controlled environment in which everyone is able to edit and comment. This repository can include use cases, architecture/database diagrams, user interface mock-ups, and any project-related documents. Last but not least, inline source code documentation is very important to define and explain the different parts of the source code, making life easier for programmers when they need to add extra features or fix bugs. Nowadays, automatic API documentation generators create easy-to-read documentation from inline source code documentation.

Examples: For general documentation, Markdown or Sphinx (also used for Python) can be used. For the Java language there is Javadoc, while other languages have their own documentation conventions that can be followed.
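As a concrete illustration of inline source code documentation, the toy function below (it is purely hypothetical, not taken from the article) carries a docstring in the style that automatic API documentation generators such as Sphinx autodoc consume. The same docstring is what `help()` and `pydoc` display at runtime, so accurate inline documentation yields readable reference docs for free.

```python
def reverse_complement(sequence: str) -> str:
    """Return the reverse complement of a DNA sequence.

    Args:
        sequence: DNA string containing only the bases A, C, G, T.

    Returns:
        The reverse-complemented sequence, e.g. "ACGT" -> "ACGT".

    Raises:
        ValueError: if the sequence contains an unknown base.
    """
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    try:
        return "".join(complement[base] for base in reversed(sequence))
    except KeyError as exc:
        raise ValueError(f"unknown base: {exc}") from exc


if __name__ == "__main__":
    # The docstring is available at runtime; this is exactly what
    # pydoc, help() and Sphinx autodoc read when generating docs.
    print(reverse_complement.__doc__.splitlines()[0])
    print(reverse_complement("GATTACA"))
```

Keeping the `Args`/`Returns`/`Raises` sections in step with the code is what turns an internal script into something a colleague can reuse without reading the implementation.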
Software distribution
Summary: Web-based solutions can be deployed on web servers, which makes life a lot easier for the application’s end-users, who do not need to deal with a local installation. It is essential to handle updates smoothly, without disrupting the quality of the service provided.

Process description: The deployment of each new release must not be performed directly in the production environment. It should follow three release management stages: development, testing and production (Figure 3). These distinct stages have similar conditions but are deployed on different servers. Also, the production data is replicated in these environments to guarantee that the deployment can be performed safely. Software engineers perform the development deployment and test the new features in this environment. When this milestone is reached, the release is made and the test stage is updated. This version is passed to a group responsible for testing, gathering feedback and suggesting feature enhancements. Once it has passed this stage, the final release goes into production to be used by the end-users.

Examples: This is an organizational guideline, so no special tools are needed. Nevertheless, there are auxiliary tools that help the deployment and distribution process, mainly when the applications require a complex setup. For example, it is possible to use software containers such as Docker to distribute complex software and help deploy it, ensuring the whole community can run the software.

Licensing
Summary: Licensing and copyright attribution is a subject that should be addressed from the very beginning of the project. The goal is to clarify the terms that will regulate future use of the software – e.g. commercial, free use, open source. Open source software is currently a trend, even in bigger companies, as a way to credit the authors and promote work dissemination and collaborative development.
Several kinds of licenses are available to regulate these relationships, although an individual disclaimer can also be written. Free and Open Source Software (FOSS) licenses, which allow the product to be modified and redistributed without having to pay the original author, are commonly used.

Process description: The license should be stated clearly on the project’s front page and in the root of the source code. The full license text can be included there in a file called COPYING or LICENSE, following the standard format. The copyright should be assigned together with the license. The common nomenclature adds the year and the organization owning the copyright: Copyright (C) <year> <name of organization>. The year specification may be a range, such as 2014–2016, to restrict the copyright to a period of time. This line should be included in the header of every source code file, together with a short license notice.

Examples: There are different types of open source licenses that come with different conditions and restrictions. The most commonly used open source licenses are:
- **BSD License** – The most permissive FOSS license. Users that re-use the code can do whatever they want, except that when redistributing the source or binary they must always retain the copyright notice.
- **Apache Public License 2.0** – This license is also very permissive. It allows the licensed source code to be used in open-source and also in closed-source software.
- **GNU GPL** – This license is restrictive. Users of the licensed system are free to use it without any usage restrictions; to analyze it and use the results of the analysis (the source code must be provided and cannot be hidden); to redistribute unchanged copies of the licensed system; and to modify and redistribute modified copies of the licensed system.
- **GNU LGPL** – A trade-off between the restrictive GNU GPL and the permissive BSD.
LGPL assumes that a library licensed under the LGPL can be used in a non-GPL application, but all changes applied to the LGPL library must remain under the LGPL. It applies copyleft (“all rights reversed”) to the individual source code files, and not to the whole program.

Conclusion and future directions
In the biomedical domain, many new code scripts, algorithms, tools and services are currently being developed on a worldwide scale. However, the reuse of some of these software solutions outside the research lab is hindered by the fact that they do not follow consolidated software development methodologies. Early adoption of these methodologies is important in the development of biomedical tools so that they can reach a greater number of users; not only researchers but also healthcare professionals. During the development and distribution processes it is very important to involve end-users, to collect as much feedback as possible and to create effective solutions. We have described a set of recommendations targeted at biomedical software developers, aimed at achieving a good balance between fast prototyping on the one hand, and robustness and long-term maintenance on the other. It is important to keep in mind that these recommendations are quite general and may not fit all cases, so adaptations may be required. We hope they can help biomedical researchers to reorganize their workflows, make their tools more visible, allow reproducibility of their research and, most importantly, ensure that the outcome of that research can be more easily translated into daily clinical practice.

Author contributions
All authors participated in the discussions to achieve the software development recommendations. We believe all authors contributed equally to this work. All authors contributed to the writing and reviewing of this article. All authors read and approved the submitted manuscript.

Competing interests
No competing interests were disclosed.
Grant information
This work has partially received funding from the European Union’s Horizon 2020 Research and Innovation programme for 2014–2020 under Grant Agreement n. 634143 (MedBioinformatics) and from the EU/EFPIA Innovative Medicines Initiative Joint Undertaking (EMIF grant n° 115372). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Open Peer Review
Current Peer Review Status: 2 approved

Version 1
Reviewer Report 05 April 2017
https://doi.org/10.5256/f1000research.11591.r21000
© 2017 Maojo V. This is an open access peer review report distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Victor Maojo
Biomedical Informatics Group, Artificial Intelligence Department, Technical University of Madrid, Madrid, Spain

This is a timely report, given the proliferation of all types of biomedical informatics applications (from medical apps to laboratory or even complex clinical ones) delivered by software developers who do not follow even simple criteria of solid software engineering. In fact, many of these applications are built to carry out quite simple computational tasks (often quite successfully), but without a sound, rigorous computing basis, and are then prone to multiple subtle errors, or they lack standardized approaches and interoperability capabilities. Besides the interest of the topic, the paper is well written, with a solid analysis of the topic, useful recommendations and a selected reference section, which can be very helpful to a broad range of readers, from public health informaticians to bioinformaticians. Below are some comments.

1.
Although the authors are usually careful with this issue, readers outside the field may have some problems understanding the differences between medical informatics, bioinformatics, computational biology and biomedical informatics. Sometimes the words are used interchangeably in the paper, which may lead to confusion. Some explanation might be necessary.
2. When the authors refer to “focus on decision support, NLP, information retrieval and EHRs” they mix techniques with a concrete system (the EHR). They should make explicit which technique they refer to for EHRs.
3. The authors begin by addressing, apparently, biomedical informatics (thus including public health and clinical topics) but later focus on bioinformatics, which I believe is the best target for the paper. Differences are usually significant on the clinical side (for instance, EHRs, with many big software companies dedicated to this field). Some example may be useful.
4. Besides the provided hyperlink, some reference should be added for the MedBioinformatics project, together with a brief description.
5. The software engineering guidelines suggested are of interest, but some additional brief comparison of similarities/differences with established methodologies (besides what is presented for test-driven development) may provide additional insight.
6. As mentioned above, some brief, real example carried out by the authors may add information of interest, pointing out actual problems and possible approaches. In fact, the paper is quite generic, and some specific case/application in the biomedical domain could be of interest.
7. Could some differences be pointed out between software developers working on an academic thesis or project, compared to a software company? Some comment may be of interest, too. In fact, many tools are quite simple, built for a single task, and not intended for broader scenarios where interoperability is necessary. Some recommendations could differentiate between the two cases.
8.
For this reviewer, the paper may require some more explanation about design and prerequisites aspects, which are quite important, and some concrete example, but this is quite personal and the decision should be up to the authors.

**Competing Interests:** I have worked, around a decade ago, with one of the authors (Jose Luis Oliveira).

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

---

**Author Response 07 Jul 2017**

**Luis Bastiao Silva**, BMD Software, Aveiro, Portugal

Thank you for the positive assessment and helpful recommendations. We answer your comments point by point:

1) We agree, this discussion is important. We have included in this revision two new references where the explanation of the different fields is well addressed.

2) Done.

3) Indeed, this is true, and we have tried to make it clearer throughout the article. Moreover, we included new references that discuss this issue in detail.

4) Thank you for highlighting this. A brief introduction to the project is now provided.

5/6) Indeed, we agree with this remark. However, since these recommendations result from the experience of several software projects, where many concrete use cases were explored, we also feel that detailing those could be out of the scope of the article.

7) Thank you for raising this, which is indeed a very important remark. We have now discussed this in more detail in the introduction section.

8) We agree with your remark. The design and prerequisites aspects are briefly addressed in the documentation process. We changed this section to better highlight these issues.

**Competing Interests:** No competing interests were disclosed.

This review focuses on an important topic in modern bioinformatics: good practices for software development. As the authors note, there is a growing body of data derived from experimental studies that requires automated analysis.
The analysis is often carried out using custom software, written by first-time or inexperienced programmers, and results in unsupported, sub-optimal, or duplicated code. As the authors also mention, several groups have tried to put forward a collection of tips and guidelines to help researchers in developing 'proper' software. This review offers a similar set of guidelines, targeted specifically at the field of biomedical informatics, and draws on the experience of the authors in building their own tools. The suggestions cover seven topics, from management to in-depth software development tips, and do a very good job of explaining their importance and the authors' take on what constitutes a good approach. The authors also give very good examples of software tools to help readers set up a development environment. These range from the usual 'use GitHub' to TravisCI, Sphinx, and Docker. One suggestion would be to integrate some of these tools in Figure 1, to give readers a visual cue as to where these tools fit in each topic/step. The authors also provide a very nice summarized view of the release process, namely licensing and distribution (e.g. using Docker), and the follow-up maintenance. There is one less positive aspect of this review, which in any case applies to most such attempts at 'guidelines' for bioinformatics software development. As the authors note, most of these tools are created to solve one very specific problem, or process a very specific dataset. These are not amenable to test-driven development, or to continuous integration. More importantly, most of the authors of these tools/scripts are biologists, not programmers, which usually translates into a lack of interest in proper programming etiquette. Thus, I believe that it is important to show and teach such users very simple programming rules, namely about how to make their code readable for others.
For example, in the Python world, a simple recommendation is to use 'flake8' to check for PEP8 coding standards, together with an editor (e.g. Atom, Sublime) that can do real-time code checks (typos, unused variables, indentation issues, etc.). There is no need to suggest quasi-professional IDEs, as these will likely scare users away! All in all, as a biologist doing bioinformatics and doing his best to follow proper software guidelines, I find reviews like this one very important to the field. They should probably feature in a 'starting package' for new PhD students in many labs. As an added suggestion, the authors could think of putting these guidelines into practice and following up with a simple workshop/tutorial series, a la Software Carpentry, even if in webinar format.

**Competing Interests:** No competing interests were disclosed.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

**Author Response 07 Jul 2017**

**Luis Bastiao Silva**, BMD Software, Aveiro, Portugal

Much obliged for your assessment and recommendations. We have redrawn Figure 1 following your suggestion. Regarding the second point, we recognise the importance of the subject and how recommendations vary according to each developer/researcher profile, and even programming language. For beginners or sporadic developers, most of the recommendations may not apply. However, this type of review creates awareness of developing for the community, not just for ourselves. Finally, regarding the last comment, guidelines for newcomers are indeed a good idea. We think this can be done at the institution level, since different methodologies may be used locally.

**Competing Interests:** No competing interests were disclosed.
AI-ENABLED VIDEO ANALYTICS
Powered by DeeperLook™ Artificial Intelligence (AI) & Deep Learning (DL) Framework

Videonetics AI-enabled Video Analytics software analyses, extracts and generates actionable information from a humongous amount of video or image data. Powered by a novel Artificial Intelligence (AI) and Deep Learning (DL) framework (DeeperLook™), it is designed to serve as an intelligent decision support system for users.

■ Versatile, Futuristic and Offering Unmatched Precision

DeeperLook™ is an AI & DL based framework for the development and implementation of distributed video and data analytics solutions. The framework is highly customisable, compute efficient, and compatible with on-premise, edge-based and cloud-based computing environments. It uses a collection of indigenously designed AI and DL engines, each computationally optimised for a specific set of tasks. The framework is reconfigurable through the interconnection of these engines, and hence suitable for domain-specific customised video analytics application development. DeeperLook™ is a perfect fit for Video Internet of Things (VIoT) applications, with its fog computing capabilities, in which computing load is judiciously distributed across edge and central computing resources. It is agnostic to operating systems and cloud platforms, thus providing maximum flexibility to the users and ensuring the lowest total cost of ownership (TCO). A patented and award-winning technology, DeeperLook™ powered solutions ensure the highest level of accuracy by detecting various patterns, features, intrinsic object attributes, activities, actions, behaviours and events, using a novel continuous and self-learning mechanism.

■ With the power of AI

Our framework offers over a hundred state-of-the-art use cases across industry segments.
It provides accurate and timely alerts on detection of anomalies with highly optimised computing resources. Field-proven in diverse environments and challenging conditions, DeeperLook™ AI & DL powered Video Analytics is the reliable choice for smart/safe cities, aviation, mass transportation, small to large enterprises, critical infrastructure projects, retail stores, correctional homes, BFSI, educational institutions and healthcare, to name a few. It supports a wide range of solutions as mentioned below:

- People Analytics
- Object Analytics
- Crowd Analytics
- Vehicular Analytics
- Traffic Enforcement Analytics
- Highway Traffic Analytics
- Law Enforcement Analytics
- Urban & Municipal Analytics
- Industrial Safety & Health Analytics
- Face Recognition Analytics
- Women Safety Analytics
- Retail Analytics
- Tracking Analytics
- Pandemic Management Analytics
- Forensic Investigation & Smart Search Analytics

People, Object and Crowd Analytics

Videonetics AI-enabled Video Analytics can help users track and identify anomalies in a scene in real time, whether they pertain to objects, people, or crowds. These analytics are relevant and highly effective for diverse scenarios, be it a corporate office, airport, street, or mall. The insights provided by the analytics software give unprecedented control to the operators, enabling them to handle a developing situation in a timely manner.
People Analytics
- Intrusion/ Trespassing
- Loitering, Dwell Time
- People Tracking & Trajectory
- ‘Laxman Rekha’ Violation
- Perimeter/ Fence Jumping
- People Count, Occupancy
- Entry/ Exit Count
- Walking/ Running
- Gesture Detection
- Tailgating/ Piggybacking
- Person Collapse/ Fall & Slip

Object Analytics
- Moving Object Detection
- Object Classification & Counting
- Camera Sabotage Detection
- Unattended Object Detection
- Who left that Object
- Artefact Protection & Theft Detection
- Object Tampering
- Colour Detection
- Attribute Analysis & Search

Crowd Analytics
- Crowd Formation & Estimation Detection
- Crowd Dispersion Detection
- Crowd Anomaly Detection
- Crowd Statistics & Heatmap
- Queue Length Detection
- Queue Limit Exceed Detection
- Wait Time in a Queue/ Lift Lobby
- Social Distancing in a Queue

Integrations
- Access Control & Physical Barriers
- SCADA Systems
- Fire Alarm System
- Building Management System
- Text, Voice, WhatsApp & Email
- Audio Visual Annunciators
- Intrusion & Break Detection Sensors
- Elevator Controllers

Intelligent Traffic Management System

Videonetics Intelligent Traffic Management System (ITMS) offers various vehicular and traffic management analytics to detect and track vehicles, extract attributes of vehicles including registration number, colour, make and model, detect various types of rule violations, and understand traffic patterns and traffic behaviours by generating multi-dimensional data, helping to enforce traffic rules and notify violators. By inculcating traffic discipline among people, it contributes to making traffic management more robust, and to improving road safety. The system offers unique investigation features to detect wanted vehicles, trace the trajectory of vehicles, and identify them by attributes, location and time. It supports ‘vernacular’ license plates in regions across the globe, such as the Indian Subcontinent, Southeast Asia, Western Europe, Gulf and Latin America.
DeeperLook™ provides a versatile engine which easily adapts to the local requirements and syntax of license plates, vehicle types, traffic regulations and data requirements. It also offers a secure and flexible framework to integrate external systems, such as vehicle registries or motor vehicle databases, electronic ticketing and violation ... prosecution systems, radar, automatic traffic control, toll ticketing, GIS maps, and command and control systems. Videonetics also supplies a versatile e-ticket management system to notify violators, and to generate tickets embedded with evidence and related data, as per the guidelines of the concerned traffic enforcement agency.

### Vehicular Analytics
- Vehicle Classification & Counting
- Traffic Volume Estimation
- Automated Number Plate Recognition
- Vernacular ANPR
- Dilapidated/ Missing Number Plate Detection
- Non-standard Number Plate Detection
- Vehicle Colour, Make & Model Detection
- Congestion Detection
- Parking Violation Detection
- Smart Parking
- Virtual Loop

### Traffic Enforcement & Safety Analytics
- Red Light & Stop Line Violation Detection
- Speed Violation Detection
- No Helmet & Triple Ride Detection
- No Seat Belt Detection
- Detection of Cellphone Use While Driving
- Wrong Way & Illegal Turn Detection
- Free Left Lane Blocking
- Driver Smoking Inside Vehicle Detection
- Detection of Stopped or Broken-Down Vehicle on Road
- Hot-listed & Stolen Vehicle Detection and Tracking
- Traffic Volume Estimation

### Highway Analytics
- Vehicle Classification & Counting
- Video-based Spot & Average Speed Detection
- Non-standard Number Plate Detection
- Dilapidated/ Missing Number Plate Detection
- Traffic Volume Estimation
- Speed Limit Management (based on Vehicle Class)
- Average Corridor Speed Management (by Vehicle Category)
- Detection of Banned Vehicle Category
- Zig Zag Driving Detection
- Lane Monitoring/ Lane Violation Detection
- Detection of Object Fallen on Road
- Detection of Oversized Vehicles
### Integrations
- Traffic Ticketing
- Vehicle Registry
- Motor Vehicle Database
- Police FIR Systems
- Toll Plaza Systems
- Radar
- Advanced Traffic Control Systems (ATCS)
- Signal Controllers
- GIS Maps
- Incident Management & Command & Control System

### Law Enforcement Analytics

Videonetics Law Enforcement Analytics detects abnormal/ illegal activities or anomalies, and provides intelligence to the authorities to take swift action. It provides multiple use cases, from detecting incidents of different kinds of violations, fighting, rioting, illegal crowd gatherings and demonstrations, while also providing forensic investigation tools to identify the perpetrators. Combining law enforcement, traffic and forensic analytics with Intelligent VMS enables officials to detect and investigate incidents in a timely and efficient manner, and greatly helps in maintaining law and order. The law enforcement analytics have been found effective in automatic monitoring of correctional homes and/or prisons, to automatically detect the occurrence of a riot, assault, suicide attempt, etc. Videonetics pose estimation technology can be used to custom-develop other types of applications, as per user requirement.

Industrial Safety and Health Analytics

A major challenge businesses face is to enforce safety policies and reduce safety hazards, despite creating elaborate safety policies and procedures. Videonetics Industrial Safety & Health Analytics makes it possible for Occupational Safety and Health professionals to anticipate and control hazards arising within the factory and other service premises that could affect the workforce, workplace, surrounding communities, and the environment. Based on Artificial Intelligence and Deep Learning, it allows training of the software with users’ site-specific data, and helps them detect anomalies in real time – accurately, proactively and cost-effectively.
The solution is highly adaptive and can be deployed in any industry vertical – from Food Processing to Pharma, Oil and Gas, Heavy Industries, Refineries, Automobiles, Cement and Chemicals. It also caters to the service sector, including Warehousing, Transportation, Aviation, Construction, Mining, Ports and Hospitality.

Urban and Municipal Analytics

City administrations have to monitor municipal services to improve the quality of lives of citizens. Videonetics Urban and Municipal Analytics enables services such as detection of overflowing garbage bins and clearing them, maintenance of cleanliness on the roads, checking for encroachment on public places such as footpaths, detection of illegal construction, illegal hawking outside permitted areas, dumping of construction rubble, monitoring of road conditions, movement and tracking of garbage trucks, detection of graffiti and vandalism, and many more. Designed with urban planners and local self-governments in mind, the solution facilitates efficient management of services in a city or in a large residential complex.

- Garbage Bin Detection
- Garbage Overflow Detection
- Garbage Bin Emptied Detection
- Detection of Stray Animals on Road
- Garbage Truck Tracking
- Waste Collection Pattern Analysis
- Detection of Debris/ Litter on Road
- Encroachment Detection
- Detection of Temporary Construction
- Graffiti & Vandalism Detection
- Pothole Detection
- Detection of Road Condition
- Polluting Vehicle Detection
- No Parking Vehicle Detection
- Handcart/ Pushcart Detection
- Illegal Hawking Detection

Women Safety Analytics

Safety of women and crime against women are growing concerns across many cities in a number of countries, including India. Videonetics Women Safety Analytics can detect unwanted incidents, or series of actions, involving women. For instance, it can generate an alert about a single woman surrounded by men, in certain areas, or during a certain time band, thereby providing an early warning of potential abuse.
Law enforcement agencies can get insights from a series of events, such as day-to-day alerts, to prepare a profile of their city, in order to identify such hotspots. Incidents of chain or purse snatching can be detected based on the sequence of events that typically occur during such incidents. The solution detects the anomalies in a scene and generates timely alerts to help catch the perpetrators. Deployment of these technologies can bring a sense of security among women and their families.

Facial Analytics

Diverse demographics, in terms of facial features, skin tone etc., pose a major challenge when it comes to face detection and recognition technologies. Based on AI techniques, Videonetics Facial Analytics is a modern and robust solution that delivers accurate performance under various demographic conditions. It is well-trained with a large database of faces representing diverse demographics. It has been designed to address the needs of a range of verticals such as Law Enforcement, Hospitality, Retail, Immigration, Border Security, Cities, and more. It aims not only to improve surveillance and security, but also to enhance operations with actionable intelligence. Built on a modular architecture, it can be integrated with other systems such as access control and attendance management.

Integrations
- Access Control System
- Sensor-based Perimeter Intrusion
- Terminal Automation Systems
- Thermal Cameras
- SCADA Systems
- Fire Alarm Systems
- Gate Control

Retail Analytics

Retail businesses constantly look for opportunities to enhance their operational excellence, customer satisfaction, and hence return on investment. The solution provides true business intelligence and generates several statistical reports such as footfall count, customer concentration, queue length and wait time analysis, customer dwell time, heat maps, customer path analysis data, and more. The solution also helps in customer profiling by detecting customer emotions, age and gender distribution, and distribution of customers across multiple stores, to get insights into buying patterns and performance drivers for the business. Pilferage and theft prevention is an important key performance indicator of store operations and profitability. The solution offers multiple use cases to detect incidents of shoplifting, pilferage, and unauthorised entry beyond store operational hours, and to track known shoplifters across a chain of stores, using powerful facial analytics.

- Entry/ Exit & Footfall Count
- Heatmap Generation
- Merchandising Monitoring
- Video Synopsis
- Intrusion Detection
- Customer Movement & Dominant Path Analysis
- Dead Zone Identification
- Queue Management
- Monitoring of Service Time at POS
- Monitoring of Social Distancing in a Queue
- Face Mask Violation Detection
- Recognition & Tracking of Known Shoplifter

Tracking Analytics

High security areas such as defence establishments, critical infrastructure, refineries, etc.
have a great need to track the movement of people within their premises. Videonetics Tracking Analytics is the ideal solution for such establishments. By enabling PTZ tracking on perimeter tracks, it ensures that any intruding person can be captured in a close-up view, and it also generates alerts at the command and control centre. Similarly, objects, people or vehicles can be tracked in a camera field of view, or they can be tracked across multiple cameras, based on attributes of the intruder, location and time.

- People & Object Tracking
- Colour & Attribute-Based Tracking
- Auto PTZ Tracking
- Fixed Camera to PTZ Handoff
- ANPR-based Vehicle Tracking
- Person Tracking Based on Face Recognition
- Multi-camera Person Tracking

Forensic Investigation and Smart Search Analytics

Today, forensic investigations, including the compilation of robust video evidence, are regarded as the most scientific and effective way of investigating and building a case. Videonetics Forensic Investigation and Smart Search Analytics comes with powerful and intuitive built-in tools. It enables investigation into an incident by identifying suspects using attributes such as face, and tracking based on attire, accessories and gender. A particular activity can be easily pinpointed in a long evidence video, and the storyline of events created, through the use of the video summarisation tool or various smart video search options.

- Attribute Search – Attire Type (Shirt, Pant, Saree, Salwar-Kameez), Colour etc.
- Face Detection, Identification & Recognition
- Thumbnail Generation & Detection of Time Segment & Location
- Event Search
- Speed & Direction Detection
- Video Summarisation
- Who left that Object with Person Identification

Pandemic Management Analytics

The COVID-19 pandemic has changed the world. People and businesses have to reconcile themselves to the ‘new normal’ during and after the pandemic period.
Keeping a pandemic under control requires strict adherence to epidemic control norms such as wearing masks, maintaining social or physical distancing, and detecting crowd formation. Videonetics Pandemic Management Analytics provides multiple use cases and serves as an efficient decision support system for authorities in identifying and concentrating on areas where people are violating the norms. Videonetics social or physical distancing technology is efficient and more effective, as it accurately measures the distance from head to head of persons, not just body or leg distance. Businesses in sectors such as hospitality, healthcare, food processing and pharma can deploy the solution to automatically check whether their employees are wearing the mandatory Personal Protective Equipment (PPE) such as aprons, masks, caps, hand gloves and face shields. The solution can generate alerts in case of violations.

- Social Distancing Monitoring
- Mask/ No-Mask Detection
- Personal Protection Equipment Detection (Head Cover, Mask, Uniform Detection)
- Queue Management
- Detection of Anomaly in Body Temperature
- Detection of Crowd Formation in Violation of Social Distancing Norms
- Safety Heatmap Generation
- Pandemic Violation Dashboard

Integrations
- Perimeter Intrusion Detection Systems
- Radar & LiDAR Systems
- Guard Tour Monitoring Systems
- PA Systems
- Access Control Systems

Key Highlights

- **Based on Artificial Intelligence**: Videonetics AI-based Video Analytics is enabled with a collection of indigenously developed AI techniques, based on advanced image and video processing, computer vision and pattern recognition.
Leveraging our numerous proprietary AI models and underlying deep neural networks (DNNs) enables the platform to efficiently perform automatic object/ pattern detection, multi-level classification, pose estimation, semantic segmentation, etc. These models have been generated with a wide range of real-life visual datasets, to provide unparalleled accuracy, using optimal computing bandwidth suitable for various application domains.
- **Continuous Self-learning Approach**: A proprietary self-configuring, self-calibrating and self-learning approach provides automatic, continuous learning capability from field data, and hence enhanced detection, classification and recognition accuracy over time.
- **Camera Agnostic**: Agnostic to any make and model of camera. It is built on robust technology and can work with any video data, whether real-time or archived, and generated from any source.
- **OS Agnostic**: Works on Windows, Linux, Unix and macOS.
- **Unparalleled Channel Support**: More channels per server (highly optimised for CPU and GPU), to ensure the lowest total cost of ownership (TCO) and better ROI.
- **Unprecedented Scalability**: Highly scalable software architecture. A highly optimised AI codebase supports multi-threaded processing of the algorithms, and allows multiple analytics functions to run in parallel on each camera in the surveillance system. The footprint is very small, with reduced computational and memory requirements, because of its indigenous design and innovative architecture.
- **Integrated with Intelligent VMS**: Integrated with Videonetics enterprise-class Intelligent Video Management Software (IVMS). This homogeneous, unified video computing architecture helps both the VMS and VA platforms share common computing, data path and IT infrastructure resources, with efficient utilisation of compute and memory resources, resulting in a highly optimised cost of IT infrastructure, and hence a lower total cost of ownership (TCO) and higher ROI.
It is also extremely important for ease of maintenance after deployment.
- **Deployment Flexibility**: Flexible deployment options across the edge, on-premise, or on-cloud. Deployable in hybrid and fog computing architectures, and hence adaptable to the 'Video Analytics as a Service' computing paradigm.
- **Software Interface**: Easy-to-use, intuitive and feature-rich desktop, handheld and web user interfaces.
- **Alert Management**: Ability to prioritise alerts according to criticality. The automatic alert manager built into the platform notifies operators instantly through different alert mechanisms such as email, SMS, WhatsApp, chat, etc. The alert handling mechanism can also trigger other devices such as Public Address Systems, audio-visual annunciators, etc. A rich API is available to integrate with any other devices, as per user requirement.
- **Intelligent Search**: Intelligent video and event search features to expedite investigations.
- **Statistical Reports and Visualisation Dashboard**: Various forms of statistical reports of the events can be automatically generated using analytics-based tools. These reports can be viewed in dashboards for easy understanding, and serve as a decision support tool.
- **Rich API**: Pre-integrated with Videonetics in-house enterprise-class VMS, yet a rich API is also available to integrate the VA platform with an existing surveillance installation, or a third-party VMS, as per the choice of the user.
- **Field-proven Technology**: The AI-based analytics framework and the applications are field-tested under a wide range of environmental and lighting conditions, and proven to work more reliably in high population density conditions compared to other competing solutions. Field-proven with real-life deployments across various domains, in more than 100 cities, enterprises and other critical installations.
Videonetics’s Unified Video Computing Platform™ helps you make sense of surveillance, by providing you with an end-to-end solution for a wide range of applications. The platform is powered by our Artificial Intelligence and Deep Learning engine, which is trained on humongous data sets, making our solutions incredibly robust and smart. All our products and solutions are integrated yet modular, ONVIF compliant, OS and hardware agnostic, scalable and interoperable. Videonetics has been ranked #1 Video Management Software provider in India, and among the top 5 in Asia (IHS/Informa Tech Research). We remain driven by innovation, and committed to making the world a safer, smarter, happier place.
CASM: Implementing an Abstract State Machine based Programming Language*

Roland Lezuo, Gergő Barany, Andreas Krall
Institute of Computer Languages (E185)
Vienna University of Technology
Argentinierstraße 8, 1040 Vienna, Austria
{rlezuo,gergo,andi}@complang.tuwien.ac.at

Abstract: In this paper we present CASM, a general purpose programming language based on abstract state machines (ASMs). We describe the implementation of an interpreter and a compiler for the language. The demand for efficient execution forced us to modify the definition of ASM and we discuss the impact of those changes. A novel feature for ASM based languages is symbolic execution, which we briefly describe. CASM is used for instruction set simulator generation and for semantic description in a compiler verification project. We report on the experience of using the language in those two projects. Finally we position ASM based programming languages as an elegant combination of imperative and functional programming paradigms which may liberate us from the von Neumann style as demanded by John Backus.

1 Introduction

Most of the well known programming languages have a sequential execution semantics. Statements are executed in the order they appear in the source code of the program (imperative programming). The effects of a statement – changes to the program's state – are evaluated before the next statement is considered. Examples of such programming languages are assembly languages, C, Java, Python and many more. Actually there are so many languages based on the imperative paradigm that it may even feel "natural". However, there seems to be a demand for alternative programming systems. John Backus even asked whether programming can be liberated from the von Neumann style [Bac78]. John Hughes tries to convince the world that functional programming matters [Hug89]. Hudak et al.
conclude that, although not used by the masses, aspects of functional programming are being incorporated into main stream imperative languages [HHPJW07]. Functional languages describe side-effect free application of functions to their arguments, which presumably makes them easier to understand. They struggle to gain real-world acceptance, however. While not claiming to be a representative source, but merely an indicator, the popular source code repository hoster github¹ (more than 4 million repositories) lists no functional language amongst its 10 most popular ones. Actually, even assembly languages (17th most popular) are more popular than Haskell (18th most popular, with approx. 16000 projects using it) according to github's language statistics. In this paper we report on CASM, a general purpose programming language based on Abstract State Machines (ASMs). ASMs were introduced by Gurevich (originally named evolving algebras) in the Lipari Guide [Gur95]. One of the core concepts of ASM (as the name indicates) is the state. Another core concept of ASM is the rule, which describes exactly how the state is changed by means of updates applied to the state. Application of a rule itself is side-effect free, one of the core concepts of functional programming. The CASM language was originally designed to describe programming language semantics, a purpose for which ASMs are known to be well suited [SSB01, ZG97, Gau95, KP97, Gle04, HS00, GH93]. We perform correctness proofs in a compiler verification project using symbolic execution. Having the ASM models of a machine language at hand, it suggests itself to reuse them for implementing an instruction set simulator [BHK10]. To create a fast simulator we added a type system and developed a compiler for the CASM language.

*This work is partially supported by the Austrian Research Promotion Agency (FFG) under contract 827485, Correct Compilers for Correct Application Specific Processors, and Catena DSP GmbH.
The remainder of this paper is structured as follows. Section 2 discusses previous work on ASM based programming languages, section 3 describes the CASM language, section 4 briefly introduces the implemented type system, section 5 explains implementation details of the interpreter and compiler, section 6 reports on our experience using CASM in various projects, and section 7 finally concludes the paper and gives directions for further work.

2 Related Work

The ASM method is well known and there have been quite a few efforts to create a widely accepted framework for ASM tools. However, most of the projects have ceased to exist. One reason it seems to be so difficult to provide a generally accepted framework and language for all ASM users may be the fact that ASM's applications are so broad. On the one hand it is used to create very high level models [Bo03], and on the other it is used to model very low level aspects of hardware (as we do). The demands of the users vary greatly, and this may be the reason there have been quite a number of attempts to create ASM tools. In a sense this work adds to the misery, but it is driven by the very specific needs of compiler verification (symbolic execution) and simulator synthesis (very fast performance). This section tries to distinguish the CASM language from other ASM based languages. Schmid introduced AsmGofer in [Sch01b]. AsmGofer is an interpreter for an ASM based language. It is written in the Gofer² language (a subset of Haskell) and covers most of the features described in the Lipari guide. The author notes, however, that the implementation is aimed at prototype modeling and too slow for performance critical applications. Castillo describes the ASM Workbench in [Cas01]. Similar to CASM, he added a type system to his language. The ASM Workbench is implemented in ML³ in an extensible way.

¹http://github.com
²http://web.cecs.pdx.edu/~mpj/goferarc/index.html
Castillo describes an interpreter and a plugin for a model checker, which allows translating certain restricted classes of abstract state machines into models for the SMV⁴ model checker. Schmid also describes compiling ASM to C++ [Sch01a]. His compiler uses the ASM Workbench language as input. He proposes a double buffering technique avoiding the collection of update sets to increase runtime performance. There is no report of the achieved performance. CASM uses a so-called pseudo state (more details in section 5.1.3) to implement update sets efficiently. Anlauf introduces XASM, a component based ASM language compiled to C [Anl00]. The novel feature of XASM is the introduction of a component model, allowing the implementation of reusable components. XASM supports functions implemented in C using an extern keyword. CASM does not feature modularization, but can be extended using C code as well. XASM was used as the core of the gem-mex system, a graphical language for ASMs. Farahbod designed CoreASM, an extensible ASM execution engine [FGG05]. The CoreASM project is actively maintained, and early prototypes of our compiler verification proofs even used CoreASM. Unfortunately the performance of the CoreASM interpreter is very poor, which ultimately led to the development of CASM. Execution speed was increased by a factor of 3000 [LK12]. The CASM language is heavily inspired by the CoreASM language, but over time they have diverged.

3 The CASM language

In this section we introduce the most important aspects of the CASM language. The Lipari guide [Gur95] contains a formal definition of the constructs presented in this section. The CoreASM handbook may also be a useful reference [Far], as CASM has its roots in the CoreASM language. In section 3.7 the most important differences between the two languages are pointed out.

3.1 State and Execution model

The state of an ASM is a so-called static algebra over a set of universes (types) [Gur95].
These universes form a superuniverse X in which at least 3 distinct elements (true, false and undef) are defined. The state contains r-ary functions (mapping \(X^r\) to \(X\)) and relations (mapping \(X^r\) to \(\{true, false\}\)). All locations of a function, unless explicitly defined otherwise, are assigned the value undef. CASM programs describe a (potentially infinite) sequence of state changes by alternating calculation and application of update sets. The state of a CASM program is formed by a number of functions (possibly 0). An example is given in listing 1.

```
function foo : -> Boolean
function bar : Int -> Int
```

Listing 1: A state

Function foo is a 0-ary function, and is very similar to a global boolean variable in common programming languages. Function bar is a 1-ary function, mapping an Int to another Int. This can be interpreted as an array, although there are no bounds and the function may be only partially defined. The special value undef will be returned for all locations where bar is undefined. In that sense a 1-ary CASM function is more like a hash-map. The arguments to a function are called a location. CASM programs are formulated as a set of rules. There is a top-level rule which will be executed repeatedly until the program terminates itself. A rule cannot change the state while being executed, which implies that all calculations are side-effect free. Instead, a rule returns the changes to the state as a so-called update set. An update set is a set of updates, each of which describes a function, a location and the new value. Update sets are applied to the state whenever the top-level rule concludes (returns).

⁴http://www.cs.cmu.edu/~modelcheck/smv.html

Listing 2 shows an example of a rule (and the state it operates on).
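The behaviour of a CASM function as a partial map, with undef returned for every location that was never assigned, can be sketched in Python. The class and method names below are invented for illustration; they are not part of CASM or its runtime:

```python
# Illustrative sketch (not part of CASM): a CASM function as a partial map.
# Every location that was never assigned reads as the special value undef.
UNDEF = object()  # stand-in for CASM's undef

class CasmFunction:
    """An r-ary CASM function: maps argument tuples (locations) to values."""
    def __init__(self):
        self._table = {}

    def get(self, *location):
        # Unassigned locations yield undef rather than raising an error.
        return self._table.get(location, UNDEF)

    def set(self, *location_and_value):
        *location, value = location_and_value
        self._table[tuple(location)] = value

# "function bar : Int -> Int" behaves like an unbounded, partial array:
bar = CasmFunction()
bar.set(3, 42)
assert bar.get(3) == 42
assert bar.get(7) is UNDEF  # never assigned, so undef
```

This is why the paper likens a 1-ary CASM function to a hash-map rather than an array: there are no bounds, and reads of undefined locations are well-defined.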
```
function x : -> Int
function y : -> Int
function z : -> Int

rule swap = {
    x := y
    y := x
}
```

Listing 2: Parallel swap

Assuming an initial state of \(\{x = 3, y = 2, z = 1\}\), the calculated update set of a (single) invocation of swap is \(\{(x, 2), (y, 3)\}\). The update set is then applied to the state, yielding \(\{x = 2, y = 3, z = 1\}\). When a function is assigned different values (for the same location) the update set is said to be inconsistent. In CASM a non-recoverable runtime error is raised when an update set becomes inconsistent. Listing 3 shows an example of a rule triggering such a runtime error.

```
function b : -> Boolean

rule inconsistent = {
    b := true
    b := false
}
```

Listing 3: Inconsistent update set - runtime error

3.1.1 Update Rule and Function Signatures

The previous examples informally introduced the update rule (:=) and function signatures. A function signature defines the types of the function arguments (comma-separated between : and ->) and the type the function maps to (after ->). The general form of an update rule is \( f(l) := v \), where \( f \) is a function and \( l \) a location. A location has to match the argument types of the function signature. \( v \) is an expression of the function's type which will be evaluated and assigned to the function at the given location. The state is not changed when the update rule is executed; instead, the effects of the update are added to the update set of the enclosing rule. Only the update rule can change the state (or, more precisely, add updates to an update set).

3.1.2 Types, Composition and Enumerations

CASM offers 3 built-in primitive types: Boolean for boolean values (true, false), Int for integer values and String for character strings. An enumeration can be defined using the enum keyword. Each enumeration defines a new type with the same name as the enumeration. Each member of the enumeration becomes a new globally unique identifier of the enumeration's type.
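The parallel-swap and inconsistency behaviour can be sketched in Python: a block rule computes an update set against a frozen state, conflicting updates raise an error, and the state is only mutated once every rule has concluded. The helper name `run_block` and the rules-as-lambdas encoding are illustrative assumptions, not CASM machinery:

```python
# Illustrative sketch: a block rule computes an update set against a frozen
# state; conflicting updates to the same location raise an error, mirroring
# CASM's runtime error on inconsistent update sets.
def run_block(state, rules):
    updates = {}
    for rule in rules:
        for location, value in rule(state):  # rules only read the state
            if location in updates and updates[location] != value:
                raise RuntimeError(f"inconsistent update set at {location}")
            updates[location] = value
    state.update(updates)  # apply only after every rule has concluded

# rule swap = { x := y   y := x }
state = {"x": 3, "y": 2, "z": 1}
run_block(state, [lambda s: [("x", s["y"])], lambda s: [("y", s["x"])]])
assert state == {"x": 2, "y": 3, "z": 1}

# rule inconsistent = { b := true   b := false }  -> runtime error
try:
    run_block({}, [lambda s: [("b", True)], lambda s: [("b", False)]])
    assert False, "expected inconsistency error"
except RuntimeError:
    pass
```

Because both rules in the swap read the same frozen state, no temporary variable is needed; this is exactly what distinguishes Listing 2 from the sequential version shown later.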
An example is given in listing 4.

```plaintext
enum MyEnum = { one, two, three }

function x : -> MyEnum

rule example =
    x := three
```

Listing 4: Enumeration Type

CASM offers two kinds of type composition. A List is a sequence of zero or more elements of equal type. A Tuple is a sequence with a fixed number of elements of possibly different types. The current version of CASM disallows user-provided recursive data types. An example is given in listing 5.

```plaintext
function stack : -> List(Int)
function aMapping : -> List(Tuple(String,Int))
```

Listing 5: Composition Type

3.2 Expressions, Variables and Derived Values

For Int types the usual set of operators is provided via built-ins. Expressions are very similar to those of the C programming language (without any side-effects, however). CASM does not offer a notion of local state, so there are no local variables in the common sense. There is, however, the possibility to bind expressions to a local name using the let rule. In contrast to CoreASM, let rules are statically scoped. Some expressions may be used extensively in a program and in more than one rule. Such expressions can be declared globally using the derived keyword. A derived accepts typed arguments, just like functions, and consists of a single expression. Listing 6 gives an example of a derived and nested let rules (assigning true to function foo at location 18).

3.3 Block Rule and Control Flow

The block rule, syntactically expressed by curly brackets, combines the update sets of all enclosed rules into one single update set. Each of the enclosed rules is invoked on the same state, and because all rules are side-effect free the order in which the rules are invoked does not matter. The resulting update set is formed by applying the union operator to all calculated update sets. When merging is performed, inconsistent updates trigger a runtime error. Listing 2 already made use of the block rule.
The CASM language offers an if-then-else rule (an example can be seen in listing 11). In the context of an ASM based language it is an if-then-else rule (not a statement). Thinking of if-then-else as a rule also helps to comprehend the semantics: if-then-else produces an update set (like any other CASM rule does). The update set is either the one produced by the rule in the if-branch or the one in the else-branch, depending on the boolean value of the expression following the if keyword. There is also a switch-case statement, with the usual semantics and an optional default case label. The call rule invokes another rule. There are two flavors of the call rule, a direct and an indirect one. In the direct case the rule to be invoked is known statically and directly coded in the source file, while in the indirect case the rule to be invoked is calculated by an expression of type RuleRef. A RuleRef can be produced by the @ operator (similar to C's & operator). Listing 7 shows a direct and an indirect call (note the additional brackets around the expression). One can also see that rules can have typed arguments. CASM's call rule differs from the definition given in the Lipari guide; see section 3.7 for more details.

3.4 Sequential Block Rule

Some problems can naturally be described by imperative programming. CASM supports sequential programming by means of the seqblock rule. Statements enclosed by a seqblock are executed in exactly the specified order. Additionally, the state change induced by previous rules is visible to subsequent ones. The update set of a seqblock rule is calculated by subsequently merging the update sets of the enclosed rules. Two updates to the same function and location from different rules do not conflict, however; the later one overwrites the earlier one. Effectively, the update set describes what would have happened if the rules were applied using a sequential semantics. It is important to note that the state is not actually changed.
The rules are evaluated using the state which would result from applying the previous update sets: a temporary state is created and each calculated update set is applied to it. This temporary state is discarded at the end of the seqblock. Listing 8 shows an implementation of swap using a seqblock rule. Note the use of the temporary variable \( t \), because the update to \( x \) is visible to the second update rule (compare to listing 2). Again assuming an initial state of \(\{x = 3, y = 2, z = 1\}\), the update set of the first update rule is \(\{(x, 2)\}\). This update set is applied to the initial state, resulting in the temporary state \(\{x = 2, y = 2, z = 1\}\). The second update rule results in the update set \(\{(y, 3)\}\). Applying it to the temporary state gives \(\{x = 2, y = 3, z = 1\}\); this state is discarded, however. Merging the update sets gives the update set of the seqblock rule itself: \(\{(x, 2), (y, 3)\}\). This final update set will be applied to the state when the swap rule concludes.

```plaintext
function x : -> Int
function y : -> Int
function z : -> Int

rule swap =
    let t = x in
    seqblock
        x := y
        y := t
    endseqblock
```

Listing 8: Sequential swap

3.5 Forall and Iterate

The forall rule is used in combination with an iterable expression (e.g. a range of integers, an enumeration). For each element of the iterable expression the rule given in the body is invoked. The update sets produced by the body are understood to be executed in parallel. An example is given in listing 9, where forall is used to create a list of 4 elements.

```plaintext
function x : Int -> Int

rule create =
    forall i in [0..3] do
        x(i) := i
```

Listing 9: Forall

The iterate rule, on the other hand, repeatedly invokes its rule body until the produced update set is empty. Each invocation is understood to be executed in a sequential manner, so each rule is invoked using the temporary state produced by the previous iteration.
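The seqblock semantics just described, a temporary state that is consulted and then discarded while later updates overwrite earlier ones, can be sketched in Python. The function name `run_seqblock` is an illustrative assumption:

```python
# Illustrative sketch: sequential composition of update sets. Each rule sees
# a temporary state = global state + updates so far; the temporary state is
# discarded, and only the merged update set survives.
def run_seqblock(state, rules):
    updates = {}
    for rule in rules:
        temp = {**state, **updates}    # temporary state, never stored back
        for location, value in rule(temp):
            updates[location] = value  # later updates overwrite earlier ones
    return updates

# rule swap = let t = x in seqblock x := y  y := t endseqblock
state = {"x": 3, "y": 2, "z": 1}
t = state["x"]                         # let t = x
updates = run_seqblock(state, [
    lambda s: [("x", s["y"])],         # x := y   -> {(x, 2)}
    lambda s: [("y", t)],              # y := t   -> {(y, 3)}
])
assert updates == {"x": 2, "y": 3}
state.update(updates)                  # applied when swap concludes
assert state == {"x": 2, "y": 3, "z": 1}
```

Note how the temporary variable `t` is needed precisely because the second rule already sees the pending update to `x`, matching the worked example above.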
An example is given in listing 10, where the rule is used to atomically perform a fold operation (using addition) on a list. The update set returned by the iterate rule is \(\{(i,10),(f,45)\}\).

```plaintext
function a : Int -> Int initially { 0->0, 1->1, /* skipped */ , 9->9 }
function i : -> Int initially { 0 }
function f : -> Int initially { 0 }

rule fold =
    iterate
        if i < 10 then {
            i := i + 1
            f := f + a(i)
        }
```

Listing 10: Iterate

3.6 Stacks and Lists

There are rules and built-in functions which can be used with List types. The rules pop and push are used to implement stacks. The built-in functions cons, peek and tail construct and consume lists. There is also an nth function to access the nth element of a list (or tuple). These functions are (parametric) polymorphic in their nature and need to be handled correctly by the type system. Listing 11 gives an example.

```plaintext
function list : -> List(Int)

rule foo =
    seqblock
        push 3 into list
        if peek(list) != 3 then
            assert false
        let x = nth(list, 1) in
            list := cons(x, list)
    endseqblock
```

Listing 11: List built-ins and rules

3.7 Differences to other ASM based languages

CASM differs in two major aspects from other ASM based programming languages (i.e. CoreASM). The Lipari guide defines rule arguments to be passed by name. Passing by name has some interesting features, but is difficult to implement efficiently [BG93]. In CASM the semantics of the call rule specify arguments to be passed by value. Note that pass-by-value is semantically equivalent to pass-by-name when all arguments are constants. Therefore an unrestricted call can be simulated with the restricted call rule of CASM by evaluating all argument expressions (e.g. using a let rule) before the call. We merely enforce this policy on the CASM programmer by performing this evaluation before invoking the called rule. The other difference is the scope of variables introduced by the let rule.
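The list built-ins can be approximated over Python lists; the undef propagation of `nth` mirrors the behaviour described in section 4. The function names follow the CASM built-ins, but the Python signatures and the 1-based indexing convention here are illustrative assumptions:

```python
# Illustrative sketch of the CASM list built-ins over Python lists.
UNDEF = object()  # stand-in for undef

def cons(x, lst):      # construct: prepend an element
    return [x] + lst

def peek(lst):         # first element; undef on an empty list
    return lst[0] if lst else UNDEF

def tail(lst):         # everything but the first element
    return lst[1:]

def nth(lst, i):       # 1-based access; undef arguments propagate
    if lst is UNDEF or i is UNDEF or not (1 <= i <= len(lst)):
        return UNDEF
    return lst[i - 1]

stack = []
stack = cons(3, stack)          # "push 3 into list"
assert peek(stack) == 3
assert nth(stack, 1) == 3
assert tail(stack) == []
assert nth(UNDEF, 1) is UNDEF   # nth returns undef for undef arguments
```

These functions are parametric in the element type, which is the (parametric) polymorphism the type system has to handle.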
CoreASM installs variable names into an environment passed to rules invoked by a call rule. So listing 12 is valid in CoreASM, but not in CASM. Environments are not passed to rules invoked by a call rule in CASM. The main reason for this is that type inference and type checking would be unable to handle it.

```plaintext
function y : -> Int

rule callee =
    y := x

rule caller =
    let x = 3 in
        call callee()
```

Listing 12: Only valid in CoreASM

4 CASM Type System

CASM is a static⁵, strongly typed language, and no implicit type conversion is performed. To reduce the often redundant notation of types, the programmer is allowed to omit a type if it can be deduced automatically. CASM only demands types for the arguments of functions, rules and deriveds, as well as for a function's type. Optionally the programmer can provide type information for the type of a derived and for a named expression bound via a let rule. This may improve the readability of the source code and may be needed to guide the type inference system in some corner cases. These corner cases often result from the special value undef, which is compatible with every type.

```plaintext
function assign : -> List(Int)

rule foo =
    let uList : List(Int) = undef in
    let uElem = nth(1, uList) in
        assign := [ uElem ]
```

Listing 13: Type Annotation needed

⁵except indirect calls

Listing 13 shows an example the CASM type system implementation cannot handle without annotation. The first let binds the name uList to the value undef. The second let binds the name uElem to the nth (1st in this case) element of a list or tuple. nth (a built-in function) returns undef if any of its arguments is undef. uElem's type could not be computed (locally) if no additional type information were provided by the programmer. Although limitations in the implementation of the type system exist, they very rarely occur in real world programs.
Specifying the argument types of rules and deriveds may seem superfluous, but it increases readability and serves as documentation. For deriveds especially it prevents parametric polymorphism, which may be a useful feature after all; we are considering changing this in a future version of CASM.

5 Interpreter and Compiler

We have developed an interpreter and a compiler for CASM. The interpreter is capable of concrete and symbolic execution of CASM programs. The compiler generates C++ code. The CASM interpreter is a simple abstract syntax tree (AST) interpreter. For creating the parser, traditional compiler tools like lex and yacc have been used. Type inference and type checking are performed on the AST. The program is rejected if any type cannot be calculated or any type mismatch is detected. During symbolic execution some (or all) values of the state can hold symbolic instead of concrete values. The evaluation of operators then depends on whether all operands are concrete values or at least one is symbolic. As long as all operands have concrete values, the operator is evaluated as usual. But if there is at least one symbolic operand, the operator itself returns a new symbol. This returned symbol is linked to the fact that it has been calculated by applying the operator to the specific operands. E.g., an addition of the symbol s3 and the concrete value 23 will result in a new symbol s4. s4 will be linked to the fact s4 := s3 + 23. Should s4 ever be used as an operand, a new symbol will be created as the result, linked to the fact that s4 was used as an operand. Any symbol appearing as the result of a program can in that way be traced back to an initial symbol provided as input to the program. Things get interesting when control flow branches on a symbolic value [Kin76]. For the sake of brevity we only consider if-then-else rules here. The conditional expression must be of Boolean type in CASM, so it can only have two possible values (true, false).
When the boolean value is symbolic, both possible values need to be considered. The program forks, assuming the expression to be true in one case and false in the other. These assumptions become part of the facts known about the symbols. The sum of all facts learnt about symbols along a path of execution is called the path condition. Path conditions can be used to automatically generate test cases [VPK04]. The CASM interpreter does not directly support this, but it can easily be implemented using the generated trace files.

5.1 Compilation scheme and efficient runtime

The typed AST is also used to compile the CASM program to C++. Our current CASM compiler implementation performs only a limited set of optimizations, to keep the generated C++ files small. The runtime system makes heavy use of the C++ template mechanism and inlining; the generated machine code is therefore quite compact, resulting in satisfying performance for common CASM code patterns.

5.1.1 Efficient Memory Allocation

During the evaluation of a rule, a number of updates and update sets have to be generated dynamically. The lifetime of these objects is limited, however: all update sets, and the updates they contain, can safely be deleted once the top-level rule has concluded and all updates have been committed to the state. We therefore allocate these objects in a dump memory area. When the top-level rule concludes, the dump memory is reset and all objects it contained become invalid. New updates are allocated overwriting the old ones. This technique reduces the overhead for dynamic memory allocation to almost zero.

5.1.2 Optimization for very small update sets

For the workloads we operate on, we observed that most update sets are very small (\(\leq 4\) updates in most cases). Our current implementation of update sets therefore uses a small number (4) of pre-allocated slots. Only after all pre-allocated slots of an update set are occupied is a hash-map used to store any additional updates belonging to the update set.
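The operator behaviour during symbolic evaluation, where concrete operands evaluate normally and any symbolic operand yields a fresh symbol linked to a recorded fact, can be sketched in Python. The class `Sym`, the `facts` list and the `add` helper are invented for illustration and do not reflect the CASM interpreter's internals:

```python
# Illustrative sketch of symbolic evaluation: applying an operator to at
# least one symbolic operand creates a fresh symbol plus a recorded fact.
class Sym:
    counter = 0
    def __init__(self):
        Sym.counter += 1
        self.name = f"s{Sym.counter}"

facts = []  # recorded facts, e.g. "s4 := s3 + 23"

def add(a, b):
    if isinstance(a, Sym) or isinstance(b, Sym):
        result = Sym()
        lhs = a.name if isinstance(a, Sym) else a
        rhs = b.name if isinstance(b, Sym) else b
        facts.append(f"{result.name} := {lhs} + {rhs}")
        return result
    return a + b  # all operands concrete: evaluate as usual

assert add(2, 3) == 5          # fully concrete evaluation
s3 = Sym()                     # an input symbol
s4 = add(s3, 23)               # symbolic operand -> new symbol + fact
assert isinstance(s4, Sym)
assert facts[-1] == f"{s4.name} := {s3.name} + 23"
```

Chasing the recorded facts backwards from any result symbol reaches the input symbols, which is exactly the traceability property described above; accumulating the facts learnt along branch decisions gives the path condition.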
A hash-map needs to be used because update sets are frequently tested for membership (to detect conflicting updates).

5.1.3 Pseudo State

Handling of the update sets is crucial for the performance of the compiled code. CASM uses hash-maps to implement update sets. The underlying assumption justifying this design decision is that update sets are small and the global state is large. A concept called pseudo state is used to realize the temporary states needed to implement the sequential execution semantics (see section 3.4). When reading a state function from within a seqblock, the update set is queried first. If there has been an update to that function (and location) by a preceding rule, the value from the update set is returned. Otherwise the value is read from the global state. All rules return the update set produced according to their semantics (described in section 3). Using this mechanism the update sets only need to be applied to the state when the top-level rule concludes. Merging and querying hash-maps is efficient as long as they are reasonably small.

6 Evaluation of the CASM language

6.1 Hardware modeling

We successfully used CASM in two projects to model CPU architectures and briefly report the results here. Details can be found in the cited papers. One project focused on fast design space exploration, synthesizing cycle-accurate simulators from a CASM model of a microprocessor. The microprocessor model is proven to be coherent with the CASM specification of its instruction set architecture. We were able to model the instruction set and two variants of pipelined MIPS processors in just a few hundred lines of code. This is similar in size to a MIPS model formulated in a specialized hardware description language, and demonstrates the expressiveness of the CASM language. The synthesized simulator is capable of executing benchmark programs of the SPECInt suite with a very satisfying peak performance of 1 MHz.
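The pseudo-state lookup (query the pending update set first, fall back to the global state) is easy to sketch; the function name `read` and the dict encoding are illustrative assumptions:

```python
# Illustrative sketch of the pseudo state: reads inside a seqblock consult
# the pending update set before falling back to the global state, so the
# global state is only written once, when the top-level rule concludes.
def read(name, global_state, update_set):
    if name in update_set:        # updated earlier in this seqblock?
        return update_set[name]
    return global_state[name]     # otherwise read the committed state

global_state = {"x": 3, "y": 2}
update_set = {"x": 2}             # produced by a preceding rule: x := y
assert read("x", global_state, update_set) == 2  # sees the pending update
assert read("y", global_state, update_set) == 2  # falls through to state
assert global_state["x"] == 3                    # global state untouched
```

The design pays off when, as assumed, the update set (a small hash-map) is much smaller than the global state: no copy of the state is ever made to realize the temporary state.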
This roughly translates into 15 million (basic) CASM rules executed per second. More details are reported in [LK13]. In a compiler verification project, CASM is used to model a complex (non-interlocking) DSP processor. Combining parallel and sequential execution modes emerged as the crucial feature for describing clocked circuits. All hardware blocks are described in a parallel execution block, whereas their internal operations are described by sequential execution blocks. We primarily use symbolic execution to perform simulation proofs in a translation validation approach. Some details are reported in [LK12].

6.2 Functional and Imperative programming style

In the introduction it was claimed that ASM based languages in a way combine imperative and functional programming styles. This property of the CASM language proved to be very useful in our experience. In this section we want to point out this property using an illustrative example. Börger and Bolognesi give a recursive ASM version of quicksort in [BB03]. Their implementation is very short and concise, as in most functional languages, but it is not in-place. Imperative implementations swap elements directly (in-place) in the array, avoiding copies of the whole input data. Functional languages need to construct a new list containing the resulting array (they cannot destroy the input data), which induces \(O(n)\) additional space requirements. ASM based languages calculating quicksort in one computation step return an update set describing the new state of the array; this update set also uses \(O(n)\) additional space. We present an in-place, non-recursive version of quicksort. Listing 14 shows the state needed to perform the algorithm. It uses a stack to keep track of the parts of the array still to be sorted. The quicksort rule pops a new part to be sorted off the stack and stores the left and right indices into l and r.
Line 48 tests whether there is still further work to do and, if so, lines 49-45 contain the necessary initializations. The CASM idiom in line 57 terminates the whole computation. As long as a part of the array still needs to be sorted, the rule `quicksort_one_step` will be executed. The rule `partition` will be called until a partition has been found. If the remaining parts are not trivially small, they are pushed onto the `stack` and a new part needs to be popped from the stack (`need_pop`). The `partition` rule initially determines a pivot element (here, the last element of the part to be sorted) and either swaps two elements of the array (rule `partition_one_step`) or swaps the pivot element to its final position $p$. Because the `partition_one_step` rule concludes after swapping two elements of the array, the resulting update set is bounded in size and not dependent on the input data. Otherwise the update sets would need up to $O(n)$ memory (i.e., on input $y_0, y_1, \ldots, y_n, x_0, x_1, \ldots, x_m, p$ with $y_i < p \land x_j > p$ for $0 \leq i \leq n$, $0 \leq j \leq m$). To keep the observable computations small and the program simpler, `partition_one_step` utilizes the `iterate` rule in lines 4 and 7 when searching for elements to be swapped. Otherwise it would need to keep track of the search similar to `need_partition`.

An ASM-based language shares many features with a functional programming language. Large parts of the program are executed side-effect free, and while executing them the global state is read-only. One could think of the state as an explicit argument to each rule; the returned update set would then be the result of a function application. The state only changes between invocations of the top-level rule, so the assumed implicit state argument changes for the next invocation. In that sense each single application of an ASM top-level rule is functional.
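The update-set mechanism described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not CASM's actual C++ runtime: the global state and the update set are both plain dicts (standing in for hash-maps), and all names (`lookup`, `step`, `swap_rule`) are hypothetical. Reads go through the update set first; the rule computes its update set side-effect free; the update set is applied to the state only after the top-level rule concludes.

```python
# Minimal sketch of ASM step semantics with hash-map update sets.
# The global state is a large dict; the update set is a small dict.

def lookup(state, updates, location):
    """Read a function location: prefer a pending update, else the state."""
    if location in updates:          # hash-map membership test
        return updates[location]
    return state.get(location)

def step(state, rule):
    """One ASM step: the rule computes an update set while the state is
    read-only; the update set is applied only after the rule concludes."""
    updates = rule(state)            # functional: state in, update set out
    state.update(updates)            # apply pending updates to the state

# Hypothetical rule that swaps two array cells, echoing the in-place
# quicksort example: the update set stays bounded (two locations).
def swap_rule(state):
    i, j = state["l"], state["r"]
    return {("a", i): lookup(state, {}, ("a", j)),
            ("a", j): lookup(state, {}, ("a", i))}

state = {("a", 0): 9, ("a", 1): 3, "l": 0, "r": 1}
step(state, swap_rule)
# state now maps ("a", 0) -> 3 and ("a", 1) -> 9
```

The sketch makes the functional reading concrete: `swap_rule` never mutates `state`, so invoking a rule and applying its changes remain cleanly separated, exactly the property exploited for pseudo states inside a seqblock.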
By combining parallel and sequential execution modes a programmer can choose the granularity of the computation steps seen by an observer of the program. This can, as demonstrated, be used to implement an in-place version of quicksort while basically programming in a functional style. The beauty of ASM-based languages is the clear separation of invoking rules and applying changes to the state. They closely resemble what Backus called an applicative state transition system (AST system), though on a much smaller scale. What he calls a formal system for functional programming (FFP system) corresponds to a rule, and his SYSTEM state is just the state of the program.

7 Conclusion and Further Work

In this paper we presented CASM, a general-purpose programming language based on the abstract state machine (ASM) method. We presented core features of the language and described how we are able to efficiently compile CASM to C++. The CASM compiler was successfully used to generate an instruction set simulator for the MIPS architecture capable of executing SPECint benchmarks with a peak performance of 1 MHz. We also developed an interpreter capable of symbolic execution, which is successfully used in an ongoing compiler verification project. The CASM language in a way combines functional and imperative programming aspects. We found this feature very useful and convenient and tried to showcase it using a small example. While using the language, common programming patterns arise, which lead to new rules being implemented and new built-in functions being added. We are also working on a faster compilation and runtime implementation to further increase the performance of generated simulators.

References
Software Metrics: A Rigorous and Practical Approach, Third Edition

Though an individual can process a limitless amount of information, the human brain can only comprehend a small amount of data at a time. Using technology can improve the processing and comprehension of information, but the technology must learn to behave more like a human brain, employing concepts like memory, learning, visualization ability, and decision making. Emerging Trends and Applications in Cognitive Computing is a fundamental scholarly source that provides empirical studies and theoretical analysis to show how learning methods can solve important application problems throughout various industries and to explain how machine learning research is conducted. Including innovative research on topics such as deep neural networks, cyber-physical systems, and pattern recognition, this collection of research will benefit IT professionals, academicians, students, researchers, and managers.

The idea that "measuring quality is the key to developing high-quality software systems" is gaining relevance. Moreover, it is widely recognised that the key to obtaining better software systems is to measure the quality characteristics of early artefacts produced at the conceptual modelling phase. Therefore, improving the quality of conceptual models is a major step towards the improvement of software system development. Since the 1970s, software engineers had proposed large numbers of metrics for software products, processes and resources, but had not paid any special attention to conceptual modelling. By the mid-1990s, however, the need for metrics for conceptual modelling had emerged. This book provides an overview of the most relevant existing proposals of metrics for conceptual models, covering conceptual models for both products and processes.

Contents:
- Towards a Framework for Conceptual Modelling Quality (M Piattini et al.)
- A Proposal of a Measure of Completeness for Conceptual Models (O Dieste et al.)
- Metrics for Use Cases: A Survey of Current Proposals (B Bernádez et al.)
- Defining and Validating Metrics for UML Class Diagrams (M Genero et al.)
- Measuring OCL Expressions: An Approach Based on Cognitive Techniques (L Reynoso et al.)
- Metrics for Databases Conceptual Models (M Serrano et al.)
- Metrics for UML Statechart Diagrams (J A Cruz-Lemus et al.)
- Metrics for Software Process Models (F García et al.)

Readership: Senior undergraduates and graduate students in software engineering; PhD students, researchers, analysts, designers, software engineers and those responsible for quality and auditing.

Key Features:
- Presents the most relevant existing proposals of metrics for conceptual models, covering conceptual models for both products and processes
- Provides the most current bibliography on this subject
- The only book to focus on the quality aspects of conceptual models

Keywords: Conceptual Model, Quality, Metrics, UML, OCL, Empirical Research

This book constitutes the refereed proceedings of the Third International Conference on Product Focused Software Process Improvement, PROFES 2001, held in Kaiserslautern, Germany, in September 2001. The 27 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on quality of software, software process assessment and improvement, organizational learning and experience factory, industrial experiences and case studies, software and process modeling, and empirical software engineering.

C. Amtling, Directorate General Information Society, European Commission, Brussels

Under the 4th Framework of European Research, the European Systems and Software Initiative (ESSI) was part of the ESPRIT Programme. This initiative funded more than 470 projects in the area of software and system process improvements.
The majority of these projects were process improvement experiments carrying out and taking up new development processes, methods and technology within the software development process of a company. In addition, nodes (centres of expertise), European networks (organisations managing local activities), and training and dissemination actions complemented the process improvement experiments. ESSI aimed at improving the software development capabilities of European enterprises. It focused on best practice and helped European companies to develop world-class skills and associated technologies to build the increasingly complex and varied systems needed to compete in the marketplace. The dissemination activities were designed to build a forum, at European level, to exchange information and knowledge gained within process improvement experiments. Their major objective was to spread the message and the results of experiments to a wider audience, through a variety of different channels. The European Experience Exchange (E–X) project has been one of these dissemination activities within the European Systems and Software Initiative. E–X has collected the results of practitioner reports from numerous workshops in Europe and presents, in this series of books, the results of best practice achievements in European companies over the last few years.

Covers important concepts, issues, trends, methodologies, and technologies in quality assurance for model-driven software development. A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes.

Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics.
Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems.

New to the Third Edition: This edition contains new material relevant to object-oriented design, design patterns, model-driven development, and agile development processes. It includes a new chapter on causal models and Bayesian networks and their application to software engineering. This edition also incorporates recent references to the latest software metrics activities, including research results, industrial case studies, and standards.

Suitable for a Range of Readers: With numerous examples and exercises, this book continues to serve a wide audience. It can be used as a textbook for a software metrics and quality assurance course or as a useful supplement in any software engineering course. Practitioners will appreciate the important results that have previously only appeared in research-oriented publications. Researchers will welcome the material on new results as well as the extensive bibliography of measurement-related information. The book also gives software managers and developers practical guidelines for selecting metrics and planning their use in a measurement program.

Due to the role of software systems in safety-critical applications and in the satisfaction of customers and organizations, the development of efficient software engineering is essential. Designing, Engineering, and Analyzing Reliable and Efficient Software discusses and analyzes various designs, systems, and advancements in software engineering. With its coverage of the integration of mathematics, computer science, and practices in engineering, this book highlights the importance of ensuring and maintaining reliable software and is an essential resource for practitioners, professors and students in these fields of study.
Although there are countless books on statistics, few are dedicated to the application of statistical methods to software engineering. Simple Statistical Methods for Software Engineering: Data and Patterns fills that void. Instead of delving into overly complex statistics, the book details simpler solutions that are just as effective and connect with the intuition of problem solvers. Sharing valuable insights into software engineering problems and solutions, the book not only explains the required statistical methods, but also provides many examples, review questions, and case studies that provide the understanding required to apply those methods to real-world problems. After reading this book, practitioners will possess the confidence and understanding to solve day-to-day problems in quality, measurement, performance, and benchmarking. By following the examples and case studies, students will be better prepared to achieve a seamless transition from academic study to industry practice.

The book:
- Includes boxed stories, case studies, and illustrations that demonstrate the nuances behind proper application.
- Supplies historical anecdotes and traces statistical methods to their inventors and gurus.
- Applies basic statistical laws in their simplest forms to resolve engineering problems.
- Provides simple techniques for addressing the issues software engineers face.

The book starts off by reviewing the essential facts about data. Next, it supplies a detailed review and summary of metrics, including development, maintenance, test, and agile metrics. The third section covers the fundamental laws of probability and statistics, and the final section presents special data patterns in the form of tailored mathematical distributions. In addition to selecting simpler and more flexible tools, the authors have also simplified several standard techniques to provide you with the set of intellectual tools all software engineers and managers require.
This book provides an up-to-date and rigorous framework for controlling, managing, and predicting software development processes. Emphasizing real-world applications, the authors apply basic ideas in measurement theory to quantify software development resources, processes, and products. The text offers an accessible and comprehensive introduction to software metrics. It features extensive case studies in addition to worked examples and exercises. This new edition covers current research and practical applications of cost estimation methods in practice.

Presents a novel metrics-based approach for detecting design problems in object-oriented software. Introduces an important suite of detection strategies for the identification of different well-known design flaws as well as some rarely mentioned ones.

Innovative Techniques in Instruction Technology, E-Learning, E-Assessment and Education is a collection of world-class paper articles addressing the following topics: (1) E-Learning, including development of courses and systems for technical and liberal studies programs; online laboratories; intelligent testing using fuzzy logic; evaluation of online courses in comparison to traditional courses; mediation in virtual environments; and methods for speaker verification. (2) Instruction Technology, including internet textbooks; pedagogy-oriented markup languages; graphic design possibilities; open source classroom management software; automatic email response systems; tablet PCs; personalization using web mining technology; intelligent digital chalkboards; virtual room concepts for cooperative scientific work; and network technologies, management, and architecture. (3) Science and Engineering Research Assessment Methods, including assessment of K-12 and university level programs; adaptive assessments; auto assessments; and assessment of virtual environments and e-learning.
(4) Engineering and Technical Education, including capstone and case study course design; virtual laboratories; bioinformatics; robotics; metallurgy; building information modeling; statistical mechanics; thermodynamics; information technology; occupational stress and stress prevention; web-enhanced courses; and promoting engineering careers. (5) Pedagogy, including benchmarking; group learning; active learning; teaching of multiple subjects together; ontology; and knowledge representation. (6) Issues in K-12 Education, including 3D virtual learning environments for children; e-learning tools for children; game playing and systems thinking; and tools to learn how to write foreign languages.

The handbook is divided into 13 sections, each containing chapters related to a specific discipline. Up-to-date, non-abstract information provides the reader with practical, useful knowledge directly applicable to the understanding and improvement of the reader's job or the area of interest related to this technology. Handbook of Object Technology discusses:
- the processes, notation, and tools for classical OO methodologies, as well as information on future methodologies
- prevalent and emerging OO languages
- standards and specifications
- frameworks and patterns
- databases
- metrics
- business objects
- intranets
- analysis/design tools
- client/server application development environments

Enterprise resource planning (ERP) is a class of integrated software that uses software technologies to implement real-time management of business processes in an organization. ERPs normally cut across organizations, making them large and complex. Software researchers have for many years established that complexity affects software quality negatively and must therefore be controlled with novel metrics and models of evaluation that can determine when the software is at acceptable levels of quality and when it is not.
Metrics and Models for Evaluating the Quality and Effectiveness of ERP Software is a critical scholarly publication that examines ERP development, performance, and challenges in business settings to help organizations that have embraced ERPs improve their decision making, improve the efficiency and effectiveness of their activities, and improve their return on investment (ROI). Highlighting a wide range of topics such as data mining, higher education, and security, this book is essential for professionals, software developers, researchers, academicians, and security professionals.

This three-volume set constitutes the refereed proceedings of the Second International Conference on Software Engineering and Computer Systems, ICSECS 2011, held in Kuantan, Malaysia, in June 2011. The 190 revised full papers presented together with invited papers in the three volumes were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on software engineering; networks; bioinformatics and e-health; biometrics technologies; Web engineering; neural networks; parallel and distributed systems; e-learning; ontology; image processing; information and data management; engineering; software security; graphics and multimedia; databases; algorithms; signal processing; software design/testing; e-technology; ad hoc networks; social networks; software process modeling; and miscellaneous topics in software engineering and computer systems.

On behalf of the PROFES organizing committee we would like to welcome you to the 4th International Conference on Product Focused Software Process Improvement (PROFES 2002) in Rovaniemi, Finland. The conference was held on the Arctic Circle in exotic Lapland, under the Northern Lights just before Christmas time, when Kaamos (as the polar night is known in Finnish) shows its best characteristics. PROFES has established itself as one of the recognized international process improvement conferences.
Despite the current economic downturn, PROFES has attracted a record number of submissions. A total of 70 full papers were submitted, and the program committee had a difficult task in selecting the best papers to be presented at the conference. The main theme of PROFES is professional software process improvement (SPI) motivated by product and service quality needs. SPI is facilitated by software process assessment, software measurement, process modeling, and technology transfer. It has become a practical tool for quality software engineering and management. The conference addresses both the solutions found in practice and the relevant research results from academia.

Most of the software measures currently proposed to the industry bring few real benefits to either software managers or developers. This book looks at classical metrology concepts from science and engineering, using them as criteria to propose an approach to analyze the design of current software measures and then design new software measures (illustrated with the design of a software measure that has been adopted as an ISO measurement standard). The book includes several case studies analyzing the strengths and weaknesses of some of the most often quoted software measures. It is meant for software quality specialists and process improvement analysts and managers.

The modern field of software metrics emerged from the computer modeling and "statistical thinking" services of the 1980s. As the field evolved, metrics programs were integrated with project management, and metrics grew to be a major tool in the managerial decision-making process of software companies. Now practitioners in the software industry have

Software projects today are often characterized by poor quality, schedule overruns and high costs. One of the approaches to address the poor success rate is to track the project progress with a stakeholder-driven measurement model that is objective and validated, both theoretically and empirically.
In this backdrop, based on the Goal-Question-Metric (GQM) model, this book proposes a generic and objective measurement model for a software project, with eight key measures based on the value propositions of the stakeholders. The measurement model is validated (i) theoretically, with measurement theory criteria, and (ii) empirically, with case studies and a global survey representing IT industry practitioners.

Following an introductory chapter that provides an exploration of key issues in requirements engineering, this book is organized in three parts. It presents surveys of requirements engineering process research along with critical assessments of existing models, frameworks and techniques. It also addresses key areas in requirements engineering.

This book seeks to promote the structured, standardized and accurate use of software measurement at all levels of modern software development companies. To do so, it focuses on seven main aspects: sound scientific foundations, cost-efficiency, standardization, value-maximization, flexibility, combining organizational and technical aspects, and seamless technology integration. Further, it supports companies in their journey from manual reporting to automated decision support by combining academic research and industrial practice.

When scientists and engineers measure something, they tend to focus on two different things. Scientists focus on the ability of the measurement to quantify whatever is being measured; engineers, however, focus on finding the right qualities of measurement given the designed system (e.g. correctness), the system's quality of use (e.g. ease of use), and the efficiency of the measurement process. In this book, the authors argue that both focuses are necessary, and that the two are complementary. Thus, the book is organized as a gradual progression from theories of measurement (yes, you need theories to be successful!)
to practical, organizational aspects of maintaining measurement systems (yes, you need the practical side to understand how to be successful). The authors of this book come from academia and industry, where they have worked together for the past twelve years. They have worked with both small and large software development organizations, as researchers and as measurement engineers, measurement program leaders and even teachers. They wrote this book to help readers define, implement, deploy and maintain company-wide measurement programs, which consist of a set of measures, indicators and roles that are built around the concept of measurement systems. Based on their experiences introducing over 40,000 measurement systems at over a dozen companies, they share essential tips and tricks on how to do it right and how to avoid common pitfalls.

This unique volume is the first publication on software engineering and computational intelligence (CI) viewed as a synergistic interplay of neurocomputing, granular computation (including fuzzy sets and rough sets), and evolutionary methods. It presents a unified view of CI in the context of software engineering. The book addresses a number of crucial issues: what is CI, what role does it play in software development, how are CI elements built into successive phases of the software life cycle, and what role does CI play in quantifying fundamental features of software artifacts? With contributions from leading researchers and practitioners, the book provides the reader with a wealth of new concepts and approaches, complete algorithms, in-depth case studies, and thought-provoking exercises. The topics covered include neurocomputing, granular and evolutionary computing, and object-oriented analysis and design in software engineering. There is also an extensive bibliography.
This book presents a coherent and well-balanced survey of recent advances in software engineering approaches to the development of realistic multi-agent systems (MAS). In it, the concept of agent-based software engineering is demonstrated through examples that are relevant to and representative of real-world applications. The 15 thoroughly reviewed and revised full papers are organized in topical sections on requirements engineering, software architecture and design, modeling, dependability, and MAS frameworks. Most of the papers were initially presented at the Second International Workshop on Software Engineering for Large-Scale Multi-Agent Systems, SELMAS 2003, held in Portland, Oregon, USA, in May 2003; three papers were added in order to complete the coverage of the relevant topics. For over 20 years, Software Engineering: A Practitioner's Approach has been the best-selling guide to software engineering for students and industry professionals alike. The sixth edition continues to lead the way in software engineering. A new Part 4 on Web Engineering presents a complete engineering approach for the analysis, design, and testing of Web Applications, increasingly important for today's students. Additionally, the UML coverage has been enhanced and significantly increased in this new edition. The pedagogy has also been improved in the new edition to include sidebars. They provide information on relevant software tools, specific workflows for specific kinds of projects, and additional information on various topics. Additionally, Pressman provides a running case study called "Safe Home" throughout the book, which illustrates the application of software engineering to an industry project. New additions to the book also include chapters on the Agile Process Models, Requirements Engineering, and Design Engineering. The book has been completely updated and contains hundreds of new references to software tools that address all important topics in the book. 
The ancillary material for the book includes an expansion of the case study, which illustrates it with UML diagrams. The On-Line Learning Center includes resources for both instructors and students such as checklists, 700 categorized web references, PowerPoints, a test bank, and a software engineering library containing over 500 software engineering papers. The takeaways here are the following: 1. Agile process methods are covered early, in Ch. 4. 2. A new part on Web applications spans 5 chapters. This book constitutes the thoroughly refereed post-proceedings of 11 international workshops held as satellite events of the 9th International Conference on Model Driven Engineering Languages and Systems, MoDELS 2006, in Genoa, Italy, in October 2006 (see LNCS 4199). The 32 revised full papers were carefully selected for inclusion in the book. They are presented along with a doctoral and an educators’ symposium section. The book presents a comprehensive discussion on software quality issues and software quality assurance (SQA) principles and practices, and lays special emphasis on implementing and managing SQA. Primarily designed to serve three audiences: university and college students, vocational training participants, and software engineers and software development managers, the book may be applicable to all personnel engaged in software projects. Features: A broad view of SQA. The book delves into SQA issues, going beyond the classic boundaries of custom-made software development to also cover in-house software development, subcontractors, and ready-made software. An up-to-date, wide-ranging coverage of SQA and SQA-related topics, providing comprehensive coverage of multifarious SQA subjects, including topics hardly explored in SQA texts until now. A systematic presentation of the SQA function and its tasks: establishing the SQA processes, planning, coordinating, follow-up, review and evaluation of SQA processes. Focus on SQA implementation issues. 
Specialized chapter sections, examples, implementation tips, and topics for discussion. Pedagogical support: Each chapter includes a real-life mini case study, examples, a summary, a selected bibliography, review questions and topics for discussion. The book is also supported by an Instructor’s Guide. Software engineering is widely recognized as one of the most exciting, stimulating, and profitable research areas, with a significant practical impact on the software industry. Thus, training future generations of software engineering researchers and bridging the gap between academia and industry are vital to the field. The International Summer School on Software Engineering (ISSSE), which started in 2003, aims to contribute both to training future researchers and to facilitating the exchange of knowledge between academia and industry. This volume consists of chapters originating from a number of tutorial lectures given in 2009, 2010, and 2011 at the International Summer School on Software Engineering (ISSSE), held in Salerno, Italy. The volume has been organized into three parts, focusing on software measurement and empirical software engineering, software analysis, and software management. The topics covered include software architectures, software product lines, model driven software engineering, mechatronic systems, aspect oriented software development, agile development processes, empirical software engineering, software maintenance, impact analysis, traceability management, software testing, and search-based software engineering. "This book provides integrated chapters on software engineering and enterprise systems focusing on parts integrating requirements engineering, software engineering, process and frameworks, productivity technologies, and enterprise systems"--Provided by publisher.
Chapter 5 Input/Output I/O Devices Figure 5-1. Some typical device, network, and bus data rates. <table> <thead> <tr> <th>Device</th> <th>Data rate</th> </tr> </thead> <tbody> <tr> <td>Keyboard</td> <td>10 bytes/sec</td> </tr> <tr> <td>Mouse</td> <td>100 bytes/sec</td> </tr> <tr> <td>56K modem</td> <td>7 KB/sec</td> </tr> <tr> <td>Scanner</td> <td>400 KB/sec</td> </tr> <tr> <td>Digital camcorder</td> <td>3.5 MB/sec</td> </tr> <tr> <td>802.11g Wireless</td> <td>6.75 MB/sec</td> </tr> <tr> <td>52x CD-ROM</td> <td>7.8 MB/sec</td> </tr> <tr> <td>Fast Ethernet</td> <td>12.5 MB/sec</td> </tr> <tr> <td>Compact flash card</td> <td>40 MB/sec</td> </tr> <tr> <td>FireWire (IEEE 1394)</td> <td>50 MB/sec</td> </tr> <tr> <td>USB 2.0</td> <td>60 MB/sec</td> </tr> <tr> <td>SONET OC-12 network</td> <td>78 MB/sec</td> </tr> <tr> <td>SCSI Ultra 2 disk</td> <td>80 MB/sec</td> </tr> <tr> <td>Gigabit Ethernet</td> <td>125 MB/sec</td> </tr> <tr> <td>SATA disk drive</td> <td>300 MB/sec</td> </tr> <tr> <td>Ultrium tape</td> <td>320 MB/sec</td> </tr> <tr> <td>PCI bus</td> <td>528 MB/sec</td> </tr> </tbody> </table> Memory-Mapped I/O (1) Figure 5-2. (a) Separate I/O and memory space. (b) Memory-mapped I/O. (c) Hybrid. Memory-Mapped I/O (2) Figure 5-3. (a) A single-bus architecture. (b) A dual-bus memory architecture. Direct Memory Access (DMA) Figure 5-4. Operation of a DMA transfer. 1. CPU programs the DMA controller 2. DMA requests transfer to memory 3. Data transferred 4. Ack CPU → DMA controller: Address, Count, Control Buffer Drive Main memory Interrupt when done Bus Interrupts Revisited Figure 5-5. How an interrupt happens. The connections between the devices and the interrupt controller actually use interrupt lines on the bus rather than dedicated wires. Precise and Imprecise Interrupts (1) Properties of a precise interrupt 1. PC (Program Counter) is saved in a known place. 2. All instructions before the one pointed to by the PC have fully executed. 3. 
No instruction beyond the one pointed to by the PC has been executed. 4. Execution state of the instruction pointed to by the PC is known. Figure 5-6. (a) A precise interrupt. (b) An imprecise interrupt. Figure 5-7. Steps in printing a string. Figure 5-8. Writing a string to the printer using programmed I/O. copy_from_user(buffer, p, count); for (i = 0; i < count; i++) { while (*printer_status_reg != READY); /* loop until ready */ *printer_data_register = p[i]; /* output one character */ } return_to_user(); Programmed I/O (2) Figure 5-9. Writing a string to the printer using interrupt-driven I/O. (a) Code executed at the time the print system call is made. (b) Interrupt service procedure for the printer. I/O Using DMA copy_from_user(buffer, p, count); set_up_DMA_controller(); scheduler(); (a) acknowledge_interrupt(); unblock_user(); return_from_interrupt(); (b) Figure 5-10. Printing a string using DMA. (a) Code executed when the print system call is made. (b) Interrupt service procedure. I/O Software Layers Figure 5-11. Layers of the I/O software system. Interrupt Handlers (1) 1. Save registers that have not already been saved by the interrupt hardware. 2. Set up a context for the interrupt service procedure. 3. Set up a stack for the interrupt service procedure. 4. Acknowledge the interrupt controller. If there is no centralized interrupt controller, reenable interrupts. 5. Copy the registers from where they were saved to the process table. Interrupt Handlers (2) 6. Run the interrupt service procedure. 7. Choose which process to run next. 8. Set up the MMU context for the process to run next. 9. Load the new process’ registers, including its PSW. 10. Start running the new process. Device Drivers Figure 5-12. Logical positioning of device drivers. In reality all communication between drivers and device controllers goes over the bus. Figure 5-13. Functions of the device-independent I/O software. 
<table> <thead> <tr> <th>Function Description</th> </tr> </thead> <tbody> <tr> <td>Uniform interfacing for device drivers</td> </tr> <tr> <td>Buffering</td> </tr> <tr> <td>Error reporting</td> </tr> <tr> <td>Allocating and releasing dedicated devices</td> </tr> <tr> <td>Providing a device-independent block size</td> </tr> </tbody> </table> Uniform Interfacing for Device Drivers Figure 5-14. (a) Without a standard driver interface. (b) With a standard driver interface. Buffering (1) Figure 5-15. (a) Unbuffered input. (b) Buffering in user space. (c) Buffering in the kernel followed by copying to user space. (d) Double buffering in the kernel. Figure 5-16. Networking may involve many copies of a packet. User-Space I/O Software Figure 5-17. Layers of the I/O system and the main functions of each layer. Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639 ### Magnetic Disks (1) <table> <thead> <tr> <th>Parameter</th> <th>IBM 360-KB floppy disk</th> <th>WD 18300 hard disk</th> </tr> </thead> <tbody> <tr> <td>Number of cylinders</td> <td>40</td> <td>10601</td> </tr> <tr> <td>Tracks per cylinder</td> <td>2</td> <td>12</td> </tr> <tr> <td>Sectors per track</td> <td>9</td> <td>281 (avg)</td> </tr> <tr> <td>Sectors per disk</td> <td>720</td> <td>35742000</td> </tr> <tr> <td>Bytes per sector</td> <td>512</td> <td>512</td> </tr> <tr> <td>Disk capacity</td> <td>360 KB</td> <td>18.3 GB</td> </tr> <tr> <td>Seek time (adjacent cylinders)</td> <td>6 msec</td> <td>0.8 msec</td> </tr> <tr> <td>Seek time (average case)</td> <td>77 msec</td> <td>6.9 msec</td> </tr> <tr> <td>Rotation time</td> <td>200 msec</td> <td>8.33 msec</td> </tr> <tr> <td>Motor stop/start time</td> <td>250 msec</td> <td>20 sec</td> </tr> <tr> <td>Time to transfer 1 sector</td> <td>22 msec</td> <td>17 μsec</td> </tr> </tbody> </table> **Figure 5-18.** Disk parameters for the original IBM PC 360-KB floppy disk and a Western Digital WD 18300 hard disk. 
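As a rough sanity check on the Figure 5-18 numbers, the average time to reach and read one sector is approximately the average seek time plus half a rotation plus the transfer time. A minimal sketch (the function name is ours, not from the book):

```c
/* Average time to access one sector: average seek, plus half a
 * rotation on average, plus the time to transfer the sector. */
double avg_access_ms(double seek_ms, double rotation_ms, double xfer_ms)
{
    return seek_ms + rotation_ms / 2.0 + xfer_ms;
}

/* For the WD 18300 of Figure 5-18: avg_access_ms(6.9, 8.33, 0.017)
 * is about 11.1 ms, dominated by the mechanical seek and rotational
 * delay rather than the 17-microsecond transfer. */
```

This is why the disk arm scheduling algorithms discussed next focus on reducing seek distance: the electronic transfer time is negligible next to the mechanical delays.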
Figure 5-19. (a) Physical geometry of a disk with two zones. (b) A possible virtual geometry for this disk. Figure 5-20. RAID levels 0 through 5. Backup and parity drives are shown shaded. CD-ROMs (1) Figure 5-21. Recording structure of a compact disc or CD-ROM. Figure 5-22. Logical data layout on a CD-ROM. Figure 5-23. Cross section of a CD-R disk and laser. A silver CD-ROM has a similar structure, except without the dye layer and with a pitted aluminum layer instead of the gold layer. DVD Improvements on CDs 1. Smaller pits (0.4 microns versus 0.8 microns for CDs). 2. A tighter spiral (0.74 microns between tracks versus 1.6 microns for CDs). 3. A red laser (at 0.65 microns versus 0.78 microns for CDs). DVD (2) DVD Formats 1. Single-sided, single-layer (4.7 GB). 2. Single-sided, dual-layer (8.5 GB). 3. Double-sided, single-layer (9.4 GB). 4. Double-sided, dual-layer (17 GB). Figure 5-24. A double-sided, dual-layer DVD disk. Disk Formatting (1) Figure 5-25. A disk sector. Disk Formatting (2) Figure 5-26. An illustration of cylinder skew. Figure 5-27. (a) No interleaving. (b) Single interleaving. (c) Double interleaving. Disk Arm Scheduling Algorithms (1) Read/write time factors 1. Seek time (the time to move the arm to the proper cylinder). 2. Rotational delay (the time for the proper sector to rotate under the head). 3. Actual data transfer time. Disk Arm Scheduling Algorithms (2) Figure 5-28. Shortest Seek First (SSF) disk scheduling algorithm. Disk Arm Scheduling Algorithms (3) Figure 5-29. The elevator algorithm for scheduling disk requests. Error Handling Figure 5-30. (a) A disk track with a bad sector. (b) Substituting a spare for the bad sector. (c) Shifting all the sectors to bypass the bad one. 
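The elevator algorithm of Figure 5-29 can be sketched in a few lines of C. This is a simplification, assuming requests are plain cylinder numbers known up front (no requests arrive while the arm moves), and the names are illustrative:

```c
#include <stdlib.h>

/* Elevator (SCAN) scheduling sketch: keep moving in the current
 * direction, always serving the nearest pending request in that
 * direction; reverse only when nothing remains ahead.
 * 'pending' holds cylinder numbers (>= 0) and is consumed (-1 marks
 * a served entry). Returns total cylinders the arm moves. */
int elevator(int *pending, int n, int start, int up)
{
    int pos = start, moved = 0, served = 0;
    while (served < n) {
        int best = -1;
        for (int i = 0; i < n; i++) {          /* nearest request ahead */
            if (pending[i] < 0) continue;      /* already served */
            if (up ? pending[i] >= pos : pending[i] <= pos)
                if (best < 0 ||
                    abs(pending[i] - pos) < abs(pending[best] - pos))
                    best = i;
        }
        if (best < 0) { up = !up; continue; }  /* nothing ahead: reverse */
        moved += abs(pending[best] - pos);
        pos = pending[best];
        pending[best] = -1;
        served++;
    }
    return moved;
}
```

With the head at cylinder 11 moving up and requests {1, 36, 16, 34, 9, 12}, the arm serves 12, 16, 34, 36, then reverses for 9 and 1, for a total of 60 cylinders of motion.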
Stable Storage (1) Operations for stable storage using identical disks: 1. Stable writes 2. Stable reads 3. Crash recovery Figure 5-31. Analysis of the influence of crashes on stable writes. Clock Hardware Figure 5-32. A programmable clock. Clock Software (1) Typical duties of a clock driver 1. Maintaining the time of day. 2. Preventing processes from running longer than they are allowed to. 3. Accounting for CPU usage. 4. Handling the alarm system call made by user processes. 5. Providing watchdog timers for parts of the system itself. Figure 5-33. Three ways to maintain the time of day. Figure 5-34. Simulating multiple timers with a single clock. Soft Timers Soft timers succeed or fail according to the rate at which kernel entries are made, which happen because of: 1. System calls. 2. TLB misses. 3. Page faults. 4. I/O interrupts. 5. The CPU going idle. <table> <thead> <tr> <th>Character</th> <th>POSIX name</th> <th>Comment</th> </tr> </thead> <tbody> <tr> <td>CTRL-H</td> <td>ERASE</td> <td>Backspace one character</td> </tr> <tr> <td>CTRL-U</td> <td>KILL</td> <td>Erase entire line being typed</td> </tr> <tr> <td>CTRL-V</td> <td>LNEXT</td> <td>Interpret next character literally</td> </tr> <tr> <td>CTRL-S</td> <td>STOP</td> <td>Stop output</td> </tr> <tr> <td>CTRL-Q</td> <td>START</td> <td>Start output</td> </tr> <tr> <td>DEL</td> <td>INTR</td> <td>Interrupt process (SIGINT)</td> </tr> <tr> <td>CTRL-\</td> <td>QUIT</td> <td>Force core dump (SIGQUIT)</td> </tr> <tr> <td>CTRL-D</td> <td>EOF</td> <td>End of file</td> </tr> <tr> <td>CTRL-M</td> <td>CR</td> <td>Carriage return (unchangeable)</td> </tr> <tr> <td>CTRL-J</td> <td>NL</td> <td>Linefeed (unchangeable)</td> </tr> </tbody> </table> Figure 5-35. Characters that are handled specially in canonical mode. 
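The single-clock scheme of Figure 5-34 is commonly realized as a delta list: each pending timer stores its expiry relative to its predecessor, so only the head entry needs decrementing on each tick. A sketch (all names are ours, not from any real kernel):

```c
#include <stdlib.h>

/* Each entry's 'delta' is the number of ticks AFTER the previous
 * entry fires, so inserting and firing touch only nearby nodes. */
struct timer { int delta; struct timer *next; };

struct timer *head = NULL;

void timer_add(int ticks)            /* schedule a timer 'ticks' from now */
{
    struct timer **p = &head;
    while (*p && (*p)->delta <= ticks) {
        ticks -= (*p)->delta;        /* make time relative to predecessor */
        p = &(*p)->next;
    }
    struct timer *t = malloc(sizeof *t);
    t->delta = ticks;
    t->next = *p;
    if (*p) (*p)->delta -= ticks;    /* successor is now relative to us */
    *p = t;
}

int timer_tick(void)                 /* one clock tick; # of timers fired */
{
    int fired = 0;
    if (head) head->delta--;
    while (head && head->delta == 0) {  /* pop every timer now expired */
        struct timer *t = head;
        head = t->next;
        free(t);
        fired++;
    }
    return fired;
}
```

For example, after `timer_add(3); timer_add(5); timer_add(1);` the list holds deltas 1, 2, 2, and timers fire on ticks 1, 3, and 5.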
The X Window System (1) <table> <thead> <tr> <th>Escape sequence</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>ESC [n A</td> <td>Move up n lines</td> </tr> <tr> <td>ESC [n B</td> <td>Move down n lines</td> </tr> <tr> <td>ESC [n C</td> <td>Move right n spaces</td> </tr> <tr> <td>ESC [n D</td> <td>Move left n spaces</td> </tr> <tr> <td>ESC [m; n H</td> <td>Move cursor to (m, n)</td> </tr> <tr> <td>ESC [s J</td> <td>Clear screen from cursor (0 to end, 1 from start, 2 all)</td> </tr> <tr> <td>ESC [s K</td> <td>Clear line from cursor (0 to end, 1 from start, 2 all)</td> </tr> <tr> <td>ESC [n L</td> <td>Insert n lines at cursor</td> </tr> <tr> <td>ESC [n M</td> <td>Delete n lines at cursor</td> </tr> <tr> <td>ESC [n P</td> <td>Delete n chars at cursor</td> </tr> <tr> <td>ESC [n @</td> <td>Insert n chars at cursor</td> </tr> <tr> <td>ESC [n m</td> <td>Enable rendition n (0=normal, 4=bold, 5=blinking, 7=reverse)</td> </tr> <tr> <td>ESC M</td> <td>Scroll the screen backward if the cursor is on the top line</td> </tr> </tbody> </table> Figure 5-36. The ANSI escape sequences accepted by the terminal driver on output. ESC denotes the ASCII escape character (0x1B), and n, m, and s are optional numeric parameters. Figure 5-37. Clients and servers in the M.I.T. X Window System. The X Window System (3) Types of messages between client and server: 1. Drawing commands from the program to the workstation. 2. Replies by the workstation to program queries. 3. Keyboard, mouse, and other event announcements. 4. Error messages. 
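The escape sequences of Figure 5-36 are ordinary byte strings, so a program can drive the terminal simply by writing them out. Two illustrative helpers (the function names are ours), formatting into a buffer so the bytes can be inspected:

```c
#include <stdio.h>

/* ESC [ n A : move the cursor up n lines. Returns bytes written. */
int esc_move_up(char *buf, int n)
{
    return sprintf(buf, "\x1b[%dA", n);
}

/* ESC [ 2 J : clear the entire screen. Returns bytes written. */
int esc_clear_screen(char *buf)
{
    return sprintf(buf, "\x1b[2J");
}
```

Writing the resulting bytes to a terminal (e.g. with `fputs`) has the effect the table describes; the terminal driver, not the application, interprets them.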
Graphical User Interfaces (1) ```c #include <X11/Xlib.h> #include <X11/Xutil.h> main(int argc, char *argv[]) { Display *disp; /* server identifier */ Window win; /* window identifier */ GC gc; /* graphic context identifier */ XEvent event; /* storage for one event */ int running = 1; disp = XOpenDisplay("display_name"); /* connect to the X server */ win = XCreateSimpleWindow(disp, ...); /* allocate memory for new window */ XSetStandardProperties(disp, ...); /* announces window to window mgr */ gc = XCreateGC(disp, win, 0, 0); /* create graphic context */ XSelectInput(disp, win, ButtonPressMask | KeyPressMask | ExposureMask); XMapRaised(disp, win); /* display window; send Expose event */ while (running) { XNextEvent(disp, &event); /* get next event */ switch (event.type) { case Expose: ...; break; /* repaint window */ case ButtonPress: ...; break; /* process mouse click */ case KeyPress: ...; break; /* process keyboard input */ } } XFreeGC(disp, gc); /* release graphic context */ XDestroyWindow(disp, win); /* deallocate window’s memory space */ XCloseDisplay(disp); /* tear down network connection */ } ``` Figure 5-38. A skeleton of an X Window application program. Figure 5-39. A sample window located at (200, 100) on an XGA display. Graphical User Interfaces (4) ```c #include <windows.h> int WINAPI WinMain(HINSTANCE h, HINSTANCE hprev, char *szCmd, int iCmdShow) { WNDCLASS wndclass; /* class object for this window */ MSG msg; /* incoming messages are stored here */ HWND hwnd; /* handle (pointer) to the window object */ /* Initialize wndclass */ wndclass.lpfnWndProc = WndProc; /* tells which procedure to call */ wndclass.lpszClassName = "Program name"; /* Text for title bar */ wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION); /* load program icon */ wndclass.hCursor = LoadCursor(NULL, IDC_ARROW); /* load mouse cursor */ RegisterClass(&wndclass); /* tell Windows about wndclass */ hwnd = CreateWindow(...); /* allocate storage for the window */ ShowWindow(hwnd, iCmdShow); /* display the window on the screen */ UpdateWindow(hwnd); /* tell the window to paint itself */ while (GetMessage(&msg, NULL, 0, 0)) { /* get message from queue */ TranslateMessage(&msg); /* translate the message */ DispatchMessage(&msg); /* send msg to the appropriate procedure */ } return(msg.wParam); } long CALLBACK WndProc(HWND hwnd, UINT message, UINT wParam, long lParam) { /* Declarations go here. */ switch (message) { case WM_CREATE: ... ; return ... ; /* create window */ case WM_PAINT: ... ; return ... ; /* repaint contents of window */ case WM_DESTROY: ... ; return ... ; /* destroy window */ } return(DefWindowProc(hwnd, message, wParam, lParam)); /* default */ } ``` Figure 5-40. A skeleton of a Windows main program. Figure 5-41. An example rectangle drawn using Rectangle. Each box represents one pixel. Figure 5-42. Copying bitmaps using *BitBlt*. (a) Before. (b) After. Figure 5-43. Some examples of character outlines at different point sizes. # Thin Clients <table> <thead> <tr> <th>Command</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Raw</td> <td>Display raw pixel data at a given location</td> </tr> <tr> <td>Copy</td> <td>Copy frame buffer area to specified coordinates</td> </tr> <tr> <td>Sfill</td> <td>Fill an area with a given pixel color value</td> </tr> <tr> <td>Pfill</td> <td>Fill an area with a given pixel pattern</td> </tr> <tr> <td>Bitmap</td> <td>Fill a region using a bitmap image</td> </tr> </tbody> </table> **Figure 5-44.** The THINC protocol display commands. Power Management Hardware Issues <table> <thead> <tr> <th>Device</th> <th>Li et al. 
(1994)</th> <th>Lorch and Smith (1998)</th> </tr> </thead> <tbody> <tr> <td>Display</td> <td>68%</td> <td>39%</td> </tr> <tr> <td>CPU</td> <td>12%</td> <td>18%</td> </tr> <tr> <td>Hard disk</td> <td>20%</td> <td>12%</td> </tr> <tr> <td>Modem</td> <td></td> <td>6%</td> </tr> <tr> <td>Sound</td> <td></td> <td>2%</td> </tr> <tr> <td>Memory</td> <td>0.5%</td> <td>1%</td> </tr> <tr> <td>Other</td> <td></td> <td>22%</td> </tr> </tbody> </table> Figure 5-45. Power consumption of various parts of a notebook computer. Figure 5-46. The use of zones for backlighting the display. (a) When window 2 is selected it is not moved. (b) When window 1 is selected, it moves to reduce the number of zones illuminated. Power Management The CPU Figure 5-47. (a) Running at full clock speed. (b) Cutting voltage by two cuts clock speed by two and power consumption by four.
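Figure 5-47's rule of thumb is that clock speed scales roughly linearly with supply voltage while dynamic power scales with its square. Expressed as code, relative to the full-voltage operating point (the function names are ours):

```c
/* Voltage scaling rule of thumb: speed is proportional to the supply
 * voltage, power to its square (fractions of the full-voltage values). */
double rel_speed(double v_fraction) { return v_fraction; }
double rel_power(double v_fraction) { return v_fraction * v_fraction; }

/* rel_speed(0.5) is 0.5 and rel_power(0.5) is 0.25: halving the voltage
 * halves the speed but quarters the power, so the same computation takes
 * twice as long yet consumes only half the energy. */
```

This is why an OS power manager prefers running slowly at reduced voltage over running at full speed and then idling, whenever the deadline allows it.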
REAL-TIME MULTI-USER TRANSCODING FOR PUSH TO TALK OVER CELLULAR Stéphane Coulombe École de technologie supérieure, Department of Software and IT Engineering 1100 Notre Dame Ouest, Montreal, Qc, Canada, H3C 1K3 e-mail: stephane.coulombe@etsmtl.ca ABSTRACT Transcoding is required to enable interoperability between Push to talk over Cellular (PoC) clients with incompatible capabilities (e.g. between a PoC client supporting the AMR speech codec and another supporting EVRC). Although the Open Mobile Alliance (OMA) recognizes the need for transcoding in the PoC application, no solution is provided by the standard to enable it. In this paper, we present a transcoding system for real-time multi-user PoC sessions. The solution is centralized at the Controlling PoC Function which manages session control operations and the flow of media streams to enable transcoding to be performed in a distinct transcoding server (TS). There are several advantages to this solution, such as scalability, applicability to all PoC group session scenarios, compatibility with existing PoC specifications, and transparency for existing PoC clients. Index Terms—Transcoding, multi-user, PoC, IMS. 1. INTRODUCTION The Push to talk over Cellular (PoC) service allows mobile users to create group sessions where participants can engage in voice and data communications on a 1-to-1 or 1-to-many basis [1], as illustrated in Figure 1. The voice communications are similar to those of walkie-talkie services, where terminals have dedicated ‘talk’ buttons. Only one person can speak at any given time, and each Talk Burst (TB) is relatively short (a few seconds). Each TB is copied to all the other participants in the session. Users can also exchange instant messages (IMs). Soon, voice-only TBs will evolve into voice and video TBs, and IMs will contain rich media content (audio, video frames, text, etc.). Because of the diversity of terminals and networks, issues associated with interoperability are arising. 
For instance, 3GPP mandates the AMR narrowband speech codec as the default speech codec for the PoC service [2]. Further, 3GPP mandates support of the AMR wideband speech codec, if the PoC client’s equipment uses a 16 kHz sampling frequency for speech. In contrast, 3GPP2 mandates the EVRC speech codec as the default speech codec [3]. More serious incompatibilities are expected to arise with respect to video streams (with various codecs, such as H.263, MPEG-4, and H.264) and media rich IMs. Figure 1: Example of a 1-to-many group session (voice). In the PoC standard (versions 1.0 and 2.0), the need for transcoding is well recognized, but no detailed solution is provided. It is stated in [1] that transcoding may be performed by both the Controlling PoC Function (CPF) and the Participating PoC Function (PPF), although the means for achieving this is not provided. In PoC version 2.0 [4], the PoC Interworking Function has been introduced, which may, among other things, perform transcoding. But its realization is outside the scope of the OMA, and currently no solution has been proposed by the standards bodies to ensure interoperability in PoC. Furthermore, the author is not aware of any solution proposed to enable interoperability in SIP-based real-time multi-party sessions (as found in PoC), besides the obvious and inefficient Back-to-Back User Agent (B2BUA)-based approaches [5]. This paper proposes a solution to support transcoding within the scope of the real-time multi-user sessions offered by PoC. In the proposed solution, transcoding will be managed by the CPF and performed in a separate logical entity, the Transcoding Server (TS). We will show that actions must be taken at different levels of the session to enable transcoding: session offering, session control, and media control. An important feature of the proposed solution is that it is compatible with the existing PoC architecture and protocols.

--- 1 This work was funded by Vantrix Corp. (http://vantrix.com/).
The paper is organized as follows. In section 2, we present an overview of the PoC architecture; in section 3, the proposed solution; in section 4, the advantages and disadvantages of the solution, as well as other possible solutions; and in section 5, our conclusions. 2. OVERVIEW OF THE POC ARCHITECTURE We assume that the reader is fairly familiar with the PoC system, as described in [1, 6, 7]. The overall PoC architecture, enhanced with the TS, for the generic case of users distributed over different networks is illustrated in Figure 2. Here, we see various PoC clients, each connected to its own PPF (over its own network), participating in a common session controlled by a CPF. The PoC service is built on top of a SIP/IP core, which could correspond to the 3GPP IP Multimedia Sub-system (IMS) [8, 9] or to the 3GPP2 IMS [10, 11]. Figure 2: High-level architecture of the PoC application with transcoding. The CPF provides centralized PoC session handling, which includes RTP media distribution (copies of RTP packets to each participant), Talk Burst Control (TBC), policy enforcement for participation in group sessions, and the participant information. The PPF provides PoC session handling (such as policy enforcement for incoming PoC sessions) and relays TBC messages (to manage who has permission to speak) between the PoC client and the CPF. It may also relay RTP media between the PoC client and the CPF. It is important to note that the CPF is responsible for managing (deciding) who has permission to speak at any given time and for copying media packets from the source to the other participants. The PPF cannot perform these operations. Note that, at any given time, at most one user has permission to speak. The request to speak is made when the user presses the ‘talk’ button on his mobile. The TB will last until the user releases the button, at which time a TB Complete message is sent to the CPF.
In principle, transcoding can be centrally managed by the CPF or distributed among the various PPFs. We maintain that managing it centrally at the CPF is more efficient, however, because the CPF has a global view of the session. Indeed, it ‘knows’ who has permission to speak and is responsible for duplicating the media packets. As a result, it can manage the transcoding operations optimally, in comparison to a PPF, which has only a local view of the session (further justification for this choice will be presented in section 4). We therefore focus on the management of transcoding at the CPF. 3. TRANSCODING CENTRALIZED AT THE CPF This section describes the various elements required to support transcoding centralized at the CPF. 3.1. Roles of the CPF The CPF must manage the transcoding operations in addition to managing permission to speak. The CPF has two main responsibilities with respect to enabling transcoding: 1. Manage session control operations (setup and update): - Setup: ensure that mobile devices with incompatible capabilities (codecs) will nevertheless transparently connect together in a session, and manage the transcoding operations to be performed by the TS. - Update: update transcoding operations as different users have permission to speak or as users join and leave a session. 2. Manage the flow of media streams between users: - When transcoding is required, the media streams (RTP packets) will have to flow through a TS, where they will be transcoded and then sent to their destination. This requires that the media flow be managed by the CPF. The sub-sections below will explain how these roles can be fulfilled. 3.2. Session control managed by the CPF During the session setup phase, as PoC clients may support incompatible codecs, the CPF may have to change the Session Description Protocol (SDP) [12] by adding to the SDP list the codecs supported by that client, and for which a proper transcoding to the codecs of other participating clients is possible.
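As a rough illustration, this setup step can be modeled as computing an enhanced codec offer from the codecs a client declares and the conversions the TS can perform. The following is a minimal Python sketch under assumed data structures; the pair-based model of TS capabilities is our assumption, not part of the PoC specification.

```python
# Sketch of the CPF's offer-enhancement step: given the codecs offered by the
# inviting client and the transcodings the TS can perform, compute the
# enhanced codec list to advertise to the invited clients.
# All names here are illustrative, not taken from the PoC specification.

# Transcoding pairs the TS supports: (source codec, target codec).
SUPPORTED_TRANSCODINGS = {("AMR", "EVRC"), ("EVRC", "AMR")}

def enhance_offer(offered_codecs):
    """Return the codec list the CPF can safely offer on behalf of a client:
    the client's own codecs, plus every codec reachable via one transcoding."""
    enhanced = list(offered_codecs)
    for src, dst in SUPPORTED_TRANSCODINGS:
        if src in offered_codecs and dst not in enhanced:
            enhanced.append(dst)
    return enhanced

# A client supporting only AMR can be offered to EVRC-only clients as well.
print(enhance_offer(["AMR"]))   # ['AMR', 'EVRC']
```

In a real CPF this decision would also have to account for media type (audio vs. video) and per-codec parameters, which the sketch ignores.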
For instance, a PoC client supporting only AMR would not normally be able to establish a direct session with a client supporting EVRC (see Figure 3a). However, a CPF enabling AMR-EVRC transcoding would include both EVRC and AMR among the session offerings, as illustrated in Figure 3b. Figure 3: CPF role of ensuring proper session offerings: a) CPF not supporting transcoding; b) CPF supporting transcoding and enhancing codec offerings. This operation is not as straightforward as it looks, since not only must new codecs be added to the SDP, but new IP addresses and ports must be provided in the invitations, in order for the media flows to be rerouted through the TS. Figure 4 shows an example of the control flow between the CPF, the TS, and the PoC clients when a session is set up with transcoding enabled. When client A invites another user to speak, the CPF first asks the TS to set up a transcoding session and requests a list of acceptable codecs to offer to other users. The CPF forwards the enhanced invitation to the other client, which accepts with a different codec from that of client A. The CPF then updates the transcoding session by, among other things, providing information about the selected codec. The CPF informs client A that the invitation has been accepted with the codec offered. Although this is not illustrated, it is assumed that the user of client A has obtained permission to speak. Client A then proceeds to send AMR packets to the TS, which transcodes them to EVRC and forwards them to client B. For this to happen, the IP addresses and ports in the SDP messages are modified during the invitation process (either by the CPF or by the TS), in order for users to send packets through the TS. The TS is also informed about the addresses and ports of each participant, as well as the capabilities they support (codecs).
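The SDP rewriting mentioned above (replacing connection addresses and media ports so that packets traverse the TS) can be sketched as follows. This is a deliberately minimal illustration: it touches only the `c=` and `m=audio` lines as laid out in RFC 4566, and all addresses and ports are made up.

```python
# Illustrative sketch of rewriting the SDP connection address and media port
# so that RTP flows through the Transcoding Server. This is not a full SDP
# parser; real SDP bodies have more fields and multiple media sections.

def reroute_sdp(sdp: str, ts_ip: str, ts_port: int) -> str:
    out = []
    for line in sdp.splitlines():
        if line.startswith("c=IN IP4 "):
            # Point the connection address at the TS.
            line = f"c=IN IP4 {ts_ip}"
        elif line.startswith("m=audio "):
            # Replace the media port (second field of the m= line).
            parts = line.split()
            parts[1] = str(ts_port)
            line = " ".join(parts)
        out.append(line)
    return "\n".join(out)

sdp = "v=0\nc=IN IP4 10.0.0.5\nm=audio 49170 RTP/AVP 96"
print(reroute_sdp(sdp, "192.0.2.10", 40000))
```

Whether this rewriting is done by the CPF or delegated to the TS is a deployment choice, as the text notes; either way the clients never see the other participants' real media addresses.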
This whole process is totally transparent to PoC clients, which is an important feature of the proposed solution, as there are already many PoC clients in use. Even once the session has been set up, the CPF has to continually manage the session, updating transcoding operations to be performed by the TS, as well as the media flow. Indeed, when the session parameters change (e.g. to account for a joining or departing client) or when a different user has permission to speak, the CPF will have to inform the TS of the situation so that proper transcoding and routing of streams will be performed (indeed, the media packets have to be sent to every user except the one who has permission to speak). For instance, let us consider a session where participating clients support either AMR or EVRC. If the PoC device of the user with permission to speak supports AMR, then AMR to EVRC transcoding is performed. However, if that user’s device supports EVRC, then EVRC to AMR transcoding is performed. Also, the list of destinations changes, based on who has permission to speak. In Figure 5, we can see an example of the control flow between the CPF, the TS, and the clients when a user requests permission to speak. We assume that initially no one has permission to speak. The user of client A asks for permission to speak by issuing a TB request (we assume the media flow passing through the TS is as described in option 2 of Figure 7 -- to be discussed in the next subsection). The request arrives at the TS and is forwarded to the CPF. The CPF informs the TS that the user now has permission to speak so that the latter can allocate transcoding resources properly and enforce proper control over streams. When the TS confirms that the request has been granted, the CPF informs the client that this is the case. Client A can then start sending AMR packets, which are transcoded prior to being sent to client B. 
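The per-talk-burst update can be thought of as recomputing a transcoding plan from whoever currently has permission to speak: listeners are grouped by required codec, so that each needed conversion is performed once and its result fanned out. The following hypothetical Python sketch uses data structures of our own choosing.

```python
# Sketch of the CPF recomputing the TS's transcoding plan when permission to
# speak changes. 'participants' maps each listener to the codec it accepted;
# the speaker is excluded (media is sent to everyone except the speaker).

def plan_transcoding(speaker_codec, participants):
    """Return {target_codec: [listeners]} for codecs that differ from the
    speaker's; listeners already using the speaker's codec need no transcoding."""
    plan = {}
    for user, codec in participants.items():
        if codec != speaker_codec:
            plan.setdefault(codec, []).append(user)
    return plan

# Speaker A uses AMR; B and C need EVRC, D already understands AMR.
print(plan_transcoding("AMR", {"B": "EVRC", "C": "EVRC", "D": "AMR"}))
# {'EVRC': ['B', 'C']}  -> one AMR-to-EVRC transcode, fanned out to B and C
```

This mirrors the efficiency argument made later in the discussion: when many destinations require the same format, the transcoding is performed once and delivered to many.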
Figure 5: Example of control flow for transcoding centralized at the CPF when a new user has permission to speak. Note: In this document, we make the TB (Request/Confirm) messages flow between the TS and the CPF for illustration purposes. However, in a real system, we can use an IP switch to route such packets directly to the CPF without wasting TS resources. 3.3. Media streams managed by the CPF Regarding media streams, the CPF must manage two types of traffic: TBC and the usual media. The first type relates to the requests to speak and the responses between the PoC clients and the CPF. The second type relates to the usual audiovisual media streams, such as AMR over RTP and RTCP packets. Each media stream is assigned a specific port number. For the media flow, two options are possible: 1. All the media packets arrive at the CPF (see Figure 6). The CPF processes the TBC packets arriving at the TBC Protocol (TBCP) port, while it forwards the usual media streams to the TS. The TS performs transcoding and either returns the result to the CPF (which in turn forwards them to the destination) or sends them directly to the destination. 2. All the media packets arrive at the TS (see Figure 7). The TS forwards the TBC packets arriving at the TBCP port to the CPF, while it transcodes the usual media streams and sends them to their destination. The CPF manages the messages arriving from the TBCP port and returns the result to the TS (which forwards them to the destination) or sends them directly to the destination. From a scalability and general performance perspective, option 2 is more efficient, as it minimizes the flow of information arriving at the CPF. The CPF should manage sessions and not have to deal with media packets. Figure 6: Example of media flow option 1. All media packets arrive at the CPF. Figure 7: Example of media flow option 2. All media packets arrive at the TS. 4. DISCUSSION There are several advantages to this solution: 1.
The transcoding is transparent for all PoC clients. The solution only requires changes to servers. 2. The solution is compatible with existing PoC specifications. 3. It works for all PoC group session scenarios (1 to 1, 1 to many, 1 to many to 1), ad hoc and pre-arranged, as well as chat group sessions. 4. It is scalable, since many of the CPF's operations can be offloaded by routing the media flow through the TS. • The TSs can even be distributed (there can be more than one server). 5. It can be extended to other SIP/SDP-based real-time multi-party sessions. 6. It allows the processing resources required for transcoding to be minimized. For instance, if many destinations require the same transcoded format, transcoding is performed once and the result is delivered to many. 7. It allows the transcoding operation to be customized for each user. For instance, we could select a distinct AMR bitrate for every PoC client. The main disadvantage of our solution is the added complexity required at the level of the CPF to manage session control operations and media streams. Figure 8: Transcoding performed at different PPFs: a) at the sending and receiving PPFs; b) only at the receiving PPF. Other transcoding solutions could have been considered. For instance, we could manage and perform the transcoding operations locally at the PPF level. In that case, we must first determine the best location for transcoding, since the transcoding can be performed at the sending PPF or at the receiving PPF. From a speech quality perspective, when multiple users are involved in a session, it is best to perform the transcoding at the receiving user’s PPF. This would prevent the double transcoding illustrated in Figure 8a, where the CPF has decided that EVRC is the codec to be used by all users for the session. In Figure 8b, the person who has permission to speak uses the codec format it supports, and the receiving PPF transcodes only if the receiving client does not support that codec format.
This is the most reasonable solution. However, this is a sub-optimal one from a computing perspective, as PPFs from various networks (even from the same network) may perform precisely the same transcoding operation on the same media stream for different users. For these reasons, and others that are beyond the scope of this short paper, we believe that centralizing the transcoding operation is the best option. 5. CONCLUSIONS AND FUTURE WORK In this paper, we have presented a transcoding system for real-time multi-user PoC sessions centralized at the Controlling PoC Function. The solution has several advantages, such as scalability, applicability to all PoC group session scenarios, compatibility with existing PoC specifications, and, most importantly, transparency to existing PoC clients. In future work, we propose to investigate a proxy-based transcoding solution which does not require any change to PoC servers already deployed or to PoC clients. 6. REFERENCES [1] Open Mobile Alliance, "Push to talk over Cellular (PoC) – Architecture," version 1.0.2, September 2007, OMA-AD-PoC-V1_0_2-20070905-A. [2] 3GPP TS 26.235 v7.4.0, "Packet switched conversational multimedia applications; Default codecs (Release 7)," March 2008. [3] 3GPP2 S.R0100-0, "Push-to-Talk over Cellular (PoC)." [4] Open Mobile Alliance, "Push to talk over Cellular (PoC) – Architecture," candidate version 2.0, February 2008, OMA-AD-PoC-V2_0-20080226-C. [5] IETF RFC 3261, "SIP: Session Initiation Protocol," Standards Track, June 2002. [6] Open Mobile Alliance, "Push to talk over Cellular (PoC) – Control Plane Document," version 1.0.2, September 2007, OMA-TS-PoC_ControlPlane-V1_0_2-20070905-A. [7] Open Mobile Alliance, "Push to talk over Cellular (PoC) – User Plane," version 1.0.2, September 2007, OMA-TS-PoC_UserPlane-V1_0_2-20070905-A. [8] 3GPP TS 23.228 v7.11.0, "IP Multimedia Subsystem (IMS); Stage 2 (Release 7)," March 2008.
[9] 3GPP TS 24.229 v7.11.0, "IP Multimedia Call Control based on SIP and SDP; Stage 3 (Release 7)," March 2008. [10] 3GPP2 X.S0013.2-B, "IP Multimedia Subsystem (IMS); Stage 2," version 1.0, December 2007. [11] 3GPP2 X.S0013.4-B, "IP Multimedia Call Control Protocol Based on SIP and SDP; Stage 3," version 1.0, December 2007. [12] IETF RFC 4566, "SDP: Session Description Protocol," Standards Track, July 2006.
Extending the ArgQL Specification Yannis Roussakis\(^1\), Giorgos Flouris\(^1\), Dimitra Zografistou\(^2\) and Elisjana Ymeralli\(^1\) \(^1\) Institute of Computer Science, FORTH Heraklion, Crete, Greece \(^2\) Centre for Argument Technology (ARG-tech), University of Dundee Abstract. Recent developments in Web technologies have transformed Web users from passive consumers to active creators of digital content. A significant portion of this content is of argumentative form, as users see the Web as a means to enable dialogical exchange, debating, and commenting on products, services or events. In this context, being able to identify, mine, represent, reason with, and query argumentative information found online is an important consideration. In previous work, some of the authors of this paper proposed ArgQL, a high-level declarative language for querying argumentative information found online. The current paper describes various extensions and improvements of ArgQL that bring it closer to actual use in realistic environments. These include methods to support more expressive keyword-based searching in arguments, and the support for querying non-argumentative information that is associated with arguments, such as the date of creation, author, topic etc (i.e., argument metadata). Keywords Computational argumentation, online debating, querying argumentative information, ArgQL, metadata, keyword search 1. Introduction Recent advances in Web technologies transformed its users from passive information consumers to active creators of digital content. Web became a universal terrain, where humans accommodate their inherent need for communication and self-expression. This new era revealed several new research problems. Navigating in dialogues and identifying argumentative data is one of the most challenging ones. 
On the other hand, the process of human argumentation has been the object of study in Computational Argumentation \([2, 4]\), a branch of AI that provides theoretical and computational reasoning models that simulate human cognitive behavior while arguing. ArgQL (Argumentation Query Language) \([9]\) is a high-level, representation-agnostic and declarative query language that allows for information extraction from a graph of structured and interconnected arguments (see Subsection 2.2). It allows accessing arguments stored in a repository, and is suitable for querying arguments in the Argument Web \([5]\), through queries like “how an argument with conclusion X is attacked?”. Such a repository could be created using a specialized tool for debate and argument generation (e.g., APOPSIS \([8]\)), or through argument mining techniques from textual corpora. In this paper, we improve ArgQL by proposing a set of extensions over its original specification (see Section 3). These extensions consist of the keyword search functionality over arguments (Subsection 3.1), as well as the introduction of a new notion, namely metadata, which is a versatile tool allowing the association of any property or path of properties with an argument, and the querying of arguments based on such metadata (Subsection 3.2). We argue that these functionalities allow for more meaningful queries, and constitute an important extension of the original specification. This work was performed in the context of the DebateLab project\(^1\), which conducts research towards developing the theoretical infrastructure for mining, representing and reasoning with online arguments, while delivering a suite of tools supporting the uptake of the related technologies in the domain of e-journalism.

--- \(^1\) https://debatelab.ics.forth.gr/ RuleML+RR’22: 16th International Rule Challenge and 6th Doctoral Consortium, September 26–28, 2022, Virtual EMAIL: rousakis@ics.forth.gr (A. 1); fgeo@ics.forth.gr (A. 2); dzografistou@gmail.com (A. 3); ymeralli@ics.forth.gr (A.4) © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

The rest of the paper is structured as follows: we first give some preliminaries, including a short description of the original ArgQL (Section 2). In Section 3, we describe the functionality of the implemented extensions, their implementation, and how they can be used by the user. In Section 4, we describe how the ArgQL syntax was extended to support these additional features, and conclude in Section 5. 2. Preliminaries and Related Work There are no equivalent languages to directly compare ArgQL with. Several tools have been developed to facilitate participation in online debates [7]. Although these tools allow graphical access to the provided arguments, none of them allows for a declarative query language for accessing arguments. In fact, the querying process internally employs traditional query languages such as SQL or SPARQL. ArgQL supports the different information needs of such tools and provides a language that allows the user to perform his/her own queries in a user-friendly manner. 2.1. AIF Ontology The AIF ontology (Argument Interchange Format) [1] is a popular core ontology designed to represent arguments and their relations in a structured and systematic way. It is used as an abstract and high-level language that connects arguments from various argumentation tools and applications, and thus can be queryable and searchable by several search engines, such as ArgDF [3, 5], DiscourseDB\(^2\), and also ArgQL [9, 10]. The AIF specification\(^3\) is available in various formats\(^4\), as described in [1]. An extension of AIF [6] provides better support for the representation of dialogues. 2.2. ArgQL Description ArgQL [9, 11] is a high-level, representation-agnostic, declarative query language for argumentative information.
Its syntax considers the arguments’ internal structure, as well as an abstract, graph-like view of the dialogue, shaped by the existing interrelations among arguments. It allows the elegant formulation of queries on arguments and/or the associated dialogue. Its prominence is amplified by the fact that expressing the same information needs in traditional languages (e.g., SPARQL) would require the formulation of complex queries, even for simple statements. Moreover, to do so, one needs to be aware of the underlying representation scheme of arguments. The syntax of ArgQL allows for expressions that filter the argumentative structure, combined with expressions used to identify sequences (paths) of arguments in the graph. It supports queries that fall into the following four categories (and their intersection): a) identification of individual arguments based on their content and structure, b) identification of structurally similar arguments, c) identification of different types of relations between arguments and d) identification of complete paths in the graph. The results of ArgQL can be either individual values consisting of arguments and/or the components of arguments (i.e., premises or conclusions – called propositions), or more complex expressions that correspond to complete paths of arguments that match with the queries. Some examples of ArgQL queries follow: - Description: Find arguments which have in their premises the proposition “Freedom means responsibility”. return ?a

--- \(^2\) http://discoursedb.org/ \(^4\) http://www.arg.dundee.ac.uk/aif

3. Extending ArgQL with Keyword Search and Metadata 3.1. Keyword Search Argument patterns constitute the fundamental elements that are used to match arguments in ArgQL. One of the ways to filter arguments is through string matching, but the original specification only allowed for exact string matching on propositions that appeared in argument patterns, and this was highlighted as one of the language’s shortcomings [9, 11].
In the proposed ArgQL extension, a more generic keyword search functionality can be used to filter arguments whose premise and/or conclusion contains a keyword, while supporting wildcards to allow non-exact matching. To support keyword search at the syntactic level, we reused the existing argument pattern mechanism of ArgQL, which allows searching based on the text of the premise and/or conclusion of the argument. In the original specification, the argument pattern <?pr, "text"> would identify arguments with conclusion being exactly “text”; analogously, the argument pattern <?pr/{"text"}, ?c> would identify arguments whose premise set contains the premise “text”. We extend this idea and allow argument patterns of the above form to match triples in which the “text” string is contained within the conclusion/premise respectively, in a case-insensitive manner. We also allow the special character ‘*’ after the keyword, to denote that the conclusion/premise should contain text starting with the keyword. For instance, a query to return all the arguments that contain any word starting with "Rich", "rich", "Richard", "richie" etc. in their conclusion would be: match ?a: <?p, "rich*"> return ?a To implement this new functionality, we reused the existing translation mechanism of ArgQL into SPARQL [10]. Table 1 shows how the ArgQL query presented in the above example is translated into its respective SPARQL.

**Table 1**

<table>
<thead>
<tr>
<th>Keyword-search over conclusions</th>
</tr>
</thead>
<tbody>
<tr>
<td>match ?a: &lt;?p, "rich*"&gt; return ?a</td>
</tr>
<tr>
<td>SELECT * WHERE {
?_i1 aif:claimText ?_prem_txt.
?_i1 aif:Premise ?_ra1.
?_i2 aif:claimText ?_conc_txt.
filter(regex(?_conc_txt, "^rich", "i")).
?_ra1 aif:Conclusion ?_i2.
?_ra1 rdf:type aif:RA-node. }</td>
</tr>
</tbody>
</table>

Note that the keyword search could be implemented with the use of the regex filter (as shown in Table 1), which is a generic SPARQL feature.
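For illustration, the translation of a keyword pattern into a SPARQL regex filter can be sketched as below. This is our own sketch, not the actual ArgQL translator; the variable naming and the escaping policy are assumptions.

```python
import re

# Sketch of translating an ArgQL keyword (Subsection 3.1) into a SPARQL
# regex filter: a trailing '*' means "starts with the keyword", matching is
# case-insensitive ("i" flag), otherwise the whole text must match.

def keyword_to_filter(var: str, keyword: str) -> str:
    if keyword.endswith("*"):
        pattern = "^" + re.escape(keyword[:-1])        # prefix match
    else:
        pattern = "^" + re.escape(keyword) + "$"       # exact match
    return f'FILTER(regex({var}, "{pattern}", "i"))'

print(keyword_to_filter("?_conc_txt", "rich*"))
# FILTER(regex(?_conc_txt, "^rich", "i"))
```

The `re.escape` call stands in for whatever escaping a real translator would need so that user keywords cannot inject regex metacharacters into the generated query.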
However, in big datasets, such an implementation could have performance issues. To address this, different triplestores contain optimized structures for keyword searching, which could be employed by the ArgQL implementation, if the underlying triplestore is known at design time. In the context of DebateLab we used the Virtuoso Triplestore\(^5\), exploiting its full-text index\(^6\) to achieve a very good performance in the full text search using bif:contains, a specialized Virtuoso keyword that replaces the SPARQL’s generic regex filter. As a result, the translated SPARQL query of Table 1 would actually be written as shown in Table 2 in the context of DebateLab. This also explains our syntactic choice (using “rich*” rather than “rich/i”) for ArgQL keyword search. **Table 2** <table> <thead> <tr> <th>Keyword-search over conclusions using Virtuoso triplestore</th> </tr> </thead> <tbody> <tr> <td>SELECT * WHERE { ?_i1 aif:claimText ?_prem_txt. ?_i1 aif:Premise ?_ra1. ?_i2 aif:claimText ?_conc_txt. filter(bif:contains(?_conc_txt, &quot;rich*&quot;)). ?_ra1 aif:Conclusion ?_i2. ?_ra1 rdf:type aif:RA-node. }</td> </tr> </tbody> </table> ### 3.2. Metadata In the original ArgQL specification, the main focus for searching was the arguments themselves, or paths of arguments. However, arguments may be associated with attributes in the form of metadata (e.g., the date the argument was created, the author etc), which may be of interest to the user, either as an argument filtering mechanism, or to be returned as part of the query result. To support this, we introduce the notion of metadata that refer to arguments, and are essentially: - Datatype properties referring to arguments, such as the author of the argument, the date of its creation etc. - Paths of properties which lead to datatype properties such as the topics or the title of the document which contains the corresponding argument. Querying metadata is a versatile tool, which can be used in different ways.
In particular, any type of property or path of properties associated with an argument can be classified as “metadata”, allowing ArgQL to consider it. A metadata filter is essentially a pair of the form (metadata: expression). The type of metadata determines the allowed expressions to be used in the argument pattern: - Metadata that refer to numeric and date constants support comparison operators (i.e., >, <, >=, <=, !=, =), as well as operators which define a range of values either exclusively (i.e., ( … )) or inclusively (i.e., [ … ]). - Metadata that refer to string constants support keyword-based search. Finally, we can have combinations of filters with conjunctions (&&) or disjunctions (||). Next, we provide some examples to show how ArgQL is extended to accept metadata filters. For our examples we will use two metadata properties, namely, the creationDate (tm) and the argTitle (tit), which denote the date the argument was created and the title of the document where the argument is contained respectively: - Find arguments with a creation date in April 2022 - Find arguments within articles whose title contains the keyword “airport” return ?a - Find arguments which were created after 2022-04-01 and are contained in an article whose title contains the keyword “airport” return ?a As already mentioned, metadata are not only useful as a filtering tool, but can also be returned along with the arguments’ information.

--- \(^5\) https://virtuoso.openlinksw.com \(^6\) http://docs.openlinksw.com/virtuoso/rdfsparqlrulefulltext
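The comparison-operator filters described above can be turned into SPARQL filter conditions along the following lines. This is an illustrative sketch: only the comparison operators are handled (not the range or boolean combinators), and the xsd:datetime casting mirrors the paper's date-filter translation; variable names are assumptions.

```python
# Sketch of parsing a metadata filter expression (e.g. ">=2022-04-01") into a
# SPARQL FILTER condition over the variable bound to that metadata.
# Multi-character operators must be tried before their single-char prefixes.

OPS = (">=", "<=", "!=", ">", "<", "=")

def metadata_filter(var: str, expr: str) -> str:
    for op in OPS:
        if expr.startswith(op):
            value = expr[len(op):]
            return f'FILTER(xsd:datetime({var}) {op} xsd:datetime("{value}"))'
    raise ValueError(f"unsupported filter expression: {expr}")

print(metadata_filter("?_ra1_tm", ">=2022-04-01"))
# FILTER(xsd:datetime(?_ra1_tm) >= xsd:datetime("2022-04-01"))
```

A full implementation would dispatch on the metadata's datatype (numeric, date, or string) before choosing between comparison filtering and keyword-based search.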
To support this functionality, we extend the form of the "return" block of ArgQL as follows:

return ?a, metadata_name_1(?a), metadata_name_2(?a), …

As an example, if we want to return all arguments in the knowledge base, along with their creation date (tm) and the title of the containing document (tit), we write:

match ?a: <?pr, ?c>
return ?a, tm(?a), tit(?a)

Finally, apart from returning the metadata, we can also sort the results with respect to one or more metadata, in ascending or descending order, using an order-by expression. If the order type is omitted, ascending order is used by default. The form of the order-by expression is:

order by metadata_name_1(?a) {ASC/DESC}, metadata_name_2(?a) {ASC/DESC}, …

In the above example, if we wanted the arguments ordered by creation date in descending order, we would write:

match ?a: <?pr, ?c>
return ?a, tm(?a), tit(?a)
order by tm(?a) DESC

As mentioned, metadata are essentially datatype properties, or paths of properties leading to datatype values. Thus, to translate an ArgQL query into the corresponding SPARQL query, we only have to include the triple patterns for the required metadata, together with the corresponding filter. Since we support arbitrary metadata, these triple patterns are not known at design time and must be provided at initialization time (through a configuration file) to the ArgQL implementation. This configuration file maps each metadata type to a metadata definition, i.e., a set of triple patterns to be included in the translated SPARQL query in order to identify the respective metadata. Table 3 shows the translated SPARQL query for a metadata date filter requiring the creation date of the argument to be on or after 2022-04-01.

Table 3 Extended ArgQL with date filters

<table>
<thead> <tr> <th>ArgQL</th> <th>SPARQL</th> </tr> </thead>
<tbody> <tr>
<td>match ?a: &lt;?pr, ?c&gt; [tm: "&gt;=2022-04-01"]<br>return ?a</td>
<td>SELECT * WHERE {<br>?_i1 aif:claimText ?_pr_txt.<br>?_i1 aif:Premise ?_ra1.<br>?_i2 aif:claimText ?_conc_txt.<br>?_ra1 aif:Conclusion ?_i2.<br>?_ra1 rdf:type aif:RA-node.<br>?_ra1 aif:creationDate ?_ra1_tm.<br>filter ( xsd:datetime(?_ra1_tm) &gt;= xsd:datetime("2022-04-01") )<br>}</td>
</tr> </tbody>
</table>

For returning metadata, no extra treatment of the SPARQL query is required, as the metadata values are already returned (due to SELECT *, see Table 3). If we want to order the resulting arguments with respect to a metadata type, we add an ORDER BY expression to the SPARQL query, as shown in Table 4.

Table 4 Extended ArgQL with metadata filters and returned metadata values, sorted

<table>
<thead> <tr> <th>ArgQL</th> <th>SPARQL</th> </tr> </thead>
<tbody> <tr>
<td>match ?a: &lt;?pr, ?c&gt; [tm: "&gt;=2022-04-01"]<br>return ?a, tm(?a)<br>order by tm(?a) ASC</td>
<td>SELECT * WHERE {<br>?_i1 aif:claimText ?_pr_txt.<br>?_i2 aif:claimText ?_conc_txt.<br>?_i1 aif:Premise ?_ra1.<br>?_ra1 aif:Conclusion ?_i2.<br>?_ra1 rdf:type aif:RA-node.<br>?_ra1 aif:creationDate ?_ra1_tm.<br>filter ( xsd:datetime(?_ra1_tm) &gt;= xsd:datetime("2022-04-01") )<br>} ORDER BY ASC(?_ra1_tm)</td>
</tr> </tbody>
</table>

Note that, for this functionality to work, the metadata values need to be stored in the underlying knowledge graph. Since we adopt an open architecture that allows any developer-defined metadata to be supported, AIF does not necessarily provide features for storing this information, and appropriate additional properties need to be defined. A configuration file associates such properties with the respective metadata, allowing ArgQL to be extended with any arbitrary metadata needed for the application at hand. In the context of DebateLab, such metadata are included in the DebateLab database at ingestion time, thereby allowing ArgQL to query this information.

4. Extended ArgQL Syntax

We briefly mentioned above the syntactic extensions of ArgQL supporting keyword searching and metadata querying. Here we provide a more complete description, in the form of a BNF grammar (see Table 5), clearly showing (in underlined italic font) the additions to the original BNF provided in [9].
More specifically, for the keyword search we extended the proposition expression to recognize the starts-with keyword search (signalled by a trailing '*' character), since we are using Virtuoso's bif:contains predicate. For the metadata, we introduced some new expressions in order to recognize metadata definitions. First, we extended the argpattern expression by adding the metadata expression, namely md_express. A metadata expression consists of a set of metadata filters (md_filter) combined with conjunctions (&&) or disjunctions (||). A metadata filter, in turn, consists of a metadata variable name (md_name) and a filter which, as mentioned, depends on the type of the metadata (see the expressions num_filter, rang_filter, keyw_filter). Finally, since metadata values can also be returned along with the arguments' information, we extended the returnvalue expression with a set of metadata (md_return_val) and an optional ascending or descending order.

Table 5 Extended ArgQL syntax (reserved words in bold, new extensions in underlined italics)

query ::= 'MATCH' dialoguepattern (',' dialoguepattern)* 'RETURN' returnvalue (',' returnvalue)*
dialoguepattern ::= argpattern | argpattern pathpattern dialoguepattern
argpattern ::= variable | (variable ':')? '<' premisepattern ',' concluspattern '>' md_express?
premisepattern ::= '[' ('/' | '.') (propset | variable) ']'
concluspattern ::= variable | proposition
propset ::= '{' proposition (',' proposition)* '}'
pathpattern ::= pp ('/' pp)*
pp ::= relation

5. Conclusion and Future Work

We presented two extensions of the ArgQL specification, namely the keyword-search functionality and the metadata-querying functionality. Both are significant components of most query languages (especially in the context of the Semantic Web), but were lacking from the original ArgQL specification.
Thus, we argue that they enhance ArgQL's expressive power in meaningful ways, and believe they will assist users in addressing more complex and intuitive information needs. As a future step, we plan to extend the new functionalities to consider content equivalences (rephrasings), i.e., cases where two different propositions express the same thing in different words, a common scenario in real-world argumentation. We also plan to provide an efficient implementation over a specific triplestore. As DebateLab deals with the domain of e-journalism, we are interested in using real-life data from that domain and conducting experiments to evaluate both the performance of our implementation and the usefulness of the returned results for our end users (i.e., journalists). Additional useful features could include a tool supporting naïve and/or advanced users in writing their own ArgQL queries.

Acknowledgements. This work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "1st Call for H.F.R.I. Research Projects to support Faculty Members and Researchers and the procurement of high-cost research equipment" (Project #4195).

6. References
Compiling Fortran 77D and 90D for MIMD Distributed-Memory Machines

Alok Choudhary* Geoffrey Fox* Seema Hiranandani† Ken Kennedy† Charles Koelbel† Sanjay Ranka* Chau-Wen Tseng†

*Northeast Parallel Architectures Center, 111 College Place, Syracuse University, Syracuse, NY 13244-4100
†Center for Research on Parallel Computation, Rice University, P.O. Box 1892, Houston, TX 77251-1892

Abstract

Fortran D provides a set of data decomposition specifications for either Fortran 77 or Fortran 90. In this paper we present an integrated approach to compiling Fortran 77D and Fortran 90D programs into SPMD (Single Program Multiple Data) message-passing programs for efficient execution on MIMD distributed-memory machines. The integrated Fortran D compiler relies on two key observations. First, array constructs may be scalarized by the Fortran 90 front end into forall loops without loss of information. Second, loop fusion, partitioning, and sectioning optimizations in the Fortran D back end are useful for both Fortran D dialects.

1 Introduction

Parallel computing on distributed-memory machines is very cost-effective, but is hindered by the lack of portability of the resulting programs. Our goal is to establish a machine-independent programming model for data-parallel programs that is easy to use, yet performs with acceptable efficiency on different parallel architectures. Toward this end we are developing the Fortran D language extensions, which are applicable to both Fortran 77 and Fortran 90. Hereafter, we use "Fortran D" to refer to concepts common to both dialects, and Fortran 77D/Fortran 90D for dialect-specific comments. In this paper, we describe a unified strategy for compiling both Fortran 77D and Fortran 90D for the Intel iPSC/860 and Touchstone Delta, two MIMD distributed-memory machines. The principal issues involved in compiling Fortran 90D are partitioning the program across multiple nodes and scalarizing it for execution on each individual node.
Previous work has described the partitioning process [20, 21, 22]. Here, we argue that efficient scalarization is also an interesting problem, and demonstrate how to integrate it with program partitioning. We also describe runtime library support for such a compiler. The remainder of this paper presents a brief overview of the Fortran D language and compilation strategy, then describes the Fortran 90D and 77D front ends and the Fortran D back end in detail. The design of the runtime library is discussed, and an example is used to illustrate the compilation process. We conclude with a discussion of related work.

2 Fortran D Language

In this section, we briefly overview the aspects of Fortran D relevant to this paper. The complete language is described elsewhere [16]. Fortran D provides explicit control over data partitioning with alignment and distribution specifications. The DECOMPOSITION declaration specifies an abstract problem or index domain. The ALIGN statement specifies fine-grain parallelism, mapping each array element onto one or more elements of the decomposition. This provides the minimal requirement for reducing data movement for the program given an unlimited number of processors. The DISTRIBUTE statement specifies coarse-grain parallelism, grouping decomposition elements and mapping them and aligned array elements to physical processor memories. Each dimension of the decomposition is distributed in a block, cyclic, or block-cyclic manner, or is left undistributed. Some sample data alignments and distributions are shown in Figure 1. Fortran D provides FORALL loops to permit the user to specify difficult parallel loops in a deterministic manner [4]. In a FORALL loop, each iteration uses only values defined before the loop or within the current iteration.
When a statement in an iteration of the FORALL loop accesses a memory location, it receives a value written in the current iteration, if there is one; otherwise, it gets the value that the memory location held before execution of the FORALL loop. A merging semantics ensures that a deterministic value is obtained after the FORALL loop if several iterations assign to the same memory location. This semantics has also been called copy-in/copy-out semantics.

3 Fortran D Compilation Strategy

3.1 Overall Strategy

Our strategy for parallelizing Fortran D programs for distributed-memory MIMD computers is illustrated in Figure 2. In brief, we transform both Fortran 77D and Fortran 90D to a common intermediate form, which is then compiled to code for the individual nodes of the machine. We have several pragmatic and philosophical reasons for this strategy. Sharing a back end between the Fortran 77D and Fortran 90D compilers avoids duplication of effort and allows both compilers to take immediate advantage of new optimizations. Decoupling the Fortran 77D and Fortran 90D front ends allows them to be machine independent. Finally, one of our long-term research goals is to define an efficient compiler/programmer interface for programming the nodes of a massively parallel machine. Providing a common intermediate form helps us experiment with this interface.

3.2 Intermediate Form

To compile both dialects of Fortran D using a single back end, we must have an appropriate intermediate form. The most important requirement is that the intermediate form capture three aspects of the program:

- Data decomposition information, telling how data is aligned and distributed among processors.
- Parallelization information, telling when operations in the code are independent.
- Communication information, telling what data must be transferred between processors.

In addition, the primitive operations of the intermediate form should be operations that can be translated simply for single-processor execution.
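Because FORALL loops carry much of the parallelism information in this intermediate form, it helps to pin down the copy-in/copy-out semantics described in Section 2 with a small executable illustration. Python is used here purely as executable pseudocode, and the helper functions are ours, not part of the Fortran D system:

```python
def forall_shift(a):
    """FORALL-style shift: a[i] = a[i-1] for i = 1..n-1,
    with copy-in/copy-out semantics."""
    old = list(a)                 # copy-in: snapshot of pre-loop values
    for i in range(1, len(a)):
        a[i] = old[i - 1]         # every read comes from the snapshot
    return a

def do_shift(a):
    """The same loop body as an ordinary sequential DO loop."""
    for i in range(1, len(a)):
        a[i] = a[i - 1]           # reads see writes from earlier iterations
    return a
```

For the input [1, 2, 3, 4], the FORALL version yields [1, 1, 2, 3], whereas the sequential DO version yields [1, 1, 1, 1], because each iteration of the DO loop observes the value written by the previous one. Preserving this distinction is exactly why FORALL can be kept in the intermediate form without loss of information.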
We have chosen Fortran 77 with data decompositions, FORALL, and intrinsic functions as the intermediate form for the Fortran D compiler. We claim that this form preserves all of the information available in a Fortran 90 program, but maintains the flexibility of Fortran 77. 3.3 Node Interface Another topic of interest in the overall strategy is the node interface — the program produced by the Fortran D compiler. It must be both portable and efficient. In addition, the level of the node interface should be neither so high that efficient translation to object code is impossible, nor so low that its workings are completely opaque to the user. We have selected Fortran 77 with calls to a runtime library based on Express [30] as the node interface. We will eventually evaluate this experience to further refine our notion of an "optimal" support level. 4 Fortran D Compiler The Fortran D compiler consists of three parts: two front ends to process input programs into the common intermediate form, and one back end to compile this to the node interface. The Fortran D compiler is implemented on top of the ParaScope programming environment to take advantage of its advanced features [12, 25]. 4.1 Fortran 90D Front End The function of the Fortran 90D front end is to scalarize the Fortran 90D program, translating it to an equivalent Fortran 77D program. This is necessary because the underlying machine executes computations sequentially, rather than on entire arrays at once as specified in Fortran 90. For the Fortran D compiler we find it useful to view scalarization as three separate tasks: - **Scalarizing Fortran 90 Constructs.** Many Fortran 90 features are not present in our intermediate form and must be translated into equivalent Fortran 77D statements. - **Fusing Loops.** Simple scalarization results in many small loop nests. Fusing these loop nests can improve performance, as we show below. 
- **Sectioning.** The Fortran D compiler must divide Fortran 90's atomic array operations into sections that can execute on the target machine [5, 7].

We defer both loop fusion and sectioning to the Fortran D back end, because Fortran 77D programs can also benefit from these optimizations. The Fortran 90D front end handles scalarizing Fortran 90 constructs that have no equivalent in the Fortran 77D intermediate form. There are three principal Fortran 90 language features that must be scalarized: array constructs, WHERE statements, and intrinsic functions [9].

**Array Constructs** Fortran 90 array constructs allow entire arrays to be manipulated atomically. These operations enhance the clarity and conciseness of the program, and help to make parallelism explicit. Previous research has shown that efficiently implementing array constructs for scalar machines is a difficult problem [5, 7]. The Fortran 90D front end can defer these problems by relying on a key observation: the semantics of a FORALL loop are identical to array-assignment semantics. Array constructs can therefore be translated into equivalent FORALL loops with no loss of information. Some optimizations to simplify the resulting index expressions are still useful, but not central to the design of our compiler.

**WHERE Statement** Another Fortran 90 feature with no Fortran 77 equivalent is the WHERE statement. It takes a boolean argument that is used to mask array operations, inhibiting assignments to some array elements. As with the right-hand side of an array assignment, the boolean argument must be completely evaluated before executing the body of the statement. Using the same insight as above, the WHERE statement may be translated into an equivalent IF statement nested within a FORALL loop.

**Intrinsic Functions** Intrinsic functions not only provide a concise means of expressing operations on arrays, but also identify computation patterns that may be difficult to detect automatically.
Fortran 90 provides intrinsic functions for operations such as shift, reduction, transpose, and matrix multiplication. To avoid excessive complexity and machine dependence in the Fortran D compiler, we convert most Fortran 90 intrinsics into calls to customized runtime library functions. However, some processing is necessary. Like the WHERE statement, some intrinsic functions accept a mask expression that restricts execution of the computation. The Fortran 90D front end may need to evaluate the expression and store it in a temporary boolean array before performing the computation. As with the other constructs, avoiding these temporaries is a key optimization.

**Temporary Arrays** When the Fortran 90D front end needs to create temporary arrays, it must also generate appropriate Fortran D data decomposition statements. A temporary array is usually aligned and distributed in the same manner as its "master" array, i.e., the array whose computation created the need for the temporary.

### 4.2 Fortran 77D Front End

The Fortran 77D front end is simple because Fortran 77D is very close to the intermediate form. Its only task is to detect complex high-level parallel computations, replacing them by equivalent Fortran 90 intrinsics to take advantage of the optimized runtime library. With advanced program analysis, some operations such as DOTPRODUCT, SUM, or TRANSPOSE can be detected automatically [31].

### 4.3 Fortran D Back End

The Fortran D back end performs two main functions: partitioning the program onto the nodes of the parallel machine, and completing the scalarization of Fortran D into Fortran 77. This process is organized as a series of data-dependence-based transformations. Data dependence is a concept developed for vectorizing and parallelizing compilers to characterize memory accesses at compile time [8, 27, 34]. True dependences indicate a definition followed by a use; anti-dependences indicate a use before a definition.
Data dependences may be either loop-carried or loop-independent. The transformations performed by the back end are loop fusion, followed by partitioning and sectioning. Loop fusion is performed first because it simplifies partitioning by reducing the need to consider inter-loop interactions. It also enables optimizations such as strip-mining and loop interchange [8, 34]. In addition, loop fusion does not increase the difficulty of later compiler phases. In comparison, sectioning is performed last because it can significantly disrupt the existing program structure, increasing the difficulty of partitioning analysis and optimization. 4.3.1 Loop Fusion Loop fusion is legal if it does not reverse the direction of any data dependence between two loop nests [5, 6, 34]. Choosing how to fuse loops to maximize the number of parallel loops is NP-hard in general, even for singly nested loops without statement reordering [18]. The Fortran D back end fuses adjoining loop nests if and only if no new true loop-carried dependences are created for the fused loops. This is a heuristic which we have reason to think will perform well in practice. Loop fusion has the added advantage, when used in concert with other transformations, of improving memory reuse in the resulting program, a significant benefit on modern processors [1, 5, 7, 27]. Our results in Section 6 give an example of this. We have described other transformations with the same goals elsewhere [24]. 4.3.2 Program Partitioning The major step in compiling Fortran D for MIMD distributed-memory machines is to partition the data and computation across processors, introducing communication where needed. We present a brief overview of the Fortran D partitioning algorithm below; details are discussed elsewhere [20, 21, 22]. Slight extensions are introduced here for compiling FORALL loops and intrinsics. Analyze Program The Fortran D compiler performs aggressive scalar dataflow analysis and dependence testing. 
Partition data The compiler analyzes Fortran D alignment and distribution statements to calculate the array section owned by each processor.

Partition computation The compiler partitions computation across processors using the "owner computes" rule: each processor only computes values of data it will store [13, 32, 36]. It relaxes this rule for reduction operations and assignments to private variables. This results in each processor being assigned a subset of the iterations of each loop.

Analyze communication Once the work partition is calculated, it is used to mark references that may result in nonlocal accesses.

Optimize communication The compiler examines each marked nonlocal reference, using the results of the analysis phases, and attempts to reduce communication costs. Message vectorization [32] and choosing the loop level at which to insert the communication [10, 17] are the key optimizations applied here. To properly handle FORALL loops, the Fortran D compiler treats all their carried true dependences as anti-dependences. This reflects the semantics of the FORALL loop and ensures that the message vectorization algorithm will insert all communication outside the loop.

Manage storage The compiler collects the extent and type of nonlocal data accesses to calculate the storage required for nonlocal data. Overlaps [17] and temporary buffers are used, depending on the classification of the nonlocal access.

Generate code Finally, the Fortran D compiler uses the results of the above phases to generate the node interface program. Some of the tasks involved are reducing array and loop bounds to instantiate the data and computation partition, generating the actual communication calls, and adjusting parameters needed for the intrinsic functions in the runtime library.

4.3.3 Sectioning

The final phase of the Fortran D back end completes the scalarization process by applying sectioning to convert FORALL loops into DO loops [5, 7].
True dependences carried on the FORALL loop represent instances where values are defined in the loop and used on later iterations, thus violating FORALL semantics. The Fortran D compiler uses this analysis in the back end to detect cases where temporary storage may be needed. This allows us to avoid the straightforward translation of

\[ A(2:N-1) = \frac{(A(1:N-2) + A(3:N))}{2} \]

into

DO i = 2, N-1
  TMP(i) = A(i-1)
ENDDO
DO i = 2, N-1
  A(i) = (TMP(i) + A(i+1))/2
ENDDO

(where the temporary array is needed to preserve the old values of A(i-1)). Instead, the Fortran D compiler can reduce the amount of temporary storage required through data prefetching [7], producing

X = A(1)
DO i = 2, N-1
  Y = (X + A(i+1))/2
  X = A(i)
  A(i) = Y
ENDDO

5 Runtime Library

Fortran 90 intrinsic functions represent computations (such as TRANSPOSE and MATMUL) that may have complex communication patterns. It would be possible to support these functions at compile time, but we have chosen to implement them in the runtime library instead, to reduce the complexity and machine dependence of the compiler. The Fortran D compiler translates intrinsics into calls to runtime library routines using a standard interface. Additional information is passed describing bounds, overlaps, and partitioning for each array dimension. The runtime library is built on top of the Express [30] communication package to enable portability to other systems. Table 1 presents a sample of performance numbers for a subset of the intrinsic functions on the iPSC/860. A detailed performance study is presented in [2]. The times in the table include both the computation and communication times for each function. For most of the functions we were able to obtain almost linear speedups. In the case of the TRANSPOSE function, going from one processor to two or four actually results in an increase in time due to the communication requirements. However, for larger multiprocessors the times decrease as expected.
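Returning to the scalarization example of Section 4.3.3, the rotating-scalar (data-prefetching) translation can be spot-checked against the reference array-assignment semantics with a small executable sketch. Python serves here only as executable pseudocode, and the helper names are ours:

```python
def array_semantics(a):
    """Reference semantics of A(2:N-1) = (A(1:N-2) + A(3:N)) / 2:
    every right-hand side is evaluated before any store."""
    old = list(a)                       # snapshot of pre-assignment values
    for i in range(1, len(a) - 1):
        a[i] = (old[i - 1] + old[i + 1]) / 2
    return a

def prefetch_version(a):
    """Rotating-scalar translation: X always holds the OLD value of a[i-1],
    so no temporary array is needed."""
    x = a[0]
    for i in range(1, len(a) - 1):
        y = (x + a[i + 1]) / 2          # a[i+1] has not been overwritten yet
        x = a[i]                        # save old a[i] before the store
        a[i] = y
    return a
```

Both functions produce identical results, whereas a naive in-place loop would read already-overwritten values of a[i-1]; this is precisely the equivalence the prefetching transformation relies on.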
6 Example and Performance Results

In this section we demonstrate how an example Fortran 90D code is compiled into executable code for a 32-node iPSC/860. Figure 3 shows a Fortran 90D code fragment implementing one sweep of ADI integration on a 2D mesh, a typical numerical algorithm. Fortran D specifications partition the 2D array into 32 blocks of rows, one for each processor. (We use a fixed number of processors only to simplify the output programs.) We tested three versions of the ADI code on the iPSC/860, two of which are shown in Figures 4 and 5. For clarity, many details in the codes have been elided or simplified, and the second loop in the original program is deleted. First, the code in Figure 4 represents the simplest translation of Fortran 90D to our node interface. This is shown as the dotted line in Figure 6. Second, the code in Figure 5 represents the fully optimized version of the same program. Loop fusion has been applied and used to enable pipelining of computation and communication. This appears as the solid line in Figure 6. Third, an intermediate version, which applied only the loop fusion optimization to Figure 4, represents applying only Fortran 90 scalarization optimizations. This appears as the dashed line in Figure 6. All three programs were compiled under -O4 using Release 2.0 of if77, the iPSC/860 compiler. Timings were taken using dclock() on a 32-node Intel iPSC/860 with 8 MB of memory per node. The first version of the program shows almost no speedup. Here, sequentialization is caused by a conflict between the natural parallelism (independent rows) and the data distribution (independent columns). Loop fusion provides a small improvement due to memory reuse. Pipelining, however, exposes parallelism to achieve the best performance. Note that this required both Fortran 90 and Fortran 77 optimizations to achieve.
We conclude that some algorithms require sophisticated optimization both in the scalarization and partitioning phase; a compiler must incorporate both to compete with hand-crafted code. 7 Related Work The Fortran D compiler is a second-generation distributed-memory compiler that integrates and extends many previous analysis and optimization techniques. Many distributed-memory compilers reduce communication overhead by aggregating messages outside of parallel loops [23, 26] or parallel procedures [19, 33], while others rely on functional or data-flow languages [28, 32]. In comparison, the Fortran D compiler uses dependence analysis to automatically exploit parallelism and extract communication even from partially sequential loops. The Fortran D compiler also applies analysis and optimization before code generation, unlike earlier transformation-based systems that apply program transformations and partial evaluation after inserting fine-grain communications [13, 32, 36]. Several other projects are also developing Fortran 90 compilers for MIMD distributed-memory machines. This group includes ADAPT, based on a runtime intrinsic library [29], CM FORTRAN, which introduced alignment and layout specifications [3], and a proposal by Wu & Fox that also discussed program generation and optimization [35]. The last project has been most directly influential on our work. PARAGON is a version of C extended with array syntax, operations, reductions, permutations, and distribution specifications [14]. None of these systems describe scalarization or communication optimizations in detail, to which we give high priority. We have already mentioned previous work on Fortran 90 implementation on scalar and vector machines [1, 5, 7, 27]. We use many of the same transformations, but in a new context. 8 Conclusions This paper presented an integrated approach to compiling both Fortran 77D and 90D. Our approach is based on a common back-end implementing optimizations useful for both dialects. 
A key feature of this research is its emphasis on efficient scalarization of Fortran 90 constructs, which has an important effect on individual-node performance. We have also described how a portable runtime library can reduce the complexity of the compiler. The Fortran D compiler for MIMD distributed-memory machines is only one part of the Fortran D project; we are also working on Fortran 77D and Fortran 90D compilers for SIMD machines, translations between the two Fortran dialects, support for irregular computations, and environmental support for static performance estimation and automatic data decomposition [10, 11, 15].

9 Acknowledgements

We are grateful to the ParaScope and Fortran D research groups for their assistance, and to Parasoft for providing the Fortran 90 parser and Express. This research was supported by the Center for Research on Parallel Computation (CRPC), a National Science Foundation Science and Technology Center. Use of the Intel iPSC/860 was provided by the CRPC under NSF Cooperative Agreement Nos. CCR-8809615 and CDA-8619893, with support from the Keck Foundation. This work was sponsored by DARPA under contract # DABT63-91-C-0028. The content of the information does not necessarily reflect the position or policy of the government, and no official endorsement should be inferred.
References

    DECOMPOSITION D(N,N)
    REAL A(N,N)
    ALIGN A(I,J) WITH D(J-2,I+3)
    DISTRIBUTE D(:, BLOCK)
    DISTRIBUTE D(CYCLIC, :)

Figure 1: Fortran D Data Decomposition Specifications

Figure 2: Fortran D Compilation Strategy. Fortran 77D and Fortran 90D programs (with data decompositions, FORALL, array constructs, WHERE and intrinsics) are translated into a common intermediate specification, compiled by the Fortran D back end into Fortran 77 with message passing and calls to the portable runtime library, and then compiled by native Fortran 77 compilers for the Touchstone Delta, iPSC/860, Ncube/2 and workstations.

<table> <thead> <tr> <th>Proc</th> <th>ALL (1K x 1K)</th> <th>ANY (1K x 1K)</th> <th>MAXVAL (1K x 1K)</th> <th>PRODUCT (256K)</th> <th>DOT PRODUCT (256K)</th> <th>TRANSPOSE (512 x 512)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>580.6</td> <td>606.2</td> <td>658.8</td> <td>90.1</td> <td>164.8</td> <td>299.0</td> </tr> <tr> <td>2</td> <td>291.0</td> <td>303.7</td> <td>330.4</td> <td>50.0</td> <td>83.0</td> <td>575.0</td> </tr> <tr> <td>4</td> <td>146.2</td> <td>152.6</td> <td>166.1</td> <td>25.1</td> <td>42.2</td> <td>395.0</td> </tr> <tr> <td>8</td> <td>73.84</td> <td>77.1</td> <td>84.1</td> <td>13.1</td> <td>22.0</td> <td>213.0</td> </tr> <tr> <td>16</td> <td>37.9</td> <td>39.4</td> <td>43.4</td> <td>7.2</td> <td>12.1</td> <td>121.0</td> </tr> <tr> <td>32</td> <td>19.9</td> <td>20.7</td> <td>23.2</td> <td>4.2</td> <td>7.4</td> <td>69.0</td> </tr> </tbody> </table>

Table 1: Performance of Some Fortran 90 Intrinsic Functions

    PARAMETER (N = 512)
    PARAMETER (N$PROC = 32)
    REAL X(N,N), A(N,N), B(N,N)
    DECOMPOSITION DEC(N,N)
    ALIGN X, A, B WITH DEC
    DISTRIBUTE DEC(BLOCK,:)
    DO I = 2, N
      X(I,1:N) = X(I,1:N) - X(I-1,1:N)*A(I,1:N)/B(I-1,1:N)
    ENDDO
    X(N,1:N) = X(N,1:N) / B(N,1:N)
    J = N
    DO I = 1, N-1
      J = J - 1
      X(J,1:N) = (X(J,1:N)-A(J+1,1:N)*X(J+1,1:N))/B(J,1:N)
    ENDDO

Figure 3: ADI integration in Fortran 90D

    REAL X(0:17,512), A(16,512), B(0:16,512)
    MY$PROC = myproc() {* 1...32 *}
    LB1 = MAX((MY$PROC-1)*16+1,2) - (MY$PROC-1)*16
    IF (MY$PROC .GE. 2) THEN
      recv(X(0,1:512),B(0,1:512),MY$PROC-1)
    ENDIF
    DO I = LB1, 16
      DO K = 1, N
        X(I,K) = X(I,K) - X(I-1,K)*A(I,K)/B(I-1,K)
      ENDDO
      DO K = 1, N
        B(I,K) = B(I,K) - A(I,K)*A(I,K)/B(I-1,K)
      ENDDO
    ENDDO
    IF (MY$PROC .LE. 31) THEN
      send(X(16,1:512),B(16,1:512),MY$PROC+1)
    ENDIF

Figure 4: ADI Output (Unoptimized)

    REAL X(0:17,512), A(16,512), B(0:16,512)
    MY$PROC = myproc() {* 1...32 *}
    LB1 = MAX((MY$PROC-1)*16+1,2) - (MY$PROC-1)*16
    DO KK = 1, 512, 4
      IF (MY$PROC .GE. 2) THEN
        recv(X(0,KK:KK+3),B(0,KK:KK+3),MY$PROC-1)
      ENDIF
      DO I = LB1, 16
        DO K = KK, KK+3
          X(I,K) = X(I,K) - X(I-1,K)*A(I,K)/B(I-1,K)
          B(I,K) = B(I,K) - A(I,K)*A(I,K)/B(I-1,K)
        ENDDO
      ENDDO
      IF (MY$PROC .LE. 31) THEN
        send(X(16,KK:KK+3),B(16,KK:KK+3),MY$PROC+1)
      ENDIF
    ENDDO

Figure 5: ADI Output (w/ Loop Fusion & Pipelining)

Figure 6: Performance of three versions of ADI integration
A Roadmap to Self-service Data Lakes in the Cloud

Unlocking the value of streaming data by simplifying big data discovery, processing and management in cloud data lakes.

Written by Yoni Iny, Co-founder & CTO, Upsolver

Table of Contents

- Executive Summary
- Drivers of Streaming Data Complexity
  - Non-static data
  - Unstructured versus tabular
  - Experimental versus standardized use cases
  - Storage is a major cost factor
- Rise of the Data Lake
  - Challenges and entry barriers
- Upsolver: Reducing Friction and Complexity with a Streaming Data Platform
  - Design Principles
  - Reference Architecture
- The Platform: Core Components
  - Data Management and Governance
  - Stateful Stream Processing, ETL and Data Transformation
  - Real-time and Ad-hoc Data Consumption
- Use Case Examples and Reference Architectures
  - Sisense Drives New Analytical Insights from S3
  - Bigabid Builds a Real-time Architecture
- Next Steps
Executive Summary

While the transformative nature of big data is commonly celebrated, its organizational applications are often less glamorous: Gartner analyst Nick Heudecker has estimated that 85% of big data projects fail, largely due to misaligned expectations, management resistance, lack of skills, and data governance challenges. From a technology perspective, failure takes the form of an inability to deliver tangible business benefits from big data initiatives within actionable time frames, despite significant resources and engineering hours being sunk into them. Technical departments struggle to maintain the complex software and hardware stacks needed to work effectively with big data, and encounter difficulties in hiring and retaining skilled personnel across the analytical life-cycle - from data scientists who translate swathes of unstructured data into relevant insights, to data engineers who build the infrastructure that enables data exploration and experimentation.

In this paper we will present the infrastructural challenges of working with big data streams; we will then introduce our solution: Upsolver, a Streaming Data Platform that provides data management, processing and delivery as services, within a data lake architecture that utilizes the scalability of object storage such as Amazon S3. We will show how Upsolver simplifies the process of transforming streaming data into workable data, dramatically shortening time-to-value and increasing success rates for streaming data projects.

Drivers of Streaming Data Complexity

Streaming data is not merely big data - it is also fundamentally different data, produced by different sources and requiring its own set of tools to handle. Accordingly, the solution is rarely as simple as getting a bigger database.
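To make "fundamentally different data" concrete, here is a made-up clickstream fragment (all field names are hypothetical): each record is a small, schema-less JSON object arriving continuously, rather than a row in a fixed table.

```python
import json

# Hypothetical clickstream events, as a web app or sensor might emit them.
events = [
    {"ts": "2019-03-01T12:00:01Z", "user": "u1", "event": "page_view",
     "page": "/pricing"},
    {"ts": "2019-03-01T12:00:02Z", "user": "u2", "event": "click",
     "target": {"element": "signup", "x": 120, "y": 40}},   # nested payload
    {"ts": "2019-03-01T12:00:03Z", "user": "u1", "event": "page_view",
     "page": "/docs", "referrer": "/pricing"},              # extra field
]

# In a stream these arrive one at a time, typically as newline-delimited JSON.
stream = "\n".join(json.dumps(e) for e in events)

# Unlike a static table, the records do not share one fixed set of columns.
schemas = [frozenset(e) for e in events]
print(len(set(schemas)))  # 3 distinct field sets across 3 records
```

No two of these records have the same field set, which is exactly what makes them awkward to load into a tabular store.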
Traditional organizational data originates from operational systems such as ERP, CRM, finance and HR systems-of-record; streaming data is typically produced by sources such as industrial sensors, clickstreams, servers and user app activity. This creates several core differences between streaming and traditional data:

Non-static data

A traditional dataset typically captures a clearly delineated time and space, e.g. the number of items currently in stock, or new employees hired in the past year; big data, by contrast, is generated constantly, delivered in a continuous stream of small files capturing event-based interactions on a second-by-second basis, e.g. servers relaying their current state of operation, or a log of user activity on a mobile app. Querying this data means imposing an arbitrary start and end point, and creates challenges around data freshness, missing records and synchronization.

Unstructured versus tabular

Due to the high velocity at which event-based data is produced, and its highly disparate nature, it will be stored as objects (e.g. JSON files) rather than tables. The relational databases that power almost every enterprise and every application are built on tabular storage; using them to store unstructured data requires lengthy cleansing and transformation processes.

Experimental versus standardized use cases

Common data sources lend themselves to tried-and-tested forms of analysis and reporting; the potential of streaming data is unlocked through exploratory techniques, predictive modeling and machine learning. Applying these analyses requires broader and more flexible access to data, and can be hindered by the need to structure data according to existing enterprise data warehouse specifications.

Storage is a major cost factor

Databases can usually get away with using expensive storage with multiple redundancies, since the size of the data is relatively small and the compute and licensing costs outweigh the cost of storage.
When dealing with big data, storage costs can easily dwarf all other project costs.

Rise of the Data Lake

The unique characteristics of streaming data lead organizations to accept the limitations of relational databases for storing and analyzing streams; instead, recent years have seen a growing prominence of data lake architectures. In this framework, the organization foregoes the attempt to ‘tame’ big data as it is being ingested, instead adopting the principle of store now, analyze later: records are stored in their raw, unstructured and schema-less form in a distributed file system such as HDFS, or in cloud object storage such as Amazon S3. The complicated process of transforming the data into workable form is postponed until the need arises to do so, in order to satisfy a new business application or answer an analytical query.

Data lakes present several advantages for streaming data architectures:

- **Easy to ingest data**: the data lake approach removes the need for the complex and expensive ETL processes that are prerequisite to storing streaming data in a relational database; instead, a relevant subset of the data can be transformed and wrangled as needed to address the particulars of the use case being developed. This reduces much of the upfront cost of working with big data.

- **Ability to support a large range of use cases**: data lakes provide vastly greater flexibility for developing new data-driven applications, especially for innovative and exploratory analyses that rely on access to data in its original form. Since transformations into a rigid schema invariably lose connections within the data, exploratory analysis often hits the brick wall of not having retained the information needed for a specific question; the data lake approach sidesteps this issue by storing all the raw data in its original form, making it easier to answer questions based on historical data.
- **Reduced storage costs**: in data lakes, storage is decoupled from compute and can rely on inexpensive object storage in the cloud or on-premises. Since compute resources are dramatically more costly than storage, data lakes can significantly reduce infrastructure costs when working with large amounts of data, compared to storing the same data in a database. This comes in addition to the savings from removing the need for data transformation and wrangling upon ingest.

Challenges and entry barriers

Nevertheless, data lakes are not a silver bullet; while they reduce the complexity of storing data, they introduce new challenges around managing, accessing and analyzing it, often becoming massive engineering undertakings without clear results for the business. Typical stumbling blocks include:

- **Governance**: data lakes are at constant risk of becoming data swamps as new data is ‘dumped’ into them without a clear process for cataloging, managing and visualizing existing data assets. This creates difficulty in extracting relevant datasets and can pose governance and security challenges around access permissions.

- **Performance and latency**: unstructured data must be structured before it can be queried, resulting in lengthy and code-intensive ETL jobs. Batch processing can produce partial, inaccurate results (limited to the data in the batch), while stream processing introduces its own set of technical challenges. Additional latencies and performance issues arise because query engines must process millions of small files to return results; optimizing the storage layer through compression, compaction and columnar file formats helps, but is itself a complex process that requires specialized knowledge.

- **Complexity and time to value**: addressing the previous challenges poses a major engineering challenge.
DevOps and data engineering teams need to invest massive amounts of time and resources in hardware and software. Managed cloud services can address some of the physical infrastructure (at a cost), but the software side still requires a broad set of open-source and licensed tools, which vary with the particular data lake implementation, as well as significant customization and manual coding in Python and Scala to manage, process and structure streaming data. Skilled big data engineers are needed to maintain this infrastructure - and demand for these professionals greatly exceeds supply, making them difficult to hire and retain.

Reference architecture for cloud data lakes (without Upsolver)

As we can see, this type of architecture relies on many moving parts and a great deal of data engineering. For the data lake to provide value, every piece of the puzzle must be properly optimized and working as expected - which requires hundreds of big data engineering hours. The end result: projects that drag on for months and years with no definite endpoint, exceeding budgets and failing to deliver the promised business results - leading decision makers to conclude that the streaming data initiative has failed, abandoning it altogether and potentially stifling innovation.

Upsolver: Reducing Friction and Complexity With a Streaming Data Platform

Design Principles

In the previous chapter we highlighted the problem with streaming data: traditional, data warehouse-based approaches are inadequately suited for storing streams, while data lakes create significant complexity and technical overhead when data needs to be effectively analyzed. Upsolver modernizes data lakes by applying the design principles of SaaS and full-stack analytics platforms to streaming data integration, processing and analysis. This means giving data consumers an end-to-end platform for transforming streaming data into usable data, with minimal IT or data engineering overhead.
**A visual self-service solution for governing data and building ETL flows**: typical data lake architectures rely on code-intensive frameworks for schema discovery, metadata management and data modeling; Upsolver provides a WYSIWYG user interface for visual data preparation and management.

**A stateful ETL engine for streaming use cases**: Upsolver uses homegrown in-memory processing and compression algorithms to deliver performance that far exceeds open-source stream processing frameworks such as Apache Spark, out of the box and without the need for months of manual performance tuning.

**Automating data engineering best practices**: the need to transform thousands of unstructured small files into structured tables typically requires months of coding to build ETL flows and data pipelines according to a convoluted list of best practices; Upsolver handles this process behind the scenes, almost entirely invisibly to the end user, and automatically transforms raw data streams into a high-performance data lake.

Reference Architecture

In the next sections we will dive deeper into the Upsolver data lake architecture, and explain how it addresses current challenges in the data lake world and enables data consumers to rapidly unlock the value of their big data streams.

The Platform: Core Components

1. Data Management and Governance

Data lakes are inherently messy, as they ingest data as close as possible to its raw form. However, optimizing raw data storage for analytical performance can have tremendous impact on the ability to utilize the data further down the road, while schema discovery and metadata management are crucial for governing and controlling your data lake and preventing it from ‘swampifying’.

Storage

In order to work with large quantities of data, efficient data loading is essential.
Data loaded into the data lake should be standardized and compressed, so that data consumers can work with it without needing an intimate understanding of the underlying storage layer. Upsolver automatically optimizes the storage layer, using a hierarchical folder structure with a lexicographic date part to facilitate data discovery. Files created are either Avro or Parquet, and rely on either Snappy or Gzip compression to provide optimal performance when accessed by common query engines such as Athena, Presto or Hive. Let’s look at some of the details behind each process.

File Formats

Avro and Parquet are complementary formats: Avro is a self-describing hierarchical serialization format, and Parquet is a columnar file format that uses Avro as the in-memory representation. Data can easily be converted between the two formats, and Avro is a very flexible and fast format. Upsolver ingests data in Avro while creating columnar Parquet files to facilitate analytical querying. Some limitations exist in the Avro and Parquet specifications - for example, field names must be alphanumeric. Since real data can have any format, and losing information about the original data is undesirable, Upsolver uses an encoding within the Avro/Parquet format to extend it to all field names.

Compression

Upsolver gives users the option to choose between Snappy and Gzip for all data storage. Snappy is a fast, standard compression algorithm that gives reasonable compression rates. Gzip results in smaller files and is also standard, but has a much higher CPU overhead, which can be undesirable when processing many terabytes of data. Compression is one of the most important tools in reducing the size and cost of a data lake; data should never be stored uncompressed!

Schema Discovery and Metadata Management

Metadata management is required because the data lake is opaque without it.
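Before turning to metadata, the compression trade-off above can be illustrated with the standard library's gzip module (a minimal sketch: real deployments would compress Avro/Parquet files, and Snappy requires a third-party binding):

```python
import gzip
import json

# A repetitive JSON-lines payload, similar in spirit to machine-generated logs.
rows = [{"ts": i, "level": "INFO", "msg": "heartbeat ok"} for i in range(5000)]
raw = "\n".join(json.dumps(r) for r in rows).encode("utf-8")

compressed = gzip.compress(raw)
print(f"raw={len(raw)} bytes, gzip={len(compressed)} bytes")

# Compression is lossless, so nothing is given up except CPU time.
assert gzip.decompress(compressed) == raw
assert len(compressed) < len(raw) // 5  # repetitive event data shrinks a lot
```

On highly repetitive machine data like this, the compressed file is a small fraction of the original, which is why storing event data uncompressed is almost never justified.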
Unlike databases with built-in querying and management tools, a data lake without metadata requires expensive and time-consuming queries with external tools just to figure out what it contains. Without a central repository of such information, users will be stuck repeating these actions each time they want to inspect their data. An often-repeated pattern is a user creating an ETL flow based on where data should be and how they expect it to look, only to discover after a costly ETL process that it isn’t what they expected. We call this “blind ETL”, and it is incredibly wasteful. A user transforming data should have full visibility into the actual data at their fingertips, so that they can make informed decisions and correct data issues before running an expensive batch job. In Upsolver, rich metadata on all fields in the data is automatically extracted when data is loaded, and stored in an Upsolver Materialized View (more on that later). This allows consumers of the data lake to operate within a framework that gives visibility into the data at every step, preventing costly mistakes in ETL definitions and helping with data governance.

2. Stateful Stream Processing, ETL and Data Transformation

Why managing state is a challenge in data lake ETLs

Data lakes are highly valuable for enabling agile data science, as storing the data in its original form gives us the ability to reproduce an accurate picture of a historical state of affairs. However, unstructured object storage is not built for analytic querying: unstructured streams are impossible to analyze, so ETLs (often many of them) are required to structure the data so that it can be queried. Application state is what makes the ETL challenge much harder with data lakes. Data applications usually require an aggregate view of the data over time (state).
For example: predicting the next best offer for a user requires a dataset with historic user behaviour; tracking funnel KPIs like CTR requires a join between a stream of views and a stream of clicks. Unlike data lakes, relational databases are built for joining data, since they have indices. With data lakes, there are two options: limit the use case to recent data (streaming analytics systems like Spark Streaming store raw data in RAM for joins), or spin up an external NoSQL database (Redis and Cassandra are popular options) to manage state over a long period of time. The first option limits functionality and the second creates a huge overhead. In order to accomplish the vision of data lakes - agile analytics and data science over unstructured and semi-structured data - a powerful ETL engine is required.

**Stateful ETLs With Upsolver**

Upsolver uses a built-in indexing system called Materialized Views (see below) to manage state. Materialized Views store the results of a group-by query, running continuously on data streams, as a time-series. The relevant results can be fetched using a key and a snapshot time. For example: a data scientist can retrieve results from any snapshot in time and avoid the impact of data leakage in models. Materialized Views are persisted to object storage, like AWS S3 and HDFS, and cached in RAM for fast retrieval (under 1 millisecond). Materialized Views work at a low RAM footprint (roughly 10% in benchmarks compared to Cassandra), so it’s possible to load a lot of data into RAM to drive stateful use cases like machine learning. By utilizing object storage and avoiding local disk storage, Materialized Views dramatically reduce the cost and effort relative to using an external database, especially when it comes to scaling, replication and recovery. Lastly, Materialized Views can be built retroactively (replay / re-stream) using data already stored in object storage, saving precious time that would otherwise be spent waiting for data to accumulate.
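A minimal sketch of the snapshot idea (the class and method names here are ours, not Upsolver's API): a continuously updated group-by result is recorded per minute, so a consumer can ask what a key's value was at any past snapshot time.

```python
from collections import defaultdict

class MiniMaterializedView:
    """Toy 'count of events per key' view, snapshotted per minute."""

    def __init__(self):
        self.snapshots = {}              # minute -> {key: count}
        self.current = defaultdict(int)  # running group-by state

    def ingest(self, minute, key):
        self.current[key] += 1
        # Persist a copy of the running aggregate for this minute.
        self.snapshots[minute] = dict(self.current)

    def get(self, key, as_of_minute):
        # Fetch the value of `key` as it was at `as_of_minute`.
        past = [m for m in self.snapshots if m <= as_of_minute]
        if not past:
            return 0
        return self.snapshots[max(past)].get(key, 0)

mv = MiniMaterializedView()
for minute, country in [(1, "US"), (1, "DE"), (2, "US"), (3, "US")]:
    mv.ingest(minute, country)

print(mv.get("US", 1))  # 1
print(mv.get("US", 3))  # 3
```

Because historical snapshots are immutable, a model trained "as of" minute 1 cannot accidentally see events from minute 3 - which is the data-leakage protection described above.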
**Tools for workflow management, data transformation and movement**

Data in a data lake is only as useful as your ability to consume it, which inevitably requires ETL. For example, join is a very slow operation in a data lake (no native indexing), so joins are usually performed in the ETL stage rather than the query stage. The current go-to tool for big data transformation is Spark. Spark is an excellent tool that gives incredible flexibility in transformations and SQL support over the data lake, and has a vibrant ecosystem that is continually evolving. However, it is not without its drawbacks. First, Spark works best for ad-hoc work and large batch processes, whereas scheduling Spark jobs or managing them over a streaming data source requires external tools, and can be quite challenging. Second, while getting started with Spark for ad-hoc querying is very easy and does not require any special knowledge of big data, the expertise required to use Spark correctly in production systems is rare and expensive to obtain. Upsolver offers an alternative approach: a robust, built-in WYSIWYG data transformation tool that leverages Materialized Views to let users without big data expertise define complex transformations over streaming data. Scheduling and cluster/task management are handled implicitly, so users only define the actual transformations, without needing an in-depth understanding of the underlying operations or infrastructure.

**Support for nested data structures**

Discussions around data often assume that data is flat, as can be represented by a CSV file or a database table: there are many rows in the table, each with a single value for several typed columns. Unfortunately, real data is usually much more complicated than that.
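A made-up example of such nested, real-world data - and of the duplication that naive flattening introduces:

```python
# One nested order record, as it might arrive in a JSON stream (hypothetical).
order = {
    "order_id": 17,
    "customer": {"id": "c9", "country": "US"},
    "items": [
        {"sku": "A-1", "qty": 2},
        {"sku": "B-7", "qty": 1},
    ],
}

# Naive flattening repeats the parent fields once per array element,
# duplicating records and discarding the explicit parent/child structure.
flat = [
    {"order_id": order["order_id"],
     "customer_id": order["customer"]["id"],
     "sku": item["sku"],
     "qty": item["qty"]}
    for item in order["items"]
]
print(len(flat))  # 2 flat rows for 1 logical order
```

One order has become two rows, and facts about the order as a whole (how many items it had, its total quantity) now require re-grouping the flattened rows.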
A better data representation is what a JSON file can express: multiple records, each containing multiple named fields, which in turn can themselves be records, or arrays of values or records. This nested hierarchy helps represent meaningful connections between different types of data, but does not lend itself well to simple query languages. Traditional solutions would either flatten data in advance, losing connection information and creating duplicate records, or require code to be written to perform the transformations within the nested structure, often introducing bugs and performance issues. Upsolver aims to give users maximum flexibility with real data structures. For that reason, data enrichments in Upsolver always write to a location within the original data hierarchy. How the data is processed depends on the relationship between the input fields and the target field of the enrichment, as well as the function used.

3. Real-time and Ad-hoc Data Consumption

Finally, for a data lake to provide actual return on investment, it needs to address a real need within the business, such as improving operational processes or enabling better data-driven decision making. Upsolver operationalizes organizational data lakes by delivering clean, structured data to various outputs and enabling high-throughput, low-latency access to data for real-time use cases. A data project will usually have one of two distinct methods for consuming the data in its final state. The first is SQL, which is useful for analytics workloads and ad-hoc querying for research and reporting. The second is an operational database for real-time serving, powering user interfaces and internal systems. For the data lake to be successful as the central repository of all data, it needs to be able to fill both of these roles.

Ad-hoc Analytics using SQL

SQL is the de-facto data querying language.
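As a toy illustration (using Python's built-in sqlite3 in place of a data-lake query engine, with invented table and column names), this is the kind of ad-hoc aggregate an analyst might express in SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (page TEXT, event TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    ("/pricing", "view"), ("/pricing", "view"), ("/pricing", "signup"),
    ("/docs", "view"),
])

# Per-page conversion rate: signups divided by views.
rows = conn.execute("""
    SELECT page,
           1.0 * SUM(event = 'signup') / SUM(event = 'view') AS conversion
    FROM events
    GROUP BY page
    ORDER BY page
""").fetchall()
print(rows)  # [('/docs', 0.0), ('/pricing', 0.5)]
```

The same declarative query shape carries over to engines like Athena or Presto; the hard part in a data lake is not the SQL but laying out the data so queries like this scan as little as possible.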
Almost all consumers of data, from data analysts to data scientists, are happy to use SQL where possible, since it is expressive and easy to use. Common use cases for SQL-based querying include:

- **Business intelligence and analytics**: for example, a data analyst in a large web company wants to create a daily report containing the conversion rate of each of the organization’s web pages.

- **Data science**: for example, a data scientist in an online retail company wants to find products to offer to users based on products they have already viewed or purchased. They will want to find all the products the user viewed, and find similar users’ purchases.

**Integration with SQL Engines (Athena, Presto, Impala)**

There are quite a few SQL-over-big-data systems, including Presto, Apache Impala, Apache Drill and Amazon Athena (to name a few). What all these systems have in common is that they do not manage the data itself, but only provide a SQL layer over existing data. For them to work well, the data needs to be stored in a way that keeps the amount of data scanned per query to a minimum. These “best practices” of data storage are quite challenging for the uninitiated to implement. This often causes teams to reach the mistaken conclusion that the query engine itself isn’t performant enough, instead of realizing that the data storage format is the problem. Upsolver has a deep integration with SQL engines, which allows a user to treat their data lake as they would a database. In the same way that databases have opaque storage structures that “just work”, Upsolver abstracts the storage implementation details away from the end user. This means you get the best performance possible for your data from the get-go. Many data use cases only require aggregated data, or a small subset of the data.
Since data lakes store raw data, users are often put off by the idea of scanning all their data when the actually interesting data set is a tiny fraction of it. Upsolver helps with this use case by creating pre-aggregated and filtered outputs of the data lake. This lets users have multiple data tables in their SQL engine, each looking at a different aspect of their data. Querying is much easier, faster and less expensive when done only on the relevant subset of the data.

**Enabling Real-time Applications**

The second data lake use case is one where queries over the data lake are run in order to make real-time decisions, or to display many data points to a user in an interactive interface. These queries require very low latencies, and will often use an operational database like Cassandra to fulfil the performance requirements. Integrating such a database with data from a data lake can be quite challenging, which is why Upsolver encapsulates a native operational database within the data lake itself. Users can use Upsolver Materialized Views to create pre-defined aggregations over their data lake, which can be served in real time for operational use cases. A few examples of operational workloads:

- **Real-time reporting**: a website wants interactive elements, such as showing how many other active users are currently viewing the same page from the same country.

- **Anomaly detection**: an industrial monitoring system measures many metrics, such as noise levels and air content, at points around a machine. This data is compared to historical values in real time in order to alert on anomalous readings, allowing technicians to shut down the machine before any permanent damage occurs.

**Materialized Views**

Materialized Views in Upsolver are key-value stores that are defined as aggregations over a stream of data.
For example, "The number of distinct user ids per country in the last month" or "The first time and the last time we've seen each user id each calendar month in the last year" are both definitions of materialized views. In the first case, querying by country will return the number of distinct users, and in the second case querying by user id will return the first and last times we saw that user in each calendar month.

Materialized Views live on top of a stream, and all the data files are stored in cloud storage. The data contained in them changes every minute, adding the latest minute and removing the minute at the beginning of the time window. They are built within an Upsolver Compute Cluster, and served from Upsolver Query Clusters, which load the data files from cloud storage directly. Data files are created every minute, and merged files are created at increasing time intervals to support access to large time windows with logarithmic complexity.

Materialized Views are stored as sorted immutable hash tables, with the data stored using a hybrid row/columnar format. Many databases use columnar formats in order to greatly improve compression and query times. Columnar formats also support run-length encoding, which is a very efficient way to store sparse values. Since Materialized Views are used for random-access "get by key" requests, storing data in a pure columnar format introduces quite a lot of overhead: the value from each column would need to be found individually, and lots of random memory access slows things down considerably. Instead, Upsolver uses columnar compression in a row-based format. In order to achieve the same compression as a columnar format would using run-length encoding, Arithmetic Coding is used as the entropy encoder.
Arithmetic coding trades some decoding speed for near-optimal compression, but our implementation of the arithmetic decoder performs 16 million symbol decodes per second per CPU (on AWS m5 instances), which keeps decoding from becoming a performance bottleneck.

**Use Case Examples and Reference Architectures**

**Sisense Drives New Analytical Insights from S3**

Sisense is one of the leading software providers in the highly competitive business intelligence and analytics space, and has been named a Visionary by Gartner for the past two years. Headquartered in New York and serving a global market, the company provides an end-to-end BI platform that enables users to reveal business insights from complex data. Sisense counts Nasdaq, Airbus, Philips and the Salvation Army among its customers. Seeking to expand the scope of its internal analytics, Sisense set out to build a data lake in the AWS cloud in order to more effectively store and analyze product usage data, and, following a recommendation from the AWS team, began looking into Upsolver's streaming data platform.

The Requirements

- Transform data streams into structured fact tables that could then be sent to Sisense's BI software
- Ability to quickly iterate and answer new business questions as they arise
- Self-service solution that would not require a dedicated data engineering team

The Solution

Business Benefits

- Ability to run new analytical queries on streaming data in Sisense
- Much-improved visibility into internal data
- Additional use cases around machine learning being rolled out

Engineering Benefits

- Agile infrastructure that enables new queries and tables to be generated on the fly
- Data lake project could be handled internally, without the need to devote a team of engineers
To read the full case study, visit: https://www.upsolver.com/case-studies/sisense-s3-data-lake

**Bigabid Builds a Real-time Architecture to Supercharge Mobile User Acquisition**

Bigabid is an innovative mobile marketing and real-time bidding company that empowers its partners to scale their mobile user acquisition efforts while lowering advertising costs through CPA and AI-based optimization. In order to maintain a high level of performance in the competitive app advertising market, Bigabid needed to introduce real-time user profiling to its algorithmic decision engine. This case study demonstrates how the company built a high-performance real-time architecture with minimal data engineering using Upsolver, S3 and Amazon Athena.

The Requirements

- Create a real-time aggregate view of users covering data from the last 180 days, and use it to deliver more accurate bidding decisions.
- Clean, enrich and join data from several streams: bids, impressions, clicks, and several third-party data providers.
- Support advanced features such as Count of Sessions per User, which would be difficult to achieve in a traditional database architecture.
- Improve data freshness from 8 hours to less than one minute.

The Solution

The Results

Working with fresher, more accurate data has played a major role in Bigabid's ability to continuously provide exceptional performance for its customers. The real-time architecture, powered by Upsolver, is now an essential component in Bigabid's platform.

To read the full case study, visit: https://www.upsolver.com/case-studies/bigabid-real-time-architecture

Next Steps

- Learn more about Upsolver or schedule a live demo: https://www.upsolver.com/
- Read more Upsolver case studies: https://www.upsolver.com/customers
- Start a fully functional 14-day free trial: https://app.upsolver.com/signup
A Comparative Study of Log-Structured Merge-Tree-Based Spatial Indexes for Big Data

Young-Seok Kim†
Samsung Advanced Institute of Technology (SAIT), Samsung Electronics Co., Ltd.
ys24.kim@samsung.com

Taewoo Kim, Michael J. Carey, Chen Li
Department of Computer Science, University of California, Irvine
{taewok2, mjcarey, chenli}@ics.uci.edu

Abstract—The proliferation of GPS-enabled mobile devices has generated geo-tagged data at an unprecedented rate over the past decade. Data-processing systems that aim to ingest, store, index, and analyze Big Data must deal with such geo-tagged data efficiently. In this paper, among representative, disk-resident spatial indexing methods that have been adopted by major SQL and NoSQL systems, we implement five variants of these methods in the form of Log-Structured Merge-tree-based (LSM) spatial indexes in order to evaluate their pros and cons for dynamic geo-tagged Big Data. We have implemented the alternatives, including LSM-based B-tree, R-tree, and inverted index variants, in Apache AsterixDB, an open source Big Data management system. This implementation enabled comparison in terms of real end-to-end performance, including logging and locking overheads, in a full-function, query-based system setting. Our evaluation includes both static and dynamic workloads, ranging from a “load once, query many” case to a case where continuous concurrent incremental inserts are mixed with concurrent queries. Based on the results, we discuss the pros and cons of the five index variants.

I. INTRODUCTION

During the past decade, diverse geo-tagged data such as texts, photos, and videos have been generated at an unprecedented rate from GPS-enabled devices. The trend will be further accelerated by the advent of the Internet-of-Things era, where literally everything could involve GPS-enabled sensors and generate geo-tagged data.
In addition, popular NoSQL systems such as [1], [2], [3], [4] have adopted Log-Structured Merge (LSM) Trees [5] as their storage structure in order to support such high-frequency data generation. LSM-trees amortize the cost of writes by batching updates in memory before writing them to disk, thus avoiding random writes.

There have been many studies in the spatial data processing area, from proposing new spatial index data structures to evaluating and analyzing those proposed indexes [6], [7]. Most of the studies, however, have dealt with “load once, query many”, i.e., static workloads. Also, many of the measures in those studies consider only I/O, without including CPU or index access costs, rather than an overall system’s end-to-end processing cost. Moreover, LSM-based spatial indexing methods have received little attention. A few recent studies [8], [9] have investigated LSM-based or insert-optimized spatial indexing methods with dynamic workloads that reflect continuous incremental inserts and concurrent queries. Given this context, now is an appropriate time to revisit spatial indexing methods. The contributions of this work are:

1) Among representative, disk-resident spatial indexing methods that have been adopted by major SQL and NoSQL systems (Table I), we implement five variants of these methods as LSM-based spatial indexes in Apache AsterixDB, enabling comparison of their real end-to-end performance in a full-function system setting.

2) Among the five variants, three are based on B-trees, one on R-trees, and one on inverted indexes, which are three of the most popular disk-resident indexes. With this setup, we can answer an interesting question: if built-in indexes like B-trees and inverted indexes are used to implement spatial indexing, can they be as efficient as or even superior to R-trees?

3) We focus on real-world geo-tagged point data in our evaluation and cover a broad spectrum of workloads, from a “load once, query many” case without incremental inserts to a case with continuous concurrent insertions and queries.

†This study was performed while this author was at UC Irvine.

<table>
<thead>
<tr>
<th>R-tree-based methods</th>
<th>Non-R-tree-based methods</th>
</tr>
</thead>
<tbody>
<tr>
<td>Oracle, IBM Informix Spatial DataBlade, PostgreSQL (which also supports GiST), MySQL, Couchbase</td>
<td>Oracle, IBM DB2, MS SQL Server, MongoDB, Apache Couchbase, Lucene</td>
</tr>
</tbody>
</table>

TABLE I: Spatial indexing methods supported by major SQL and NoSQL systems, where non-R-tree-based methods use a well-known scheme that maps two-dimensional point objects into one-dimensional sequence values and stores/retrieves them via a typical one-dimensional index structure, e.g., a B-tree.

The remainder of the paper is organized as follows. First, we briefly explain the basic idea of LSM-trees and review how AsterixDB transforms traditional indexes into LSM-tree indexes. Next, we summarize the details of the five spatial indexes, then present our evaluation strategy and results.

II. BACKGROUND

A. LSM-trees

An LSM-tree [5] is an ordered, persistent index structure that supports typical operations such as insert, delete, and search. It is optimized for frequent or high-volume updates. By first batching updates in memory, the LSM-tree amortizes the cost of an update by converting what would have been several disk seeks into some portion of a sequential I/O. Entries being inserted into an LSM-tree are initially placed into a component of the index that resides in main memory – an in-memory component. When the space occupancy of the in-memory component exceeds a specified threshold, entries are sequentially flushed to disk – a disk component. As the number of disk components increases, disk components are periodically merged together, subject to a merge policy that decides when and what to merge.

B. Storage management in Apache AsterixDB

Apache AsterixDB [10], [11] is a parallel, semi-structured information management platform that provides the ability to ingest, store, index, query, and analyze mass quantities of data.
Its storage layer includes a framework for converting a class of indexes (including conventional B-trees, R-trees, and inverted indexes) with basic operations such as insert, delete, and bulkload into LSM-based secondary indexes. The framework serves as a coordinating wrapper that orchestrates the creation and destruction of LSM components and delegates operations to the appropriate components as needed. See [9] for more details of the framework and [12] for details of the LSM indexes in AsterixDB, such as LSM B-trees, LSM R-trees, and LSM inverted indexes, with transactional support.

III. FIVE LSM SPATIAL INDEXES

This section describes the five indexes, which are named the R-tree, DHB-tree, DHVB-tree, SHB-tree, and SIF indexes. The R-tree is the LSM R-tree index. The DHB-tree, DHVB-tree, and SHB-tree are each based on an underlying LSM B-tree index. SIF is based on an LSM inverted index.

A. R-tree

An LSM R-tree consists of an in-memory component and zero or more disk components. An in-memory component of an LSM R-tree consists of a traditional R-tree along with a deleted-key B-tree that captures deleted entries. A disk component is a variant of an R-tree that orders indexed entries along a Hilbert curve when the tree is loaded. The deleted-key B-tree is useful when merge-based reconciliation is needed; in contrast, merge-based reconciliation is directly available for LSM B-trees, since they maintain entries in a totally ordered manner [9].

An insert operation inserts an entry \( e = (sk, pk) \) into the in-memory component via traditional R-tree insertion logic, with \( sk \) and \( pk \) representing a secondary key and its corresponding primary key, respectively. A delete operation deletes an entry \( e = (sk, pk) \) from the in-memory R-tree if \( e \) exists. It also inserts \( e \) into the associated deleted-key B-tree; this entry serves as a sentinel, preventing entries of \( e \) in an older component’s R-tree from being returned.
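The LSM lifecycle described in Section II-A, together with the deleted-key mechanism just described, can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (a dictionary standing in for the in-memory tree, a list of sorted runs standing in for on-disk components), not AsterixDB's actual implementation:

```python
# Minimal LSM sketch: an in-memory component absorbs inserts and deletes;
# a flush writes a sorted, immutable component where a deletion survives as
# an "anti-matter" entry (value None) that masks older components.
class LSMIndex:
    def __init__(self, flush_threshold=4):
        self.mem = {}                  # key -> value (None marks a delete)
        self.disk = []                 # sorted immutable components, oldest first
        self.flush_threshold = flush_threshold

    def insert(self, key, value):
        self.mem[key] = value
        if len(self.mem) >= self.flush_threshold:
            self.flush()

    def delete(self, key):
        self.mem[key] = None           # plays the role of a deleted-key entry
        if len(self.mem) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Sort once, write sequentially; None entries become anti-matter.
        self.disk.append(sorted(self.mem.items()))
        self.mem = {}

    def search(self, key):
        if key in self.mem:
            return self.mem[key]       # the newest component wins
        for component in reversed(self.disk):
            for k, v in component:
                if k == key:
                    return v           # v is None if key was deleted
        return None
```

A real LSM index would also merge disk components according to a merge policy and use binary search within each sorted component; both are omitted here for brevity.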
A flush operation creates a new disk component as follows: entries in the in-memory component are sorted using a Hilbert curve, as are entries in the associated deleted-key B-tree. During both sorts, the entries are compared relatively based on the Hilbert curve. (Relative comparisons are explained in Section III-B.) A new disk component is then created by merging and bulk-loading the two sets of sorted entries. During this process, an entry \( e \) from the deleted-key B-tree becomes an anti-matter entry \( e' \) in the new disk component if an identical entry \( e \) does not appear in the sorted R-tree entry list. Otherwise, \( e \) from the deleted-key B-tree is ignored.

A search operation creates a range-scan cursor consisting of a heap of sub-cursors on the disk components and gets their reconciled entries; those entries must be checked against the in-memory component’s deleted-key B-tree, and entries surviving the check are returned by the range cursor at the end. A merge operation gets reconciled entries like the search operation, but without checking the in-memory component’s deleted-key B-tree, since a merge operation only merges disk components. [12] offers further details.

B. DHB-tree and DHVB-tree

Details of indexing and querying 2D spatial point objects using a B-tree with space-filling curves (such as Hilbert and Z curves) are described in previous studies [13], [14], [15], [16]. We use the Hilbert curve since it is considered to be superior to other space-filling curves [14]. The DHB-tree and DHVB-tree were implemented using the main ideas from [16], which details how to index 2D spatial points using a B-tree with space-filling curves and how to support spatial range queries. In general, both our DHB-tree and DHVB-tree indexes store point objects and support range queries. However, there is a key difference in terms of how they compare points.
One approach is to compute Hilbert sequence numbers for two given points and then compare the resulting numbers. An alternative is to compare the points relatively, i.e., without computing full sequence numbers. Relative comparisons can be faster than absolute sequence-number comparisons if the two points lie far away from each other. The DHB-tree uses the relative-comparison method, so stored entries in the DHB-tree index have coordinate and primary key fields but no stored sequence number. Note that this means that whenever a comparison is needed, the relative-comparison steps are repeated. The DHVB-tree adopts the absolute-comparison method, so its index entries have the coordinates, the primary key fields, and also a Hilbert sequence number. The tradeoffs between these two methods will be examined in the experiments.

Fig. 1: A 3-level 2×2 grid hierarchy and cell numbers at each level

C. SHB-tree

In the SHB-tree indexing method, a two-dimensional space is statically decomposed into a \( k \)-level \( 2^n \times 2^n \) grid hierarchy, where \( k \) and \( n \) are numbers chosen when the index is created. The top level (level 0) has \( 2^n \times 2^n \) cells, and each successive level further decomposes each cell of the previous level into \( 2^n \times 2^n \) sub-cells. Also, the top-level cells are numbered in a linear fashion using a Hilbert curve, and so are the sub-cells belonging to a parent cell at the previous level (by prepending the ancestor cell numbers). Figure 1 depicts a 3-level 2×2 grid hierarchy and its cell numbers at each level. Since the top level has four cells and each cell has four sub-cells, the sequence numbers are from the first-order Hilbert curve. To store a spatial object in the SHB-tree, the MBR of the object is decomposed into a set of cells, each of which either overlaps with the MBR or is completely covered by the MBR, according to the following covering rule.
If the MBR completely covers some cell at a level, that cell is said to be covered by the MBR. A covered cell is included in the result set and not decomposed at lower levels. The rest of the overlapping cells, if any, are decomposed further at the next level. This rule applies at all levels of the grid hierarchy, except that overlapping cells at the lowest level are included in the result set without further decomposition. Note that a point object will always be mapped into a cell at the lowest level. Once the set of cells is computed for an object being inserted, each cell number (as a key) is inserted into the underlying B-tree, where a cell number also includes the level number of the cell at the end. Each SHB-tree index entry thus consists of a \( \langle \text{cell number}, \text{primary key} \rangle \) pair.

When a query region is given, a set of cell numbers for the MBR of the query region is computed and used to search for matching cell numbers of indexed objects. Searching the index for the set of MBR cell numbers can be optimized, when there are consecutive cell numbers, by forming a range search over the consecutive numbers. This optimization can effectively reduce the number of index searches. False positives caused by (1) the MBR of the query region, (2) the MBRs of the stored spatial objects, and/or (3) the overlapping, non-covered cells at the lowest level are all discarded during the post-processing step. Cause (1) is not relevant to a rectangular region query, and cause (2) is not relevant to point objects. In general, a finer granularity of cells at the lowest level can reduce the number of false positives due to cause (3), but it may also increase the number of index searches, since more cells must be read for a given query region’s MBR. This grid-based approach is particularly interesting since it is used for spatial indexing in MS SQL Server [17].
The Linear Quadtree [18] in Oracle Spatial [19] works in a similar way, except that its number of sub-cells per cell is fixed (at four) and its cells are numbered in a quadtree-based manner.

D. SIF

SIF provides a way to support spatial indexing based on inverted indexing. The main idea is similar to the SHB-tree except that SIF uses an inverted index. A spatial object is mapped to a set of cell numbers, which are stored in an inverted index as a set of tokens. Similarly, a region query goes through a mapping step to obtain a set of cell numbers, which are then searched for just as a set of keyword tokens would be in an inverted index. In general, an inverted index supports exact-match searches for a given string token, where a set of string tokens can be generated from a given query string. Some inverted indexes provide richer search types, such as prefix or range searches, depending on the underlying data structure (such as a trie or a B-tree) used to implement the index. We used the LSM inverted index in AsterixDB as a black box to implement SIF; as a result, our SIF index only supports exact matches. SIF thus has more overhead than the SHB-tree due to its lack of range search support. As described earlier, a range search in the SHB-tree can reduce the number of searches when contiguous cell numbers result from a given query region. Nonetheless, a related optimization is available for SIF to reduce the number of searches over the underlying LSM inverted index: a set of child cell numbers can be replaced with their common parent cell number if these child cells completely cover the parent cell. This optimization requires a complete spatial object to be captured at all levels. However, it comes with an increased index size due to storing a point object at each of the levels in the grid hierarchy. We will see that the increased index size can degrade this indexing method’s performance.
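The hierarchical cell numbering shared by the SHB-tree and SIF can be illustrated with a short sketch. For brevity, this hypothetical example numbers cells in row-major order rather than along a Hilbert curve as the paper does, so the actual numbers differ, but the idea of prepending ancestor cell numbers at each level is the same:

```python
# Map a point in the unit square to its cell number at each level of a
# k-level (2**n x 2**n) grid hierarchy. Row-major cell numbering is used
# here for simplicity instead of the paper's Hilbert-curve numbering.
def cell_path(x, y, n=1, k=3):
    """Return one cell number per level; each prepends its ancestors' numbers."""
    side = 2 ** n                    # sub-cells per axis at each level
    path, prefix = [], 0
    for _ in range(k):
        x, y = x * side, y * side
        cx, cy = int(x), int(y)      # which sub-cell the point falls into
        prefix = prefix * side * side + cy * side + cx
        path.append(prefix)
        x, y = x - cx, y - cy        # recurse into that sub-cell
    return path

# A point in the upper-right region of a 3-level 2x2 hierarchy:
print(cell_path(0.8, 0.8))  # -> [3, 15, 60]
```

For a point object, only the lowest-level cell number is stored by the SHB-tree, while the SIF optimization discussed above stores the number at every level.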
<table>
<thead>
<tr>
<th>Index</th>
<th>Fields of an index entry (size in bytes)</th>
</tr>
</thead>
<tbody>
<tr><td>DHB-tree</td><td>point (16), pk (8)</td></tr>
<tr><td>DHVB-tree</td><td>Hilbert sequence number (8), point (16), pk (8)</td></tr>
<tr><td>R-tree</td><td>point (16), pk (8)</td></tr>
<tr><td>SHB-tree</td><td>cell number (7), point (16), pk (8)</td></tr>
<tr><td>SIF</td><td>cell number (7), pk (8), point (16)</td></tr>
</tbody>
</table>

TABLE II: Each index entry’s fields with their sizes in bytes

IV. EXPERIMENTAL SETUP

See [12] for more on our experimental setup. We implemented the DHVB-tree with 64-bit Hilbert sequence numbers, and the same resolution was given to the DHB-tree. We configured both the SHB-tree and SIF index to have a 6-level $2^4 \times 2^4$ grid hierarchy. A cell’s size at the bottom level is around 2 meters $\times$ 1 meter ($2m \times 1m$), with 2m for the $x$-axis and 1m for the $y$-axis. Table II shows the details of each index entry’s fields and their sizes at the logical level. Each index’s entries also include the original point in order to support index-only scans when a query includes predicates that can be evaluated using just the fields in the secondary index. See [12] for more on how index-only-scan queries and non-index-only-scan queries are processed in AsterixDB.

A. Spatial Dataset

We obtained a set of real-world GPS point data from OpenStreetMap [20], which includes more than 2.7 billion points. Because the obtained data includes only the points themselves, represented as a latitude and a longitude, we generated synthetic tweets using the obtained point data for this evaluation. We uniformly sampled 1.6 billion points out of the 2.7 billion points and augmented them with tweet data. These 1.6 billion tweet records amount to 1.6 TB.

B. Workloads

We used two classes of workloads: one is static and the other is dynamic.
The first has no insertions after data is loaded; it is called the Static workload. In contrast, the dynamic class involves continuous data arrivals; one variant of this class, Dynamic workload 1, is ingestion-only, while another variant has concurrent queries as well and is called Dynamic workload 2. For the queries, we examine both select and join queries, both of which use a secondary spatial index as their access path. These queries fall into two categories: one is an index-only-scan query that accesses only a secondary spatial index, and the other is a non-index-only-scan query that searches a secondary spatial index first and then accesses the primary index due to a predicate not covered by the secondary index. See [12] for further workload details.

V. EXPERIMENTAL RESULTS

Due to space limitations, we present only a subset of our full results here. See [12] for additional results.

A. Static workload results

Table III shows the primary index (“pidx”) size after loading 1.6 billion records as well as the elapsed time for data loading. It also shows the elapsed times for creating each spatial secondary index on the sender location field of the 1.6 billion tweet records and the corresponding index sizes. The results for the index-only-scan select queries are shown in Figure 2(a). This figure shows, for each spatial index, its query response time as a percentage of the query response time of the LSM R-tree.

B. Dynamic workload 2 results

Dynamic workload 2 tests each index’s ability to ingest and query concurrently while the data arrival rate varies.

C. Summary of the results

Here we summarize the main lessons from the experiments. First, without concurrent data arrivals (static workload), for small-circle range queries, the SHB-tree outperformed the R-tree in terms of query response time due to the index-range-scan mechanism enabled by the underlying B-tree.
For larger-circle range queries, however, this effect was washed out by more false positives and more searches caused by having more cells. Second, with ingestion but no concurrent querying (dynamic workload 1), the DHVB-tree and SHB-tree outperformed the R-tree in terms of IPS (inserts per second) [12]. Third, with concurrent data ingestion and querying (dynamic workload 2), the DHVB-tree outperformed the R-tree in terms of QPS (queries per second) at slower data arrival rates; there was no IPS difference among the indexes except for SIF. For higher incoming data rates, the DHVB-tree and SHB-tree outperformed the R-tree in terms of IPS, but the R-tree outperformed them in terms of QPS. In addition, the DHB-tree’s insert performance was worse than the DHVB-tree’s due to the overhead of its relative comparisons. SIF, whose underlying data structure is an inverted index, performed the worst overall, both for ingestion performance (IPS) and query performance (QPS), due to its lack of range and prefix search support.

Except for SIF, there was neither a clear winner nor a clear loser considering both insert and query performance; query differences were mostly modest in our real end-to-end system setting. This was especially true for large-circle non-index-only-scan queries with many results, where final primary key lookups were costly. If we had to pick one winner from an end user’s perspective, we would choose the R-tree index, since it performed well and needs no tuning such as picking a proper number of grid levels and a number of cells per grid; such tuning was required for the SHB-tree and SIF to find a sweet spot trading off false positives against additional searches.

Acknowledgements

The AsterixDB project has been supported by an initial UC Discovery grant, by NSF IIS awards 0910989, 0910859, 0910820, and 0844574, and by NSF CNS awards 1305430 and 1059436. It has also enjoyed support from Amazon, eBay, Facebook, Google, HTC, Microsoft, Oracle Labs, Infosys, and Yahoo! Research.
We thank Pierre-Jean Rollondo for asking a seemingly innocent question about SQL Server spatial indexing that inspired us to look more deeply at these issues. We also thank Odej Kao of TU Berlin for providing us access to a cluster there for the experiments.

REFERENCES
1 Description

In this programming project you and one partner will design and implement an Othello-playing program. If you are unfamiliar with the rules of Othello, there is a short guide in this document. There is also a Java implementation linked from the class page that can help you improve your Othello skills. We make two major changes to the common version of Othello: first, we use a 10 × 10 board rather than the traditional 8 × 8 board; second, we use a board from which the four corner squares have been removed.

A maximum time limit for the entire game and a maximum node expansion limit for each move will be placed on your program. Your program has to make a move as soon as the node limit has been exhausted. If your program runs out of time during the game, the game is a win for the other player. There won't be enough time/nodes for your program to search the entire search tree for the best move, so the program you create should search for the move it estimates is best to play by using alpha-beta search with an evaluation function. The preferred implementation language is C, but other languages can be used as well, as long as you can establish socket communication with the game server that we will provide.

The project is designed to be quite open-ended, and you are encouraged to have some fun exploring different methods of improving your player. We want to encourage some competition, but not make things so competitive that we take the fun out of it. As a result, everybody who produces a working program that plays by the rules will receive a score of at least 90. After everybody has turned in their programs, we will have a tournament. The remaining 10 points will depend on your performance in the tournament and your creativity in going beyond the bare minimum requirements.

2 Othello

The Othello game-board consists of an 8 × 8 grid with an initial setup of two red discs and two white discs centered on the grid.
In our version of the game the board will be set to 10 × 10, although you should follow good programming practices and design code that works for any size of board. (An exception to this rule would be any heuristic functions you use, which may be specialized for the particular parameters of this project.) The objective of the game is to have the majority of discs on the board be of your color at the end of the game.

Some terminology:

- A move consists of outflanking your opponent's disc(s), then flipping the outflanked disc(s) to your color.
- To outflank means to place a disc on the board, adjacent to one of your opponent's discs, so that your opponent's line (or lines) of disc(s) is bordered at each end by a disc of your color.
- A line is defined as one or more discs in a continuous straight line: a row, a column or a diagonal.

Now, the official rules:

- White always moves first.
- If on your turn you cannot outflank and flip at least one opposing disc, your move is forfeited and your opponent moves again. However, if a move is available to you, you may not forfeit your turn.
- A disc may outflank any number of discs in one or more lines in any number of directions at the same time: horizontally, vertically or diagonally.
- Disc(s) may only be outflanked as a direct result of a move and must fall in the direct line of the disc placed down.
- All discs outflanked in any one move must be flipped, even if it is to the player's advantage not to flip them all.
- When it is no longer possible for either player to move, the game is over. Discs are counted and the player with the majority of his or her color discs on the board is the winner.

3 Tasks to Do

Here is a list of tasks that you are expected to accomplish for this assignment:

- Review this document and the associated code. The code is in http://www.cs.duke.edu/education/courses/spring02/cps170/othello/.
- Make sure you can access and use the software supplied to you.
- Go over the sample client program and make sure you understand it. Write the rest of your code as a part of the client.
- Implement minimax search. For this part you should use the evaluation function that simply looks at the difference between the number of your pieces and your opponent's pieces, i.e.,

  \[ V(\text{state}) = (\# \text{my pieces}) - (\# \text{opponent's pieces}) \]

  You will also have to consider what sort of data structures you will need in order to create a game tree. Remember that in Othello it may be possible that one of the sides has no valid move. Be sure that your algorithm can cope with this situation.
- Construct an improved evaluation function and show that it at least beats the simple one described above.
- Make the search more efficient by implementing alpha-beta pruning. Make sure that you can turn alpha-beta pruning on and off as you wish.
- Try some improvements to your search control that go beyond basic alpha-beta pruning.
- Turn your Othello player in by March 7. Your program should be able to play any number of games without crashing.
- You should also include a brief writeup explaining any clever extensions you have devised.

4 A Few Suggestions

Minimax search and alpha-beta pruning have been covered in class, and the textbook contains all the information and pseudocode you need for this assignment. Here are a few comments and suggestions:

- There is a node expansion limit (currently, 50,000 nodes) for each move. Vanilla alpha-beta performs a simple depth-first search, but you should think about doing something more interesting than this.
- Each game has a time limit for each player. The current version of the server may be set for a 5 minute limit, but note: in the tournament, you will have only 3 minutes. Your player must finish within 3 minutes. The 3 minute time limit is, unfortunately, measured in real time by the server.
  (There's not a great workaround for this if we want to stick with the heterogeneous client/server architecture.) We will be conducting the tournament on unloaded Sun Ultra 10s. These should be at least as fast as the Suns to which you currently have access. If you don't have a terribly inefficient heuristic function, this should be plenty of time to expand nodes up to your node budget per move. One clever trick to reduce the number of node expansions is to reuse parts of search trees from earlier searches. Note, however, that this does add considerable overhead.
- The textbook mentions that when we perform alpha-beta pruning we would ideally like to expand the best successors first. In general, the order in which you expand nodes can have a tremendous impact on the effectiveness of your search. Think about methods for detecting when one node is a more promising candidate for expansion than another.
- You might also think about quiescence search, which is used to selectively extend the search tree. It doesn't have a direct analog in Othello, since pieces are exchanged on every move. However, a generalization of this concept might be useful.
- If you do some research, you will find a number of clever AI approaches to Othello players. You are free to incorporate these ideas into your player, but you must explain the methods you have used and give proper citations in your writeup.

5 Implementation Issues

In order to be able to have the different programs compete against each other in a tournament, and in order to decrease some of your workload, we provide some of the software for you in C. That includes the infrastructure specific to Othello, the communication services, and the server that manages the games. A graphical user interface in Tcl/Tk is currently under major renovation. While it is under development, you can use the ASCII interface provided with the C code.
The communication protocol to both interfaces is exactly the same, so if your program runs with the ASCII interface it will be able to run with the graphical one as well. The Othello infrastructure provides the main data structures and mechanisms for the game. The code also includes some useful procedures (e.g., checking the validity of a move) for you to use. Of course, you can always implement your own procedures. You are not allowed to change the communication protocol between the server and the client, the code of the GUI, or the code of the ASCII server. This is to make sure that your program will be fully compatible with any changes we choose to make to these modules and with the other players as well.

5.1 Code Directory

You will find two subdirectories in the code directory, Sun/ and i86/. Choose one of the two depending on the type of machine you are using. The source files are exactly the same in both directories; the difference is only in the executables. Note: You may prefer to develop your code on a PC, but please test and submit a final executable for Suns.

5.2 Othello Infrastructure

The Othello infrastructure is supplied in the file board.h. Mainly it includes a data structure that represents a board configuration and some procedures that can be used to manipulate it (e.g., performing a move). As a convention we refer to the first player as WHITE and to the second player as RED. In fact, these are constants and their values are set to 0 and 1 respectively.

5.2.1 The Board Data Structure

The board data structure is BoardPosition, which is defined as:

```c
typedef struct {
    SquareState squares[BOARD_SIZE][BOARD_SIZE]; /* access squares[row][col] */
    int pieces[2];                               /* 0=White, 1=Red */
} BoardPosition;
```

The field squares is a matrix of size BOARD_SIZE x BOARD_SIZE which represents the board.
BOARD_SIZE is currently set to 10, but the part of the code that plays the game should work even if we change the size (using the simple evaluation function that counts pieces). To access the square (row,col) you can simply use board.squares[row][col]. Note that according to the C conventions both row and col range between 0 and BOARD_SIZE-1, where (0,0) is the upper left corner of the board. Each square can have one of four values:

- 0=WHITE – Player 1 has a piece on this square.
- 1=RED – Player 2 has a piece on this square.
- 2=EMPTY – The square is empty.
- 3=ILLEGAL – This square is not a legal position (e.g., the four corners).

The other field in the structure is the array pieces, which indicates the number of pieces each side has. This is an integer array of size 2. Note that according to our conventions board.pieces[WHITE] gives the number of pieces that the first player has on the board, and board.pieces[RED] the number of pieces that the second player has on the board.

5.2.2 Some Useful Procedures

The file board.h declares some useful procedures that you may want to use. The procedures are documented in the file, so here we include only a brief description.

- bool CanMove (BoardPosition *board, Side side)
  This procedure returns TRUE iff side has at least one legal move in this position. Side is a type that can take the values WHITE and RED.
- bool IsLegalMove (BoardPosition *board, int row, int col, Side side)
  This procedure checks the legality of a move. It returns TRUE iff (row,col) is a legal move for side.
- BoardPosition *DoMove (BoardPosition *board, int row, int col, Side side, bool allocate_new_board)
  This procedure performs a move. It either changes the position on the given board or allocates a new board and performs the move there (this is determined by the variable allocate_new_board).
- BoardPosition *AllocateNewBoard (void)
  This procedure allocates a new board, initialized to the initial position.
  For those of you familiar with C++, note that this is equivalent to the C++ constructor.
- void InitializeBoard (BoardPosition *board)
  Changes board to the initial position. It is assumed that board was previously allocated.
- void CopyIntoBoard (BoardPosition *in_board, BoardPosition *out_board)
  This procedure copies the board position in out_board into in_board. out_board is not changed. It is assumed that in_board was previously allocated.
- void PrettyPrint (BoardPosition *board)
  Prints the board as an ASCII board. This is the printing procedure used by the ASCII server.
- void PrettyPrintWide (BoardPosition *board)
  Same as PrettyPrint but with a wider layout for better viewing.
- void ReadBoardFromFile (BoardPosition *board, char *file)
  This procedure initializes a board position according to a position written in a file. The file format is the same format that the PrettyPrint (or PrettyPrintWide) procedure outputs, so you can cut and paste a position into a file and test it further. The procedure that reads the file only cares about the characters '.', 'R' and 'W', so it will work even if you do not include the borders and numbers printed by PrettyPrint. Some samples of opening position files can be found in the subdirectory BoardPositions/.

5.3 Communication

In order to let two programs compete against each other we have implemented a simple server-client architecture. The server communicates with two clients, representing the two players, that may reside on the same or on a different machine. Each turn, the server will send to one of the clients the board position and the amount of time left. The client needs to decide on its move, and after that it needs to send the message to the server. All the communication procedures are implemented for you. Do not change these procedures. There is also a simple main program for the client (client.c) which uses either random moves or asks the user for moves. You can use this code to create your player.
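As a starting point, the simple piece-difference evaluation function from the task list can be written directly against the BoardPosition conventions above. The sketch below is standalone: it re-declares minimal stand-ins for the board.h types rather than including the real header, so in your player you should include board.h instead.

```c
#include <assert.h>

/* Minimal stand-ins for the board.h declarations; the real project
   code should include board.h instead of re-declaring these. */
#define BOARD_SIZE 10
typedef enum { WHITE = 0, RED = 1, EMPTY = 2, ILLEGAL = 3 } SquareState;
typedef int Side; /* takes the values WHITE or RED, per the board.h convention */

typedef struct {
    SquareState squares[BOARD_SIZE][BOARD_SIZE]; /* access squares[row][col] */
    int pieces[2];                               /* 0=White, 1=Red */
} BoardPosition;

/* The simple evaluation function from section 3:
   V(state) = (# my pieces) - (# opponent's pieces).
   Reading the pieces[] counts avoids rescanning the whole board. */
int SimpleEval(const BoardPosition *board, Side side) {
    return board->pieces[side] - board->pieces[1 - side];
}
```

Because BoardPosition already maintains the piece counts, this evaluation is O(1) per node; scanning the squares matrix would give the same answer at BOARD_SIZE × BOARD_SIZE cost.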
5.3.1 The ASCII Server

In order to run the ASCII server you need to issue the command: server <games> [port] [time] [init-position].

- games is the number of games you want the server to accommodate (these games are played sequentially). This is the only required argument.
- port is the port the server will use to listen for the clients (the default for this argument is 5555).
- time is the time allocated for each player per game, in seconds. (Ignore the default value. In tournament play you will have only 3 minutes, i.e., 180.0 seconds.)
- init-position is the name of a file that contains an opening position you want the server to use. The default is no file, in which case the server uses the standard opening position. You will find this option useful when you test your alpha-beta pruning on some test cases. The file format is the one used by the function ReadBoardFromFile.

(Note: If you want to specify time, the third argument, you need to specify the second argument, port. Similarly, if you want to specify init-position, the fourth argument, you need to specify the second argument, port, and the third argument, time.)

The server will wait until two clients join it. Once this happens it will manage the game. At the end of the game it will either exit or coordinate another game between the same two clients (depending on games). The server is designed to close all the communication ports even if the clients crash. However, if you exit the server using Ctrl-C when one of the clients is logged in, the communication will not shut down cleanly. The next time you try to run the server, you may get the message: bind: Address already in use. In general, you should avoid stopping the server in this manner. If you want to stop a run in the middle, you should kill one of the clients; the server will then kill the other client, close the ports and exit. In case the problem occurs, you have two ways to recover.
The first is to simply wait a few minutes until the operating system discovers the problem and closes the ports. The second is to use a different port. You can use any number between 5500 and 5599 as an alternative port.

5.3.2 The Client

The client is designed to connect to the server and take the part of one player. The server decides how many games the client will play in a session; the client must be able to play more than one game (this is very easy). In order to run the client issue the command: client <white|red> [server-port] [server-host] [nodelimit].

- white|red is the color of the player. This is the only required argument.
- server-port is the port the server is listening on (the default for this argument is 5555).
- server-host is the name of the host machine the server is running on, for example tetra.cs.duke.edu or arch30 (the default for this argument is localhost, i.e., the same machine as the client).
- nodelimit is the maximum number of node expansions allowed for each move (the default for this argument is 50000).

(Note: If you want to specify some argument you have to specify all the previous arguments first.)

The easiest way to understand how the client should look is to look at its code. The piece of code below is based on the sample client (client.c) which is given to you.
```c
int main (int argc, char *argv[])
{
    MsgFromServer msg;
    int row, col, socket;
    Side side;
    unsigned short port;
    char host[100];

    side = GetSide(argc, argv);  /* Decide whether the player is WHITE or RED */
    port = GetPort(argc, argv);
    GetHost(argc, argv, host);
    printf("server port: %d\n", port);

    StartSession(&socket, side, port);
    do {
        StartGame(socket, &msg);
        while (msg.status != GAME_ENDED && msg.status != ABORT) {
            if (msg.status == CAN_NOT_MOVE) {
                /* I have no move to play; send (-1,-1); get my opponent's move */
                SendAndGetMove(socket, -1, -1, &msg);
            } else {
                GetMove(&msg.board, side, &row, &col);  /* Decide on move */
                /* Send my move; get opponent's move and new board configuration */
                SendAndGetMove(socket, row, col, &msg);
            }
        }
    } while (msg.status != ABORT);
    EndSession(socket);
    return 0;
}
```

The first procedure you need to call is StartSession. The parameter side is either WHITE or RED. The first parameter (socket) is a part of the communication protocol; it specifies the socket on which the communication will work. You do not need to worry about the details, but it may be useful for the training runs to actually open two sockets and play for both players. This way you can keep everything within one process.

We are now ready to start the game. We call the procedure StartGame. This procedure waits until the other player is ready and it is your turn to play. When this happens, the server will send the first board position message. This message is returned in the last parameter. The structure for this message is as follows:

```c
typedef struct {
    Status status;
    double time_left;    /* In seconds */
    int row, col;        /* Opponent's last move; (-1,-1) for no move */
    BoardPosition board;
} MsgFromServer;
```

Let us go over the fields in this message one by one. status contains one of four values:

- 0. GIVE_MOVE – It is your turn to play and you must come up with a legal move.
- 1. CAN_NOT_MOVE – It is your turn but you have no legal moves.
  The server sends you the position even if you do not have any possible moves, to let you know about your opponent's move in case you need the information.
- 2. GAME_ENDED – The game has ended.
- 3. ABORT – The set of games has ended. Either all the games were played, or the other player has crashed.

time_left is the time left for your side, in seconds (for the entire game). row and col describe the move of your opponent (in case your opponent did not move you will get the values (-1,-1)). Finally, board is the current board configuration.

After receiving the information from the server you must decide on your move. This is done in the procedure GetMove, which is the actual part you need to implement (using the game tree, alpha-beta pruning, and so on). Right now the procedure simply chooses a random move.

In order to send your move to the server you need to use the procedure SendAndGetMove. This procedure takes as parameters the move, which is sent as row and col. Recall that rows and columns are numbered from 0 to BOARD_SIZE-1. The structure for this message is as follows:

```c
typedef struct {
    int row, col;  /* The player's move; (-1,-1) for no move */
} MsgToServer;
```

In case you have no legal move you must send -1,-1 for row and col; otherwise it will count as a technical loss. Make sure that the moves you are sending to the server are valid. An invalid move will count as a technical loss. The SendAndGetMove procedure will wait until the server sends the move of the other player. Once the move is received the procedure ends, returning the opponent's move, the board position and the time left using the MsgFromServer structure. When the game ends the server will start another one, until all the games are played. At this point the server will send an ABORT message. You need to call EndSession to make sure the communication ends in the appropriate way.
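The search you plug into GetMove can be organized as depth-limited minimax with alpha-beta pruning. The sketch below shows only the control structure, over a hypothetical toy game tree (the Node type and Search name are illustrative, not part of the supplied code); in the real player the children would be generated with IsLegalMove/DoMove from board.h and the leaves scored by your evaluation function. The prune flag addresses the task-list requirement that pruning be switchable on and off.

```c
#include <assert.h>

#define INF 1000000

/* A toy two-child game tree standing in for the Othello game tree.
   Leaf scores are from the maximizing player's point of view. */
typedef struct Node {
    int is_leaf;
    int score;                    /* valid only at leaves */
    const struct Node *child[2];
} Node;

/* Depth-first minimax with optional alpha-beta pruning.
   With prune == 0 this degenerates to plain minimax, which lets you
   compare node counts with and without pruning. */
int Search(const Node *n, int alpha, int beta, int maximizing, int prune) {
    if (n->is_leaf)
        return n->score;
    int best = maximizing ? -INF : INF;
    for (int i = 0; i < 2; i++) {
        int v = Search(n->child[i], alpha, beta, !maximizing, prune);
        if (maximizing) {
            if (v > best) best = v;
            if (best > alpha) alpha = best;
        } else {
            if (v < best) best = v;
            if (best < beta) beta = best;
        }
        if (prune && alpha >= beta)
            break;                /* cutoff: this subtree cannot affect the root */
    }
    return best;
}
```

In the real player you would also track the best move (not just the score) at the root, decrement your node budget on every expansion, and handle the no-move case by searching the same position with the side to move flipped.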
5.3.3 The ASCII Socket Interface

This section describes the format of the ASCII strings that are exchanged between the client and the server over the socket connection during a game. The C programmers need not worry about the details of this section, but if you plan to use any other implementation language you will have to conform to these conventions. There are 3 types of messages between the client and the server.

- The first message sent by the client to the server declares its color. This is sent only once at the very beginning and is never repeated. It consists of a single ASCII character, "0" or "1", which stands for white or red respectively.
- The message from the client to the server that contains the player's move. This is sent from the client to the server every time it is the client's turn to move. It consists of a string of 9 ASCII characters with the format "RRR CCC ". The substring RRR holds the row-coordinate of the move and CCC holds the column-coordinate. Notice the space characters right after RRR and CCC. The last character can be a space or a "\0" termination character. For example, the move (3,7) becomes "  3   7 " or "003 007 ", while the move (-1,-1) becomes " -1  -1 ".
- The message from the server to the client that contains the status of the game, the remaining time, the opponent's move, and the board configuration. This consists of a long ASCII character string of 228 characters total. The general format is the following:

  "S TTTTT.TT RRR CCC WWW RRR B B B ... B"

  The first character is the status of the game (one of 0, 1, 2, or 3 as described in the previous section). TTTTT.TT is the time remaining represented in fixed-point format with five digits and two decimals. RRR and CCC encode the opponent's move with the same format as the player's move above. WWW and RRR are the numbers of white and red pieces on the board. Finally, the long sequence "B B B B B B ... B B B B B" encodes the board configuration.
Each B is 0, 1, 2, or 3 as described in the previous section. Squares are traversed row after row beginning with the upper left corner (similar to PrettyPrint). Notice the space characters right after each entity; the very last two characters are both spaces or a space and a "\0" termination character.

As an example, assume that there are 250.66 seconds remaining, it's the red player's turn, the opponent's move was (0,2), and that the state of the board is as follows:

```
     0 1 2 3 4 5 6 7 8 9
   +---------------------+
 0 | # . W R . . . . . # | 0
 1 | . . . W . . . . . . | 1
 2 | . W R W W . . . . . | 2
 3 | . R . . R W . . . . | 3
 4 | . . . . R W . . . . | 4
 5 | . . . . W R W . . . | 5
 6 | . . . . . . . . . . | 6
 7 | . . . . . . . . . . | 7
 8 | . . . . . . . . . . | 8
 9 | # . . . . . . . . # | 9
   +---------------------+
```

The server will send the following string to the red client:

```
0 00250.66 000 002 009 006
3 2 0 1 2 2 2 2 2 3
2 2 2 0 2 2 2 2 2 2
2 0 1 0 0 2 2 2 2 2
2 1 2 2 1 0 2 2 2 2
2 2 2 2 1 0 2 2 2 2
2 2 2 2 0 1 0 2 2 2
2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2
3 2 2 2 2 2 2 2 2 3
```

without the <newline>s, of course. Using these conventions it should be fairly easy to decode the server's messages and code the player's messages. For additional details, you may want to look at the C code that implements the communication functions (protocol.c, client-comm.c, and server.c). Good luck!

5.3.4 The Graphical User Interface

In addition to the ASCII server, we have provided a graphical user interface that can help you test your program and run experiments. The GUI is written in Tcl/Tk and is contained in the GUI subdirectory; you must have Tcl/Tk installed on your system in order to use it. The interpreter for Tcl/Tk is called wish. On Unix systems, you can run the GUI by just typing othello.tcl at the command line. You can check to see whether Tcl/Tk is installed by running which wish.
On Windows, you will have to run the wish program, with othello.tcl as an argument. You may also have to change the SOURCE_DIR directory in othello.tcl, before you run the server. The GUI is fairly self-explanatory. The method of control for each player can be chosen from four possibilities: Manual, Random, Greedy, and Program. In Manual control mode, you click on the square on which you want to place a piece. The Random controller chooses its move randomly, while the Greedy controller chooses the move that turns over the most pieces, without looking ahead (ties are broken at random). You can use the random and greedy controllers to test how well your program plays; it should beat them both handily, and the margin of victory is an indication of how good your program is. You can also control a player from a client program. While the GUI server is running, it listens to port 5555 for client programs. Client programs can connect to the GUI server the same way they connect to the ASCII server. If you run your client program with the appropriate port and host parameters, you will see a message on the log display of the GUI that the client has been registered as either Player 1 or Player 2 depending on the color parameter you chose. Now, you can click on the Program option to pass control to your program. You may change the controller of a player at any time during the game. For example, you may want to enter a position manually, and then ask your client to play from that position. Or, you may want to play greedily for a few steps and then let your program continue, or play manually against your program, etc. There are three modes of play. You may play a single game, play continuously, or set up the server to play a fixed number of games. You may find the repeat and continuous play modes useful for running experiments. 
In the last two modes, a new game will start as soon as the previous one is finished (unless the specified number of games has been played in the repeat mode). The GUI will keep track of the total score of the two players. The “RESTART” button restarts the game that is currently being played, but it retains the total wins and losses for each player. The “RESET” button restarts the complete set of games and resets the total wins and losses. Some important warnings in using the GUI: - Do not click on the Program control mode unless you have connected your client to the server first. - Do NOT kill (CTRL-C) clients connected to the GUI, even when you are in a different control mode. The GUI will crash. If you want to try some other client(s) you will have to quit the GUI first, then restart it, and connect the new client(s). - If you really want to replace a connected client, then you can run the new client (with the appropriate parameters) without “touching” the old one (let it run until you quit). The new client will take over control of the game.
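If you implement your client in a language other than C, your first job is decoding the server string described in section 5.3.3. The sketch below is a C reference version of that decoding, written against the space-separated layout the section describes ("space characters right after each entity"); treat the exact separators as an assumption and verify them against protocol.c before relying on this. ParsedMsg and ParseServerMsg are illustrative names, not part of the supplied code.

```c
#include <stdio.h>
#include <assert.h>

#define BOARD_SIZE 10

/* Illustrative structure for the decoded server message. */
typedef struct {
    int status;                          /* 0..3, as in section 5.3.2 */
    double time_left;                    /* TTTTT.TT, in seconds */
    int opp_row, opp_col;                /* opponent's move; (-1,-1) for none */
    int whites, reds;                    /* piece counts */
    int squares[BOARD_SIZE][BOARD_SIZE]; /* 0..3 per square, row by row */
} ParsedMsg;

/* Parse the space-delimited server string of section 5.3.3.
   Returns 0 on success, -1 on a malformed message. */
int ParseServerMsg(const char *s, ParsedMsg *m) {
    int used = 0;
    if (sscanf(s, "%d %lf %d %d %d %d%n", &m->status, &m->time_left,
               &m->opp_row, &m->opp_col, &m->whites, &m->reds, &used) != 6)
        return -1;
    for (int r = 0; r < BOARD_SIZE; r++)
        for (int c = 0; c < BOARD_SIZE; c++) {
            int n = 0;
            if (sscanf(s + used, " %d%n", &m->squares[r][c], &n) != 1)
                return -1;
            used += n;
        }
    return 0;
}
```

The same field order and `%n`-style cursor tracking translate directly into most languages' string-scanning facilities.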
International Conference on Computational Science, ICCS 2013

Extended Cyclostatic Dataflow Program Compilation and Execution for an Integrated Manycore Processor

Pascal AUBRY (a), Pierre-Edouard BEAUCAMPS (b), Frédéric BLANC (b), Bruno BODIN (b), Sergiu CARPOV (a), Loïc CUDENNEC (a), Vincent DAVID (a), Philippe DORE (a), Paul DUBRULLE (a), Benoît DUPONT de DINECHIN (b), François GALEA (a), Thierry GOUBIER (a), Michel HARRAND (b), Samuel JONES (b), Jean-Denis LESAGE (a), Stéphane LOUISE (a), Nicolas MOREY-CHAISEMARTIN (b), Thanh Hai NGUYEN (a), Xavier RAYNAUD (b), Renaud SIRDEY (a)

(a) CEA, LIST, 91191 Gif-sur-Yvette CEDEX, France
(b) Kalray SA, 86 rue de Paris, 91400 Orsay, France

Abstract

The ever-growing number of cores in embedded chips emphasizes more than ever the complexity inherent to parallel programming. To solve these programmability issues, there is a renewed interest in the dataflow paradigm. In this context, we present a compilation toolchain for the ΣC language, which allows the hierarchical construction of stream applications and automatic mapping of the application to an embedded manycore target. As a demonstration of this toolchain, we present an implementation of an H.264 encoder and evaluate its performance on Kalray's embedded manycore MPPA chip.

Keywords: sigmac; parallelism; programming language; compilation; cyclostatic dataflow; manycore; embedded.

1. Introduction

1.1. Manycore in embedded environments

The generalization of multicore systems since the beginning of the 21st century has spread down to embedded devices. The computing parts of today's smartphones and tomorrow's vehicles are increasingly multicore, and the new challenge is programming embedded manycore systems. The processing power associated with the emerging manycore systems is a key component toward new classes of applications in the embedded world, in accordance with Gustafson's law [1]. A new area of computing is arising: embedded High Performance Computing (eHPC).
Nonetheless, the key issue of the manycore era is how to express massive parallelism in an application in a way that is manageable for programmers.

1.2. Programmability and dataflow programming

Today's programming concepts coming from the large-scale HPC world are mostly focused on OpenMP [2] and MPI [3]. Neither of them meets the needs of the embedded field. Moreover, since OpenMP is focused on threads and shared-memory concepts, doubts can be raised about its scalability: data sharing is limited, and avoiding race conditions, large-scale locks, and deadlocks is hard [4]. MPI, being message driven, is less of a problem, but it lacks the soft real-time hooks that are usually required in the embedded world. The requirements for any language targeting manycore systems are to be able to: easily express massive parallelism, easily detect badly designed applications (at compile time), have a manageable workflow, and be deterministic enough to permit design tests and easy debugging. Some of the emerging solutions are based on dataflow paradigms. In the HPC world, this movement is mostly driven by CUDA [5] and OpenCL [6], whose domain of interest is to address the issue of programming heterogeneous targets, with main processors weakly or strongly coupled with accelerators. Their usual downside for manycore programming is that they focus too much on the accelerator concept. Going further into the dataflow concepts, stream programming languages are raising interest: their advantages rely in part on their theoretical basis, which makes them amenable to formal verification of the important application properties stated above. Two well-known languages in the community are StreamIt [7] and Brook [8]. Another one is ΣC [9], a joint development between CEA LIST and Kalray as a solution to program Kalray's new MPPA manycore processor.
The topic of this paper is the compilation process of this language, and especially why it is appropriate for manycore targets and applications. We briefly describe the MPPA architecture in Section 2. We then present in Section 3 the ΣC language and the underlying programming model. Section 4 is an overview of the different aspects involved in the ΣC compilation toolchain, developed as a joint effort by CEA LIST and Kalray for the MPPA architecture. Section 5 presents the design aspects of a real-world application and gives performance results. Finally, Section 6 concludes and presents current and future work on the ΣC programming environment.

2. The MPPA architecture

The MPPA chip is a (mostly) homogeneous manycore architecture. It contains 256 processing elements (cores), which are VLIW processors. VLIW cores are used since they are known for their high energy efficiency with regard to power consumption (think DSPs, e.g. from Texas Instruments). These processing elements (PEs), the number-crunching parts of the chip, are organized in 16 so-called "clusters", each with 16 PEs and a shared memory. The local shared memory is an interesting part of this architecture, since it enables high bandwidth and throughput between the PEs of a single cluster. An additional core in each cluster acts as a scheduler and manager for the PEs and plays a role in the communication process with other clusters and the external world. Each cluster is tied to a Network-on-Chip (NoC) router, which is the communication backbone of the chip, between clusters but also with so-called I/O clusters. These I/O clusters are in charge of managing I/O data exchanges with either external buses (e.g. PCIe) or SDRAM. Like the compute clusters, they have a local processor for management and interfacing. A simplified view of the chip can be seen in Figure 1.
Since the first MPPA chip is aimed at simplicity, there is no implicit communication between clusters, and cache coherence is not implemented between the L1 caches of the PEs. This is not an issue with an execution model based on stream processing, since communications are explicit and data barriers are obvious.

(1) Nowadays, "accelerator" usually means GPGPU targets.

Fig. 1. A simplified view of the MPPA chip architecture. Cluster 3 is zoomed to see the details of a cluster with its 16 processing elements (PE). Four I/O clusters ensure the communication with the outside. Clusters communicate with each other thanks to a NoC.

3. The ΣC language

The basis of stream programming relies on Kahn Process Networks (KPN [10]), more precisely on their special derivation, Data Process Networks [11]. Process networks eliminate race conditions by construction. Some restrictive variants, such as Synchronous DataFlow (SDF [12]) or Cyclo-Static DataFlow (CSDF [13]), are amenable to execution in bounded memory, and the presence of deadlocks can be detected offline [14]. ΣC can be related to StreamIt [7], Brook [8], XC [15], or OpenCL [6], i.e. programming languages, either new or extensions to existing languages, able to describe parallel programs in a stream-oriented model of computation. ΣC defines a superset of CSDF which remains decidable while allowing data-dependent control to a certain extent. CSDF is sufficient to express complex multimedia implementations [16]. As a compiler, ΣC on MPPA can be compared to the StreamIt/RAW compiler [17], that is, the compilation of high-level, streaming-oriented source code with explicit parallelism onto a manycore with limited support for high-level operating-system abstractions.
However, the execution model supported by the target is different: dynamic task scheduling is allowed on MPPA; the communication topology is arbitrary and uses both a NoC and shared memory; the average task granularity in ΣC is far larger than the typical StreamIt filter; and the underlying model (CSDF) is more expressive than StreamIt on RAW, because the topology can be arbitrarily defined and is not limited to (mostly) series-parallel graphs. Compared to programming IPCs on MPPA directly, the ΣC compiler relieves the programmer of building per-cluster executables, computing application-wide identifiers and spreading them in each per-cluster executable, optimizing the partitioning of function code, data and communications over the chip (and ensuring each fits in the memory of a cluster), and ensuring the safety, reproducibility and deadlock-freeness of the application, while keeping the same code for the algorithmic part. The goal of the ΣC programming model and language is to ensure programmability and efficiency on manycores. It is designed as an extension to C, to enable the reuse of embedded legacy code. This has the advantage of providing familiarity to embedded developers and allowing the use of an underlying C compilation toolchain. It is designed as a single language, without pragmas, compiler directives or netlist formats, to allow for a single view of the system. It integrates a component model with encapsulation and composition.

3.1. Programming Model

The ΣC programming model builds networks of connected agents. An agent is an autonomous entity, with its own address space and thread of control. It has an interface describing a set of ports, their direction and the type of data accepted, and a behavior specification describing the behavior of the agent as a cyclic sequence of transitions with consumption and production of specified amounts of data on the ports listed in each transition.
A subgraph is a composition of interconnected agents, and it too has an interface and a behavior specification. The contents of the subgraph are entirely hidden, and all connections and communications are done through its interface. Recursive composition is possible and encouraged; an application is in fact a single subgraph named root. The directional connection of two ports creates a communication link, through which data is exchanged in FIFO order with non-blocking write and blocking read operations (the link buffer is considered large enough). An application is a static dataflow graph, which means there is no agent creation or destruction, and no change in the topology during the execution of the application. Entity instantiation, initialization and topology building are performed offline during the compilation process. System agents ensure distribution of data and control, as well as interactions with external devices. The data distribution agents are Split, Join (which distribute or merge data in round-robin fashion over their output or input ports, respectively), Dup (duplicate input data over all output ports) and Sink (consume all data).

3.2. Syntax and examples

Entities are written as a C scoping block with an identifier and parameters, containing C unit-level terms (functions and declarations) and ΣC-tagged sections: interface, init, map and exchange functions. The communication port descriptions and the behavior specification are expressed in the interface section. A port declaration includes orientation and type information, and may be assigned a default value (if oriented for production) or a sliding window (if oriented for intake). The construction of the dataflow graph is expressed in the map section using extended C syntax, with the possibility to use loops and conditional structures.
This construction relies on the instantiation of ΣC agents and subgraphs, possibly specialized by parameters passed to an instantiation operator, and on the oriented connection of their communication ports (as in Figure 2). All assignments to an agent's state in its map section during the construction of the application are preserved and integrated into the final executable. Exchange functions implement the communicating behavior of the agent. An exchange function is a C function with an additional exchange keyword, followed by a list of parameter declarations enclosed in parentheses. Each parameter declaration creates an exchange variable mapped to a communication port, usable exactly in the same way as any other function parameter. A call to an exchange function is exactly like a standard C function call, the exchange parameters being hidden from the caller. An agent behavior is implemented as in C, as an entry function named start(), which is able to call other functions as it sees fit; these functions may be exchange functions or not. Figure 3 shows an example of an agent declaration in ΣC.

Fig. 3. The ColumnFilter agent used in Figure 2, with two inputs and one output, and the associated portion of the ΣC graph.

Fig. 4. The different stages of the toolchain. Starting with an application written in ΣC, we obtain an executable for the MPPA architecture.

4. Description of the toolchain

4.1. Frontend

The frontend of the ΣC toolchain performs syntactic and semantic analysis of the program. For each compilation unit, it generates a C source file with separate declarations for the offline topology building and for the online execution of agent behavior. The instantiation declarations are detailed in subsection 4.2. The declarations for the online execution of the stream application are a transformation of the ΣC code, mainly to turn exchange sections into calls to a generic communication service. The communication service provides a pointer to a production (resp. intake) area, which is used in the code transformation to replace the exchange variable. This leaves the management of memory for data exchange to the underlying execution support, and gives the possibility to implement a functional simulator using standard IPC on a POSIX workstation.

4.2. Instantiation and Parallelism Reduction

The ΣC language belongs to the dataflow paradigm, in which instances of agents communicate solely through channels. One intuitive representation of the application relies on a graph, where the vertices are instances of agents and the edges are channels. This representation can be used both for the compiler's internal processing and for the developer's debug interface. The second compilation step of the toolchain aims at building such a representation. Once built, further analyses are applied to check that the graph is well formed and that the resulting application fits the targeted host. The internal representation of the application (made of C structures) is designed to ease the implementation and execution of complex graph algorithms. Instantiating an application is made possible by compiling and running the instantiating program (skeleton) generated by the frontend parsing step. In this skeleton program, all the ΣC keywords are rewritten using regular ANSI C code. This code is linked against a library dedicated to the instantiation of agents and communication channels. The ΣC "new agent" instructions are replaced by a call to the library's instance creation function. This function evaluates the new agent's parameters and allocates a new instance in the internal graph. These parameters can be used to define the initial state of constants and variables, or even to set the number of communication ports. This potentially makes the instances of the same agent very different, except for the user code.
Working on the same basis, a set of functions is provided to instantiate communication ports and channels, and to incrementally build the complete application graph. One of the leitmotivs of the ΣC language is that developers should not care about the degree of parallelism and should only focus on the algorithm side. This is quite a different and uncommon approach compared to regular parallel programming languages. The compiler is therefore in charge of adapting the degree of parallelism of the application to fit the targeted embedded platform, while preserving its semantics and properties. This step is later referred to as parallelism reduction. Parallelism reduction in the ΣC compilation chain is done in two different ways, each method having its benefits and drawbacks. The first method [18] is based on graph pattern substitution. Initially, the instantiations of a predefined set of patterns (i.e., subgraphs with a specific structure) are matched in the application. Afterwards, each instantiation is replaced by an equivalent pattern of smaller size. The size of the replacement pattern is derived from a global reduction factor. The goal is to bound the number of actors per processing core to a predefined limit. A drawback of this method is that the set of patterns must be predefined. The second method is a generic parallelism reduction based on equivalent-agent merging. Two agents are equivalent if they perform the same computation but on different data streams. All the sets of equivalent agents are partitioned into subsets, and the agents belonging to the same subset are merged into a single agent. The sizes of the subsets are chosen such that the ΣC application's throughput constraints remain satisfied after the merge operations. The drawback of this method compared to pattern substitution is that it does not provide fine-grained control over the parallelism reduction, i.e., it may not modify the application in the most efficient way.

4.3. Scheduling, Dimensioning, Placing & Routing, Runtime Generation

Once the agents have been instantiated into tasks, the resulting dataflow application goes through the scheduling process. As we are compiling a parallel application for a dynamic parallel scheduling micro-kernel, scheduling does not consist in fully ordering the execution of the task occurrences and transitions. Instead, it results in a cyclic partial order of task occurrences, which can be represented with a dependency graph of task occurrences. The whole scheduling process consists of the following steps. First, one must determine a canonical period, which corresponds to the execution of one cycle of the application. Basically, once all task occurrences in the canonical schedule are executed, the application must return to its initial state (list of ready tasks, amount of data present in the FIFOs). This is determined by calculating the repetition vector, which is the minimal non-zero integer vector whose components correspond to the number of execution cycles of each task transition needed to return to the initial state [13]. The number of occurrences of each task in the canonical schedule is the corresponding component of the repetition vector multiplied by the task's number of cyclostatic transitions. Then, the dependencies between occurrences are determined by symbolic execution of a total order of all occurrences. During the symbolic execution, minimum buffer sizes are generated in order to determine a minimum dimensioning of the FIFOs. For this, we consider the FIFO sizes to be infinite, and we measure the maximum fill of each FIFO during the symbolic execution. Symbolic execution produces a total order of execution of all occurrences in the canonical schedule, thus proving that the determined FIFO sizes are sufficient to ensure the canonical period is executable with no deadlock.
Those resulting FIFO sizes strongly depend on the heuristic used in the symbolic execution to choose the next task to be symbolically executed; special care is taken in the choice of this heuristic to minimize the computed FIFO sizes. The next step is the computation of effective buffer sizes for the application. Applications must execute at a certain rate; for example, a video application requires a certain frame rate. The computation of buffer sizes consists in finding minimized buffer sizes that allow the application to reach the throughput required by its specification. The execution time of each occurrence is determined by simulation, which allows the computation of throughput for a given storage distribution. The throughput computed at this phase is used for the partitioning. Once satisfying FIFO sizes have been determined, a working period is generated. The working period consists in the repetition of several canonical periods, ensuring that the buffers allocated for the critical FIFOs may be saturated during execution, i.e., the amount of data produced (and consumed) in the period corresponds to the allocated buffer size. The working period is completed with return dependencies, which are consumer/producer execution dependencies corresponding to the necessity of not overflowing the FIFO buffers. Those dependencies are generated by performing symbolic execution on each pair of producer and consumer tasks. Tasks are then mapped onto the different clusters of the MPPA chip, and routes are determined for communication channels between tasks in different clusters. The constraints here are driven by the necessity to respect the NoC bandwidth, so tasks are mapped in order to maximize communication within clusters, through concurrent accesses to shared memory. Two placing methods have been implemented for this purpose. The first one involves graph partitioning and quadratic assignment solvers [19].
It typically generates task mappings in less than 10 seconds, and is suitable in the early cycle of development, where the developer needs fast and repeated interaction with the compilation toolchain. The second task mapping method we implemented performs the task mapping in a single step, using a parallel simulated annealing-based solver [20]. In this case, solving time is longer, as it typically takes about 15 minutes to solve a mapping of around 2000 tasks on an MPPA-like cluster and NoC topology, but the solution values in terms of overall NoC bandwidth usage are much lower. This makes the method suitable in the late cycle of development, where one can afford to spend time before making the final application ready for running on an embedded chip. The amount of time the solver actually takes (and thus the quality of the result) can however still be configured to allow fast prototyping at an early stage of development. Routing is performed by solving a constrained multiflow problem using an off-the-shelf mixed-integer linear programming (MILP) solver. As the mapping tends to simplify routing, routing is generally done in less than 5 seconds. According to the behavior specification of each agent described in the ΣC language, access schemes to the FIFO buffers are generated to automate the determination of read and write positions in the FIFO buffers according to the number of the next occurrence of each task. One major optimization that can be carried out at this stage of the compilation is the inlining, or compiling away, of the aforementioned system agents. Since these agents do not modify the data they read, but simply reorganize it, it is possible in many cases to drop the agent from the generated runtime and simply generate a shared buffer, positioning the pointers of each of the neighboring agents at the appropriate point in the shared buffer and generating the appropriate pointer increments.
The advantages of this optimization are threefold: the system agent does not need to be scheduled by the runtime, which minimizes overheads; the system agent does not need to copy data from its inputs to its outputs, reducing the overall work; and the shared buffer is often smaller than the sum of the buffers that would otherwise have been generated, causing significant reductions in memory footprint.

4.4. Link editing and execution support

For the runtime synchronization of the tasks, the execution support needs runtime data that can be generated from the information on task behavior gathered in previous compilation steps. One possibility for the execution support is to use a vector time, as described in [21]. The final stage of the ΣC compiler is link editing. It consists in building, for each cluster hosting tasks, first a relocatable object file with all the user code, user data and runtime data, and then the final binary with the execution support. This compilation stage was realized using the GNU binutils for MPPA, with the following constraints:

- constant data declared out of agent scope, and shared agent constants, are not duplicated;
- variables declared out of agent scope, and instance variables, are allocated once per task actually accessing them;
- all functions actually called by a task are linked with the variables allocated for this task, in an object file we call the task object, in which all symbols are localized.

To obtain the relocatable cluster object, we link the task objects with the object files containing the constants and the runtime data. From there, Memory Protection Unit tables are enough to create the memory context of the tasks. Depending on external library usage and the size of agent code, some space is wasted with this solution because of code duplication.
It is possible to go further on the MPPA chip because the processor cores support address translation, which could allow the code to be shared between instances in some cases. To link the final binary, the link process adds the execution support, which starts the initially ready tasks and uses the runtime data to oversee the execution. The execution support uses the supervision core of each MPPA cluster to support hardware and I/Os. In addition, the supervision core is in charge of the main part of scheduling (it computes dependencies and allocates tasks to the other cores). The other cores just load/unload task contexts to execute their current activation when they are ready.

5. Application: an H.264 video encoder

Several applications are currently available for the MPPA chip. Most of them have been partially or fully written in ΣC. Among them is an H.264 video encoder.

5.1. H.264 encoder quick overview

H.264/MPEG-4 Part 10, or AVC (Advanced Video Coding), is a standard for video compression, and is currently one of the most commonly used formats for the recording, compression, and distribution of high-definition video. High-quality H.264 video encoding requires high compute power and flexibility to handle the different decoding platforms, the numerous image formats, and the various application evolutions. On the other hand, video encoding algorithms exhibit large amounts of parallelism (data, task, and instruction-level parallelism), lending themselves to efficient execution on manycore processors. Such applications can then be developed using the ΣC environment in order to describe task parallelism when addressing manycore architectures, such as the MPPA processor.

5.2. H.264 encoder description using the ΣC dataflow environment

Based on the x264 library, a parallel implementation of a professional-quality H.264 encoder has been made using the ΣC dataflow language. This implementation starts by partitioning key encoding functions into separate modules.
Each module contains input and output ports, used for data transfers and data synchronization (dependencies, for example). The schematic of the parallel implementation of the encoder is shown in Figure 5. The H.264 encoding process consists in separately encoding many macroblocks from different rows. This is the first level of parallelization, allowing a scalable encoding application, where a variable number of macroblocks can be encoded in parallel. In this graph, each "Encode MB Process" subgraph exploits this data parallelism. Fine-grained task parallelism is also described: motion estimation on each macroblock partition (up to 4x4), spatial prediction of intra-coded macroblocks, RDO analysis and trellis quantization are performed concurrently in separate agents.

Fig. 5. H.264 Parallel Implementation.

The ΣC compiler analyzes the dataflow graph and gives the user an overview of the scheduling of the application, using profiling data. It is also able to map the application onto the targeted MPPA architecture, and implements all communication tasks between ΣC agents.

5.3. Compromises for an optimized dataflow description

The ΣC environment supports cyclostatic dataflow applications, with execution based on a steady state. The application therefore exchanges defined amounts of data, independent of the runtime state or incoming data: in the H.264 algorithm, the amount of data differs according to image type (intra or inter), but the ΣC application always works with data for both cases. Describing and managing the search window for motion estimation is another challenge when using a dataflow environment, as it is difficult to describe delays and memory shared between different processes. Fortunately, the ΣC environment implements several features (including virtual buffers and delays) allowing an efficient implementation (no unnecessary copies, automatic management of data, etc.).

5.4. Benefits of ΣC dataflow when developing a video application

The ΣC dataflow description helps the user easily describe and validate the data and task parallelism of an application, abstracting the details of the targeted architecture. The user can focus on algorithm development and memory optimization (each ΣC agent should contain only useful data for better results). Furthermore, the final parallelized application can address different kinds of processor architectures based on distributed memory (such as the MPPA processor). The ΣC environment takes care of compiling, placing and routing the application; every step is performed automatically by the compiler. It also includes many optimization techniques, such as double buffering or data coherency when sharing buffers (ΣC agent inlining). All these operations made by the compiler allow the user to easily design scalable applications with dynamic configurations: the H.264 encoder can be configured to encode 2, 4, 10 or 16 macroblocks in parallel just by modifying a defined value. Finally, porting an H.264 encoder to the MPPA processor using the ΣC environment reduces the design time compared to traditional POSIX threads implementations targeting multicore processors, or a VHDL description for an FPGA target: ΣC offers an easy way to design an application, thanks to its efficient debugging ability and the possibility to reuse existing C code. The fast functional simulations are easy to run and decrease validation time, partitioning and synchronization are hidden by the system software, and all the optimizations are based on the algorithm and buffer sizing.

5.5. Results and performance

From the current implementation of the H.264 encoder in ΣC, a performance analysis has been performed to determine the encoder's global quality. These performance results have been compared to the initial x264 library, applied to different video sequences frequently used for such analyses.
The conclusions are the following:

- From a quality analysis based on bitstream size and decoded video quality (using the SSIM and PSNR criteria), the parallelized H.264 application using the ΣC dataflow language offers better results than the initial x264. Using the MPPA manycore architecture leads to a less restricted implementation (fewer thresholds, fewer bypasses, etc.). For example, many motion vectors can be tested in parallel, as well as many intra predictors, without impacting encoder speed. Finally, much more information is available, enabling better decisions and improving the resulting encoder quality.
- The implementation of the x264 library on the MPPA processor offers a real-time encoder for embedded solutions and low-power needs. It achieves about 110 frames per second in the Intra I-frame case, 40 FPS for Inter P-frames and 55 FPS for Inter B-frames.
- Using a configuration equivalent to the implementation on MPPA, the x264 encoder has been tested on an Intel Core i7-3820 (4 hyper-threaded cores). All CPU capabilities have been used, such as MMX2, SSE2Fast, SSSE3, FastShuffle and SSE4.2. A performance comparison is presented below:

  Processor            Performance   Energy efficiency
  Intel Core i7-3820   52 FPS        2.5 W/FPS
  Kalray MPPA-256      49 FPS        0.4 W/FPS

It can be concluded that, for equivalent H.264 encoding performance, the ΣC implementation on the Kalray MPPA-256 processor offers better energy efficiency (about 6 times lower energy consumption).

5.6. Limits and future improvements

The current ΣC toolchain supports cyclostatic dataflow applications with a static software architecture (links between tasks and data exchange amounts are determined at compile time). In addition, it does not support paging when the cluster memory size is insufficient.
Furthermore, there is no way to distinguish between several states within an application (init, nominal, ...). Lastly, the toolchain does not take into account other aspects such as power consumption, fault management and safety. 6. Conclusion and future works In this paper, we described an end-to-end compilation toolchain and an execution support for the ΣC language, with an illustration of its performance on an implementation of an H.264 video encoder. In doing so, we assert in practice that the ΣC language meets the criteria enunciated in [9] (good expressivity, efficient integration of existing C code, properties allowing a compiler to provide guarantees on produced binaries as well as support for modularity and code reuse, and the ability to produce binaries fit for execution on embedded manycores). The performance results of the video encoder also demonstrate that, combined with the development ease afforded by a stream language, architectures like the MPPA chip offer a good alternative to VHDL descriptions and FPGA-based solutions. In January 2013, the HEVC video compression standard was released. This new standard offers better and easier opportunities for manycore architectures: it increases the potential parallelism, reduces the number of critical paths (such as CABAC), and allows more computation for better results. It would be interesting to make a ΣC implementation of HEVC and evaluate it on the MPPA chip. Regarding the place-and-route stage of the toolchain, ongoing studies aim to include energy-related criteria as well as to allow dynamic reconfiguration at startup. This last point would allow an application to run on partially degraded MPPA devices, provided minimal characteristics are met. Work has also started on mixing, on a single MPPA device, both safety-critical modules and high-performance ones. The next step is to offer a way to design high-performance real-time applications. Some studies intend to allow building dataflow graphs at runtime and to provide dynamic channel sizing. 
This may be useful, for instance, in cognitive radio, where the device has to adjust its configuration to its environment. Finally, there are ongoing studies to both improve target abstraction and make the implementation of some algorithms easier. This refers to extensions to support a DSM (Distributed Shared Memory) and OpenMP. References
Overview • Announcements • Bitwise Operations – Set, clear, toggle and invert bits – Shift bits – Test bits • I/O Ports • Lab 3 Announcements • Homework 3: Due on Sunday 6/5 • Exam 1: In class Wed 6/8 Why Bitwise Operations? Why use bitwise operations in embedded systems programming? Each single bit may have its own meaning – Push button array: Bit n is 0 if push button n is pushed – LED array: Set bit n to 0 to light LED n Data from/to I/O ports may be packed – Two bits for the shaft encoder and six bits for the push buttons packed in PINC – Keypad input: three bits for row position, three bits for column position Data in memory may be packed to save space – Split one byte into two 4-bit integers Why Bitwise Operations? Read the input: ``` unsigned char ch = PINC; ``` Then, how does the code get to know which button is being pushed? The buttons are connected to PINC, bits 5-0; PINC bits 7-6 are input from the shaft encoder. Bitwise Operations: What To Do? We may want to do the following programming tasks: - Clear/reset certain bit(s) - Set certain bit(s) - Test if certain bit(s) are cleared/reset - Test if certain bit(s) are set - Toggle/invert certain bits - Shift bits around Bitwise Operators: Clear/Reset Bits C bitwise AND: & ch = ch & 0x3C; What does it do? Consider a single bit x x AND 1 = x (preserve) x AND 0 = 0 (clear/reset) Bitwise Operators: Clear/Reset Bits ch = ch & 0x3C; <table> <thead> <tr> <th>x_7</th> <th>x_6</th> <th>x_5</th> <th>x_4</th> <th>x_3</th> <th>x_2</th> <th>x_1</th> <th>x_0</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> </tr> </tbody> </table> Clear bits 7, 6, 1, 0 Preserve bits 5, 4, 3, 2 Clear bit(s): Bitwise-AND with a mask of 0(s) Bitwise Operators: Clear/Reset Bits Another example: char op1 = 0b10111100; We want to clear bit 4 to 0. 
char op2 = 0b11101111; We use op2 as a mask char op3; op3 = op1 & op2; 1011 1100 AND 1110 1111 = 1010 1100 Class Exercise char ch; int n; Clear the upper half of ch Clear every other bit of ch, starting from bit 0 Clear the lower half of n Bitwise Operators: Set Bits C bitwise OR: | ch = ch | 0xC3; What does it do? Consider a single bit x x OR 1 = 1 (set) x OR 0 = x (preserve) Bitwise Operators: Set Bits \[ ch = ch | 0xC3; \] \[ \begin{array}{cccccccc} x_7 & x_6 & x_5 & x_4 & x_3 & x_2 & x_1 & x_0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ \end{array} \] Set bits 7, 6, 1, 0 Preserve bits 5, 4, 3, 2 Set bit(s): Bitwise-OR with a mask of 1(s) Bitwise Operators: Set Bits Another example: \[ \text{char op1} = 1000 \ 0101; \] We want to set bit 4 to 1. \[ \text{char op2} = 0001 \ 0000; \] We use op2 as a mask \[ \text{char op3}; \] \[ \text{op3} = \text{op1} | \text{op2}; \] \[ 1000\ 0101 \ \text{OR} \ 0001\ 0000 = 1001\ 0101 \] Bitwise Operators: Toggle Bits C bitwise XOR: ^ \[ ch = ch ^ 0x3C; \] What does it do? 
Consider a single bit \( x \) \[ x \ XOR \ 1 = \overline{x} \] Toggle \[ x \ XOR \ 0 = x \] Preserve Bitwise Operators: Toggle Bits C bitwise XOR: ^ \[ ch = ch ^ 0x3C; \] \[ \begin{array}{cccccccc} x_7 & x_6 & x_5 & x_4 & x_3 & x_2 & x_1 & x_0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ \end{array} \] Toggle bits 5, 4, 3, 2 Preserve bits 7, 6, 1, 0 Toggle bit(s): Bitwise-XOR with a mask of 1(s) Bitwise Operators: Invert Bits C bitwise invert: ~ \[ \text{ch} = \neg \text{ch}; \] \[ \begin{array}{cccccccc} x_7 & x_6 & x_5 & x_4 & x_3 & x_2 & x_1 & x_0 \\ \overline{x_7} & \overline{x_6} & \overline{x_5} & \overline{x_4} & \overline{x_3} & \overline{x_2} & \overline{x_1} & \overline{x_0} \\ \end{array} \] Example: \( \text{ch} = 0b00001111; \) \[ \neg \text{ch} = 0b11110000 \] Class Exercise \[ \text{char ch}; \] \[ \text{int n}; \] Set the lower half of \( \text{ch} \) Set every other bit of \( \text{ch} \), starting from bit 0 Set bit 15 and bit 0 of \( n \) Toggle bits 7 and 6 of \( \text{ch} \) **Bitwise Operators: Shift-Left** ```c unsigned char my_reg = 0b00000001; unsigned char shift_amount = 5; unsigned char my_result; my_result = my_reg << shift_amount; // << shifts my_reg by shift_amount places to the left // 0s are shifted in from the right ``` **Bitwise Operators: Shift-Right Logical** ```c unsigned char my_reg = 0b10000000; unsigned char shift_amount = 5; unsigned char my_result; my_result = my_reg >> shift_amount; // With an unsigned type, >> is a logical shift right // 0s are shifted in from the left ``` **Bitwise Operators: Shift-Right Arithmetic** ```c signed char my_reg = 0b10000000; unsigned char shift_amount = 5; unsigned char my_result; my_result = my_reg >> shift_amount; // With a signed type, >> is an arithmetic shift right // The sign bit is shifted in from the left ``` **Bitwise Operators: Shift and Multiply/Divide** Shift-Right Arithmetic: Why shift in the sign bit? 
```c
Example: (char) 32 >> 2 = 32 / 4 = 8
0b0010 0000 >> 2 = 0b0000 1000
Example: (char) -32 >> 2 = -32 / 4 = -8
0b1110 0000 >> 2 = 0b1111 1000
``` ```c n << k is equivalent to n * 2^k Example: 5 << 2 = 5*4 = 20 0b0000 0101 << 2 = 0b0001 0100 n >> k is equivalent to n / 2^k Example: 20 >> 2 = 5 0b0001 0100 >> 2 = 0b0000 0101 ``` **Bitwise Operators: Shift and Set** What is the effect of the following statement? ```c #define BIT_POS 4 ch = ch | (1 << BIT_POS); ``` What is (1 << 4)? ```c 0000 0001 << 4 = 0001 0000 ``` In the general case: (1 << n) yields a mask with a 1 at bit n The effect of the statement: set bit 4 Bitwise Operators: Shift and Set Another example: unsigned char my_mask = 0b00000001; unsigned char shift_amount = 5; unsigned char my_result = 0b11010101; We want to force bit 5 to 1 my_result = my_result | (my_mask << shift_amount); Shift the 1(s) of the mask to the appropriate position, then OR with my_result to force the corresponding bit positions to 1. Bitwise Operators: Shift and Clear What is the effect of the following statement? #define BIT_POS 4 ch = ch & ~(1 << BIT_POS); What is ~(1 << 4)? ~(0001 0000) = 1110 1111 In the general case: ~(1 << n) yields a mask with a 0 at bit n Note: the compiler does this calculation at compile time Exercise unsigned char ch; unsigned int n; Divide n by 32 in an efficient way Swap the upper half and lower half of ch Bitwise Testing Remember, conditions are evaluated on the basis of zero and non-zero. The quantity 0x80 is non-zero and therefore TRUE. if (0x02 | 0x44) Valid or not? Bitwise Testing **Example** Find out if bit 7 of variable nVal is set Bit 7 = 0x80 in hex ``` if ( nVal & 0x80 ) { ... } ``` What happens when we want to test for multiple bits? The if statement looks only for a non-zero value A non-zero value means at least one bit is set to TRUE Bitwise Testing: All Bits Are Set? Why does this present a problem? What happens if we want to see whether both bits 2 and 3 are set, not just whether at least one of them is set? 
This won't work without some other type of test. Two solutions: test each bit individually ``` if ((nVal & 0x08) && (nVal & 0x04)) { ... } ``` or check the result of the bitwise AND ``` if ((nVal & 0x0C) == 0x0C) { ... } ``` Why do these solutions work? 1. Separate tests – check for each bit and specify the logical condition 2. Equality test – the result will only equal 0x0C if bits 2 and 3 are both set Bitwise Testing: Any Bit Is Set? **Example** See if bit 2 or 3 is set Bits 2,3 = 0x0C in hex ``` if (nVal & 0x0C) { Some code... } ``` What happens for several values of nVal? nVal = 0x04 bit 2 is set Result = 0x04 TRUE nVal = 0x0A bits 3,1 are set Result = 0x08 TRUE nVal = 0x0C bits 2,3 are set Result = 0x0C TRUE Exercise ``` char ch; ``` Test if any of bits 7, 6, 5, 4 is set Test if all of bits 7, 6, 5, 4 are set Test if bits 7 and 6 are both set and bits 5 and 4 are both cleared Memory Mapped I/O How does a program executing within the CPU communicate with the keyboard? Write a program to count the number of 1s in integer n ``` unsigned int count = 0; unsigned int n; ``` Memory Mapped I/O Memory-mapped I/O: registers within the keyboard appear to be in memory. ``` Get_user_ID(char *name); char *KBDR; KBDR = (char *) 0xff00; while ((*(name++) = *KBDR) != NEWLINE); ``` We need a control mechanism for the keyboard to tell us when fresh data is available in KBDR: • Keyboard Control Register (KBCR) How quickly does the while loop iterate? 10-100 µs? How quickly do we type? 30-100 chars/minute? Bit Fields in Structures ``` struct KBCR { // Big Endian unsigned int model : 4; unsigned int KBERROR : 1; unsigned int CAPLOCK : 1; unsigned int READY : 1; } KBCR; ``` Maintenance of multiple views: byte or bit structure. ``` union KBCR_U { struct KBCR KBCR; uint8_t KBCR_Aggregate; } KBCR_U; ``` Without such a mechanism, in one attempt we may not even get one character. 
Memory Mapped I/O - Control Registers ``` char *KBDR; KBDR = (char *) 0xff00; if (KBCR.READY) { if ((*(name++) = *KBDR) == NEWLINE) break; } ``` Memory Mapped I/O - Polling - Without any device (keyboard) specific instructions, we can talk to the device/sensor. - Even future devices whose interface we do not know yet can be memory-mapped! ```c char *KBDR = (char *) 0xff00; struct KBCR *pKBCR = (struct KBCR *) 0xff01; while (!(pKBCR->READY)); //polling loop //guaranteed fresh data if ((*(name++) = *KBDR) == NEWLINE) return; ``` Bit fields in structures ```c if (KBCR.READY) { ... } struct KBCR *pKBCR; pKBCR->CAPLOCK = 0; struct student student_records[100]; ``` Memory Mapped I/O - Polling ```c char *KBDR = (char *) 0xff00; char *KBCR = (char *) 0xff01; while (!(*KBCR & 0x1)); //polling loop //guaranteed fresh data if ((*(name++) = *KBDR) == NEWLINE) return; ``` Atmel ATmega I/O Ports - What if the sensor is not smart enough to pretend to be memory, and we cannot memory-map its interface? http://class.ece.iastate.edu/cpre288 Memory-Mapped I/O Ports - Built-in ports are memory-mapped. I/O Ports - ATmega128 - 5 general purpose ports: Port A, B, C, D, E; two special purpose ports – Port F & G. - The processor communicates with them through memory-mapped I/O. - A set of data and control registers is associated with each port. I/O Ports - The processor communicates with attachments using ports - Each port has three registers PORTx – 8-bit register for output PINx – 8-bit register for input DDRx – Data direction register - DDR - 0 means input - 1 means output Example: DDRA = 0b00000001; // all bits on port A are used for input // except bit 0 I/O Ports - PORTx Register: If PORTxn is 1 when the pin is configured as an input pin, the pull-up resistor is activated. To switch the pull-up resistor off, PORTxn has to be written logic zero or the pin has to be configured as an output pin. 
For an output-configured port: if PORTxn is written logic one when the pin is configured as an output pin, the port pin is driven high (one), and vice versa. Write to a port through the PORTx register. E.g.: PORTA = my_char; // set port A to the value of my_char I/O Ports - PINx Register (a data register): Always holds the current state of the physical pins. Read only! For an input port, this is the only way to read data from that port. E.g.: my_char = PINA; // set my_char to the value on port A Example: Initialize Push Buttons ```c /// Initialize PORTC to accept push buttons as input void init_push_buttons(void) { DDRC &= 0xC0; // set PC0-PC5 to input PORTC |= 0x3F; // enable the pins' pull-up resistors } ``` Push button port connection - Port C, pin 0 to pin 5 (buttons SW1 to SW6) - All input Example: Initialize Shaft Encoder ```c /// Initialize PORTC for input from the shaft encoder void shaft_encoder_init(void) { DDRC &= 0x3F; // set PC6-PC7 to input PORTC |= 0xC0; // enable the pins' pull-up resistors } ``` Shaft encoder port connection - Port C, pins 7 and 6 - Input Example: Initialize Stepper Motor ```c /// Initialize PORTE to control the stepper motor void stepper_init(void) { DDRE |= 0xF0; // set PE4-PE7 to output PORTE = (PORTE & 0x0F) | 0x80; // init position (0b1000) on PE7-PE4 wait_ms(2); PORTE &= 0x0F; // clear PE4-PE7 } ``` Stepper motor port connection - Port E, pins 7 to 4 - Output - Wait 2 ms for the stepper motor to settle Lab 3 - Overview of hardware - Push Buttons (Switches) - Shaft Encoder (Control Knob) - Stepper Motors Lab 3 Memory-Mapped I/O Now write your own API functions for the I/O devices Part I. Push button To detect which buttons are being pushed Part II. Shaft Encoder To take input from a shaft encoder and emulate its behavior Part III. Stepper Motor To control motor movement precisely Lab 3 Memory Mapped I/O Part I. Push button Return the position of the leftmost button that is being pressed. The rightmost button is position 1. Return 0 if no button is being pressed. 
```c char read_push_buttons(void); ``` Six push buttons, connected to PINC bits 5-0 Active low – if a button is pushed, the corresponding bit is 0, otherwise 1 Q1: How does it work mechanically and electronically? Q2: How do we read the raw input from the push buttons? Q3: How do we read a port? Part II. Shaft Encoder - The device generates two waveforms on two input pins of the ATmega128 (PC6 and PC7) - The direction of the shaft encoder is reflected by the ordering of the two waveforms - A leading B is clockwise; B leading A is counter-clockwise - Channel B is connected to PINC bit 7, channel A to PINC bit 6 Stepper Motor (Wikipedia) - A full rotation is divided into multiple steps. - Motion is controllable one step at a time without need for feedback. - Four coils give four magnetic axes. Stepper Motor Control - 200 steps per 360°: 1.8° per step. - Clockwise: 0001 -> 0010 -> 0100 -> 1000 -> 0001 -> ... - Counter-clockwise: 0001 -> 1000 -> 0100 -> 0010 -> 0001 -> ... Lab 3 Memory Mapped I/O Part III. Stepper Motor To rotate clockwise: send to PE7-PE4 the sequence 0001, 0010, 0100, 1000, 0001, ... Allow a 2 ms gap between two consecutive outputs Lab 3 Memory Mapped I/O Q1: How do we rotate the four bits? Q2: How do we send out the four bits to PE7-PE4 without affecting the other four bits of PORTE? Q3: How do we couple the shaft encoder with the stepper motor?
Experience of Implementing OSI Management Facilities Graham Knight, George Pavlou and Simon Walton Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom ABSTRACT The Computer Science department at UCL has experimented with OSI management systems for several years and has implemented a pilot management system on a Unix workstation. A second version of this system is now being implemented. This paper briefly reviews our experience with the pilot system and outlines the capabilities that were felt desirable in its successor. The architecture of the successor system is then described. 1. INTRODUCTION The Computer Science department at UCL has experimented with OSI management systems for several years and has built a pilot system. We have been interested in investigating how practical the OSI approach is when compared with that of (say) SNMP [1]. In particular, we have wanted to experiment with the complex filtering and event control facilities that OSI management provides. Section 2 describes the key features of the pilot system and Section 3 outlines the areas in which development was felt to be necessary. Section 4 gives an overview of the architecture of the updated system, whilst Sections 5-7 provide a detailed description of its internal operation. It is assumed that the reader is familiar with the basic concepts of OSI management. A good tutorial introduction to these may be found in [2]. 2. THE PILOT SYSTEM Our first implementation, "Osimis" [3], was developed under the ESPRIT INCA project [4] and was designed to provide OSI management facilities for a Unix workstation. It included a CMIP implementation to the 1988 DP, built upon ISODE [5]. This system is illustrated in Figure 1; it provided communication between a Unix "Systems Management Agent" (SMA) and three clients: an event logger ("Osilog"), a status monitor ("Osimon") and an MIB browser ("Osimic"). The MIB was restricted solely to the ISODE Transport Layer. 
Some of the key features of this system are described below. 2.1. Managed System Internal Communication The ISO/IEC Transport protocol code that was used ran in user space. At any given time, there could be several instances of the protocol active within several Unix processes. In order to de-couple management protocol operation from communications protocol operation and to co-locate management services in one place, it was decided to implement the management services in a single process - the "Systems Management Agent" (SMA). The problem then arose of how and when management information should pass between the communications protocol processes and the SMA. In the event, a UDP socket was chosen, with all communication being initiated by the Transport protocol processes. The Transport protocol code was liberally seeded with entry points to the management code that would trigger the dumping of management information to the SMA at significant moments. Triggering events included T-Connects, T-Disconnects, a fixed quantity of data transferred, etc. This avoided the problem of having to interrupt Transport protocol in order to deal with asynchronous requests from the SMA - a procedure that was held likely to have unpredictable effects on Transport (and other) protocol operation. Unfortunately, this arrangement also made it difficult to pass information from the SMA to the protocol processes. Since our main interest at the time was in event-driven management we decided that one-way communication was acceptable. 2.2. Event Report Control At the time of the original implementation, the management of event reporting was not very complete in the standards. There was a notion of a "Defined Event" - an attribute of a Managed Object (MO) which corresponded to some event in the real world, a Transport Disconnect for example, or a management event such as a threshold being exceeded.
There was also a "Report Control" MO which was tied to a Defined Event and specified the information to go in the report, together with a list of recipients. Having Report Control and Thresholds as MOs in their own right meant that, in principle, these could be created and deleted dynamically and that many-to-many relationships could exist between Defined Events, and Threshold and Report Control MOs. This suited the way in which we proposed to use event reports, which were seen as a general purpose tool for driving loggers, status displays and diagnostic tools. Although the main purpose of the system was to support human "managers", it also needed to cater for the requirements of our relatively sophisticated users who like to know what is happening on the department's systems. If performance seems poor, they like to be able to find out why by getting an up-to-date picture of activity. The "netstat" program on Berkeley Unix systems is an example of a tool that addresses this need. We wished to satisfy a similar requirement for ISODE. If facilities like these are to be event-driven, then it must be possible for several remote systems to receive event reports from a target system simultaneously and for the event reporting criteria (thresholds etc.) to be individually tailored. This requirement is in contrast to the more conventional one in which it is only necessary to report events to a single remote sink. 2.3. The MIB Browser The aim of the browser was to provide a tool that would be useful to system developers and maintainers who needed to focus on some aspect of the operation of an OSI component and view its operation in detail. In OSI Management terms this means providing a detailed view of a single MO. The problem we faced was in the naming of transient MOs such as those representing Transport Connections. These are named by identifiers (usually integers) allocated by the parent system at "create" time.
Their scope is purely local and there is no way that an external management process can know of these a priori. The browser provided a graphical interface to the tree of MOs on a remote system. From any object, it was possible to list the MO classes and Relative Distinguished Names of the subordinates. In the case of transient objects, the subordinates could be examined in turn until the required one was found. 3. REQUIREMENTS FOR THE SUCCESSOR SYSTEM When the time came to update the pilot system we had several things in mind: i) The standards had developed further; this was particularly noticeable in the area of event control, "discriminators" having been introduced in order to enhance this. These had some of the properties of our Report Control MOs but with much more sophisticated filtering. ii) Our original CMIS/CMIP implementation was not quite a full one. The M-CREATE, M-DELETE and M-ACTION services were omitted, as were filtering, scoping and wildcarding. CMIS and CMIP had now stabilised at the Final DIS stage and differed substantially from the versions we had used. An update was essential. iii) It was clear, even from our small-scale pilot scheme, that OSI management systems were inclined to be large - too large for many small but high-performance network components. We were keen to investigate the use of proxy systems in these circumstances. iv) Though it was clear that the MIB we managed would need to be expanded to include additional components, we could not say in advance precisely what these should be. v) Our own MO definitions were "internal" at best, skeletal at worst. We needed to be able to adapt our MO definitions as the standards process proceeded, with minimal disruption to existing code. vi) The proprietary management facilities inherent in network components are rarely designed with OSI management in mind - nevertheless we wanted to incorporate these.
Quite complex data mappings are sometimes needed in order to massage native management information into an OSI-like form. Further, proprietary management protocols vary in the messages they offer and in their foundations, such as which party initiates communication. We needed to construct systems which were flexible enough to allow new components to be incorporated, regardless of what proprietary facilities were on offer. vii) Notwithstanding the variety of components alluded to above, many features of OSI management are common to all components. We wished to extract the common facilities into a few tightly-defined modules that could be used as a basis for building a variety of systems by a variety of people. 4. GENERIC MANAGED SYSTEM OVERVIEW To date, in our design of a successor system, we have concentrated on the Managed System and how this can be structured with extensibility in mind. We have attempted to provide the generic features of an OSI Managed System together with a support framework which may be used by the implementors of MOs. The target environment is a Unix workstation requiring OSI management facilities and which may also act as a proxy for another machine. The result we call the "Generic Managed System" (GMS). The internal structure of GMS is shown in Figure 2. To a large extent, it reflects the OSI model of a Managed System, and the major software interfaces correspond to those of the OSI model. For example, the external CMIS interface specified in [6] and the internal "object boundary" interface outlined in [7] are both represented by software interfaces. However, the OSI model was never intended to be an implementation model and significant divergences have been made in order to arrive at a practical design. The GMS is implemented as a single Unix process, the implementation language being C++.
There are five major software components, each realised as a C++ object or set of objects: i) Real Resource Managed Objects ii) Management Control Managed Objects iii) Internal Communication Control Objects iv) The Coordinator v) The OSI Management Agent These are described in the sections below. 5. INTERNAL STRUCTURE AND FUNCTIONALITY 5.1. Managed Objects From the implementation point of view, two sorts of MO classes may be identified. The first are abstractions of "real resources" (Transport connections for example) which need to be managed; these we call Real Resource MOs (RRMO). The second relate to features of the management system itself and exist so as to allow the operation of the management system to be controlled via standard management operations; we call these Management Control MOs (MCMO). One example of a MCMO is an Event Report Forwarding Discriminator (ERFD). This contains information specifying which events should be reported and to where. Event reporting behaviour can be modified by changing this information through the use of CMIS M-SET operations. 5.1.1. Real Resource MOs Implementations of "Real Resource" MO classes (RRMO) may be considered to have two parts (see Figure 2): i) A part common to all RRMO classes. This is provided by the GMS and includes: - A C++ object class for a generic MO. This has methods corresponding to the MO boundary interface plus some additional ones to assist with maintenance. Specialised subclasses of this may be derived as required. - C++ object classes for commonly occurring attributes such as counters, gauges, thresholds and tide-marks. In fact, all the attributes in [8] are supported in this way. - A support environment to assist with and coordinate communication with the real resource. This environment is described more fully in Section 5.2. ii) A part specific to a particular RRMO. This must be tailored not only to the real resource type but also to the means the real resource uses to present management information.
These "resource specific" parts are not provided by the GMS and must be supplied by the individual implementors of RRMO classes. 5.1.2. Management Control MOs MCMO classes are common to all management systems, no matter what real resources are being managed. Hence, GMS provides implementations of MCMO classes in their entirety. At present, the only MCMO class provided is the ERFD one. ERFD objects may be created, destroyed and updated as a result of CMIS messages from a remote manager. 5.2. Communication with Real Resources We now consider the ways in which management information may be obtained from real resources. Real resources may reside in the operating system's kernel, on communications boards, in user-space processes or even at remote systems which are managed via proxy management. The information they contain may be accessed by reading the kernel's virtual memory, talking to a device driver, communicating with another user-space process using an IPC mechanism or - in the case of proxy management - with a remote system using a communications protocol. In general, communication needs to be two-way, as it should be possible to perform "intensive" management by setting management information values in the real resources. From the point of view of the GMS, information flow may be triggered: i) Asynchronously, as a result of some activity on the real resource. ii) By a timeout indicating that a real resource should be polled. iii) As a result of a CMIS request from a remote manager process. Each of these embodies a trade-off between the timeliness of the management information that the GMS can make available and its responsiveness. Used exclusively, iii) makes event reporting inflexible and implies the operation of a pure polling regime by the manager, such as is favoured for SNMP. 5.2.1.
Internal Communications Control In general, a GMS must support all the communications methods above, maintaining at the same time well-defined and uniform interfaces between the RRMOs and the rest of the system. It must be remembered that several RRMOs may be associated with a single real resource; ideally such a "family" of RRMOs should share a single communications path to the real resource. In order to achieve this, the notion of an Internal Communications Control (ICC) object is introduced. An ICC object coordinates the updates of a family of RRMOs that are realised in a similar fashion. ICC objects are repositories for information about the mode of communication to be employed; they initialise this communication and understand conventions such as the nature and structure of the messages exchanged, i.e. the protocol used. ICCs are created at system start-up time for each RRMO family that is to be managed. As there are no real resources associated with MCMOs, these do not have corresponding ICC objects. 5.2.2. The Coordinator Given that several real resources are being managed and that messages are also being sent and received across the CMIS interface, it can be seen that some organisation is necessary to ensure that incoming messages are delivered to the correct objects and that no object can do a blocking read, thus disabling the whole system. This is achieved by ensuring that all incoming messages are delivered first to a "Coordinator" object which then distributes them. When the first RRMO in a family is created (either as a result of a CMIS M-CREATE request or of some activity on the real resource), its ICC interacts with the Coordinator in order to register an endpoint of communication (typically a Berkeley socket) to the real resource through which asynchronous messages may be expected. An ICC may ask the Coordinator to call one of its methods at regular intervals so that it may poll the real resource.
Alternatively, it may ask that whenever data becomes available at the communication endpoint a method should be called. Typically, this method will read the incoming data and pass this to the correct RRMOs. The only case in which RRMOs interact directly with the real resources is when they set management information. 5.3. The OSI Agent The other major component of the OSI managed system model is the "OSI Agent". This too is represented by a C++ object and handles wild-card naming, scoping, filtering and (eventually) access control. The OSI Agent services the messages it is handed by the Coordinator. These may be either association establishment/release requests or CMIS operation requests. In the latter case it first performs access control functions and then synchronises the potentially multiple replies according to the scoping and filtering parameters. In order to perform CMIS requests, it interacts with the selected MOs to get, set, etc. management information. The OSI Agent may also receive event "notifications" from the RRMOs. According to the OSI management model, MOs issue notifications to the agent, which then checks with the event filtering information in the ERFDs to determine whether the notification should result in a CMIS M-EVENT-REPORT. Unfortunately, if an implementation follows this model it results in a great deal of wasted processing in the case that no remote manager is interested in the event in question. There are also some logical problems; for example, the filtering expression may reference the MO that issued the notification but this may, by now, have been destroyed. Within the GMS, ERFD filtering information is applied in advance and notifications are only issued by RRMOs if it is known that M-EVENT-REPORTs will result. Although we have implemented the full generality of the filtering mechanism specified in the draft standard [9], we can see that certain filters will be extremely expensive to process.
We expect that, in practice, only quite simple filtering expressions will be used. 6. METHODS As an aid to understanding the information flow within the GMS, we now summarise the methods applicable to the objects above. The C++ object method notation is used (somewhat loosely) to indicate a method named method being applied to an object with identifier obj: 6.1. The OSI Agent Three methods are used by the Coordinator to report incoming messages from the CMIS interface:

```cpp
assoc_id = agent.cmis_connect (connect_parameters)
agent.cmis_work (assoc_id)
agent.cmis_loss (assoc_id)
```

The first is used to notify a request for the establishment of a CMIS association, the second to indicate that a message has arrived on an existing association (including a disconnect request), and the third to indicate that an existing association has been abnormally released. A further method is used by the RRMOs to notify events:

```cpp
agent.notify (my_class, my_name, event_type, event_report_info, destination_address_list)
```

6.2. Managed Objects The OSI Agent interacts with the RRMOs and MCMOs to perform requested CMIS operations. The procedures and methods used are:

```cpp
result = Create (parent_MO, rdn, init_info)
result = mo.get (attribute_ids)
result = mo.set (attribute_id/value pairs)
result = mo.action (action_type, action_information)
result = mo.delete ()
```

Create is a static method which checks whether a create request is valid and, if it is, calls the constructor for the appropriate C++ class. The identity of the parent MO in the containment hierarchy and the Relative Distinguished Name (RDN) of the new MO are supplied as parameters. The four methods shown above embody the interface defined in [7].
Four other methods are provided to assist the OSI Agent in locating the required MO:

```cpp
target_mo = mo.find (name)
target_list = mo.scope (scope_info)
result = mo.filter (filter)
result = mo.check_class (my_class)
```

The first searches the subtree below `mo` for a MO called `name`, the second returns a list of MOs which are in `scope`, the third applies a filter to a MO and returns a boolean value, and the fourth checks that the class `my_class` is appropriate for the MO in question. An ICC may need to create or delete transient RRMOs and to refresh the RRMOs it controls with new management information according to activity in the real resources. RRMOs are created by the ICC calling the constructor directly. The methods used by ICCs are:

```cpp
mo.do_update (management_information)
mo.destructor ()
```

6.3. The Coordinator The ICCs tell the Coordinator to register or de-register endpoints of communication to the real resource and are subsequently informed of activity on these. They also tell the Coordinator to schedule and cancel periodic polling signals. The methods used are:

```cpp
coord.register_ccep (icc, ccep_id)
coord.deregister_ccep (icc, ccep_id)
coord.schedule_poll (icc, interval, MOclass)
coord.cancel_poll (icc, MOclass)
```

6.4. ICCs RRMOs in a family register and de-register themselves with their ICC as they are created and deleted:

```cpp
icc.register_object (mo, mo_class, rdn)
icc.deregister_object (mo)
```

Note that the first RRMO to register triggers the establishment of communication to the real resource. RRMOs may optionally request a special polling regime for a particular MO class and these requests are passed to the Coordinator via the ICC:

```cpp
icc.schedule_poll (interval, MOclass)
icc.cancel_poll (MOclass)
```

A RRMO may need to talk directly to the relevant real resource - for example, when the setting of some attribute value should rapidly be reflected in system operation.
In this case, the RRMO must ask for its communication end-point from the ICC:

```cpp
ccep_id = icc.get_ccep ()
```

The Coordinator needs to inform ICCs of the arrival of a message from a real resource or of the necessity to issue a poll. These two methods are used:

```cpp
icc.do_ccepread (ccep_id)
icc.do_poll (MOclass)
```

The object class parameter in the poll method is only used when the polling takes place for a single MO class within a family rather than for the family as a whole. 7. THE GMS IN USE - AN EXAMPLE The first RRMO to be implemented was a port of the ISODE Transport management functions from the Osimis system. An important test of the GMS structure was the ease with which this could be done. In the case of ISO Transport, we identified two MO classes: the T-Entity class and the T-Connection class. There may be one and only one static instance of the T-Entity class. This summarises activity for all incarnations of the protocol and contains information such as the number of current and previous connections, the amount of data transferred, and error counters. Instances of the T-Connection class are transient - existing only during the lifetime of a connection. They contain information such as creation time, source and destination TSAP addresses and traffic counters. These are subordinate to the T-Entity instance in the MIB containment hierarchy. ISODE Transport is implemented as a set of library routines that are linked with the applications; this means that it runs in user space - the "real resource" in this case is effectively a Unix process. The IPC method used to communicate management information is a UDP socket - communication is only possible from the real resource to the GMS at present. Implementation was straightforward. C++ object classes for the T-Entity and T-Connection RRMO classes were derived from the generic C++ MO class. Many of the additional attribute types required were instances of C++ classes already available within the GMS.
An ICC object class was written (again, derived from the generic C++ one). This registers a socket bound to a well-known port with the Coordinator. An ICC method (icc.do_ccepread ( )) is then called by the Coordinator each time a message arrives at the socket. Detailed operation is as follows (the numbers in the text are references to Figure 2). When the T-Entity RRMO is created (which happens either at initialisation time or through a CMIS M-CREATE request), it registers itself (icc.register_object ( )) with the "ISODE" ICC object (1). If it is the first ISODE-related RRMO to register, the ISODE ICC object initialises the UDP socket, so that it may be contacted by active ISODE processes. It also registers itself with the Coordinator (coord.register_ccep ( )) so that it will be notified in case of activity (2). After this, all T-Connection RRMOs may be created and these too will be registered with the ISODE ICC. When a message arrives at the UDP socket from an ISODE process (3), it is fielded by the Coordinator, which recognises the socket as being managed by the ISODE ICC and informs it (icc.do_ccepread ( )) (4). The ISODE ICC then passes the incoming information to the relevant RRMOs (5) (mo.do_update ( )). A CMIS M-GET request will also be fielded by the Coordinator (6) and, in this case, will be passed to the OSI Agent (7) (agent.cmis_work ( )), which will perform the required scoping and filtering tasks to select the target MOs. It then performs the requested operation (8) (mo.get ( )) on these. As a result of processing information received from an ISODE process, an RRMO may determine that a threshold has been exceeded and that a notification may be required. If the RRMO determines that an ERFD filter is set to forward such a notification, it informs the OSI Agent (9) (agent.notify ( )).
Finally, when an ISODE RRMO is deleted (which happens, for example, when a T-Connection closes or as a result of a CMIS M-DELETE request), it deregisters itself (icc.deregister_object ( )) from the ISODE ICC object. If it was the last RRMO, the UDP socket is closed and the Coordinator is notified accordingly, so that future protocol instances will not talk to the agent. 8. CURRENT STATUS AND FUTURE PLANS We have, at present, an implementation of the GMS as described. The only RRMOs supported so far are those related to the ISODE implementation of the ISO Transport protocol described above. The next step will be to add RRMOs for other real resources. One of the first of these will be the Berkeley Unix TCP/IP implementation. Although it might seem odd to manage a non-OSI protocol suite in this way, its extensive use in our environment means that it will exercise the GMS in a realistic way. The OSI management work is being undertaken as part of the ESPRIT project "PROOF" [1]. This project is building a connection-oriented Ethernet-ISDN gateway; UCL is also building a connectionless version. Both of these gateways will be managed by using the GMS as a proxy; ISDN, X.25 and Ethernet RRMOs will be needed. Another possibility we will look at is the implementation of RRMOs with SNMP back-ends. These would enable the GMS to operate as a proxy system for SNMP agents, enabling these to be managed by OSI management processes. The standards needed for this development were specified in [8]. An additional point to note is that the OSI management standards will allow the GMS to be used to provide a common management interface to a wide variety of different back-ends. Finally, future work will include the use of a managed-system specification language with a compiler to generate code for the generic parts of the RRMOs, to define MIB modules and to provide initialisation data for static MOs. In this way, only the "back-end" code for interacting with the real resources will need to be hand written.
We do not claim that this system provides all the answers for OSI management. The GMS was built as an experimental tool with flexibility rather than performance and compactness in mind. However, we do hope that it will give us some practical insights into the problems of OSI management which will be valuable in the future, and that our experience will be useful to others facing similar problems. 1. The PROOF partners are: SN (UK - Prime Contractor), SN (Germany), System Wizards (Italy), University College London (UK). REFERENCES
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows Karen M. McCann Maurice Yarrow Adrian DeVivo Piyush Mehrotra ABSTRACT With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what the scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine. INTRODUCTION NASA's Earth and Space scientists are finding that the raw data being acquired and downloaded by satellites and deep-space missions is accumulating on data archive systems much faster than it can be transformed and processed for scientific experiments. In addition, scientists lack the visual tools that might allow them to rapidly design the appropriate transformation sequences for their data. 
The requirements for such a tool include: rapid visual prototyping of transformation sequences, the designation of processing pathology handling, the ability to automatically apply designated workflows to the processing of large sets of raw data files, the collation and post-processing of results, the advance and repeated scheduling of processing sequences, and the ability to easily modify these data transformation sequences: to accommodate changes in the intent of experiments, and to rapidly test new hypotheses. To provide scientists with a capability that addresses these needs, in this task we are building a GUI-enhanced tool, ScyFlow, that will allow flow-chart-like assembly of data acquisition and transformation steps, and will then instantiate this processing. This tool will support all stages: the automated acquisition of raw data, the distributed processing of this data, and the final archiving of results. In this paper, after a brief overview of the overall architecture, we describe two of the modules, the visual editor and the runtime engine, in more detail. We present two usage examples to describe the two modules.
Computer Sciences Corporation, NASA Ames Research Center. Mail: MS T27A-1, NASA Ames Research Center, Moffett Field, CA 94035. Email: {mcann, yarrow, devivo, pm}@nas.nasa.gov
Motivating Scenarios In this section, we present two scenarios that represent typical problems faced by scientists who need to process large amounts of information, and/or execute large programs, in a distributed environment that might range from workstations to supercomputers. These two cases are modeled in a ScyFlow chart; see Figures 1 and 2 below. Scenario 1: Parametric Study: A CFD scientist wishes to vary the position of flaps on an aerospace vehicle geometry, and then apply a CFD flow solver code to the resulting geometry configurations for varying values of Reynolds number (aircraft speed) and alpha (angle of attack).
The scientist may wish to run this workflow several times, changing configurations and parameter values; also, the scientist may wish to introduce some sort of test that will modify the parameter values themselves in order to "zero-in" on parameter sets of interest, or eliminate certain configurations that fail to meet specified criteria. Note that this represents two levels of parameterization: i geometry configurations, followed by (for each configuration) j values of Reynolds number times k values of alpha, for a total of i + (i * j * k) runs: i runs at the first level, a meshing program that varies the flap positions, and i * j * k runs at the second level, a computation-intensive CFD flow solver to be run on a supercomputer system. In our example, let i = 3, j = 3 and k = 4, producing 36 flow-solver input data sets and 36 jobs, one for each input set (39 runs in total, counting the 3 meshing runs). Scenario 2: Processing of Mars data: A scientist working on Mars mission data needs to download HDF files (latitude and longitude based) as they become available on agency web sites. These files typically contain some kind of gathered spectral data in two dimensions: the first dimension is radar, light, or heat, etc., and the second dimension is a lat-long location for each of the first-dimension values. In general, there will be many small HDF files, and a significant proportion of these HDF files will have problems or errors: for example, misaligned "corners" of lat-long sections, non-numerical values in numerical fields, or errors in data values that can be discovered and eliminated by some mathematical method, etc. After downloading, the files need to be examined for errors, and then each file with a particular error will need to be processed by the appropriate "fixer" code.
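Returning to Scenario 1, the two-level parameterization amounts to a cross product of the parameter value lists. A minimal sketch (the parameter names and values are illustrative, not taken from the paper):

```python
from itertools import product

# Level 1: i flap-position configurations (meshing runs).
# Level 2: for each configuration, j Reynolds numbers x k angles of attack.
flap_positions = [10.0, 20.0, 30.0]   # i = 3 configurations (hypothetical values)
reynolds = [1e6, 5e6, 1e7]            # j = 3 values
alpha = [0.0, 2.0, 4.0, 6.0]          # k = 4 values

mesh_runs = [("mesh", f) for f in flap_positions]
solver_runs = [("solve", f, reyn, a)
               for f, reyn, a in product(flap_positions, reynolds, alpha)]

print(len(mesh_runs))                          # i = 3 meshing runs
print(len(solver_runs))                        # i * j * k = 36 flow-solver jobs
print(len(mesh_runs) + len(solver_runs))       # total runs: i + i*j*k = 39
```

Each element of `solver_runs` corresponds to one independent job in the parametric study.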
Then the corrected files are "stitched" together into a single large file of corrected spectral data, which is then subjected to a compute-intensive spectral processing method. Typically, the results of the processing will be displayed as contours, or color-mapping, overlaid on a map of the area of interest, which is delimited by the composited set of lat-long values. Overall Architecture We are developing ScyFlow as part of a larger set of component applications. In order to allow our component applications to "talk" to each other, we have created a "container" application called ScyGate that manages and coordinates the interactions between the components. Initially, these contained applications will be: - ScyFlow: workflow editor, generator and execution engine (the focus of this paper); - ScyLab: parameter study capability (next generation of ILab[1], with some additions so that it can be used by ScyFlow); - UniT: (Universal File Translator) program that generates translation code for transforming one data format into another data format; - FileTester: Perl/Tk utility that generates a Perl program that will parse any given ASCII data file and return designated values; can be used for condition testing within a workflow. - ScySRB tool: gets SRB files into Experiments and WorkFlows, and puts output files from Experiments and WorkFlows back into the SRB. - Any additional modules: our architecture will allow additional component applications to be incorporated into the system at both an icon-based (data configuration) level and a programming-based level (use of a separate Registry API). A ScyGate local server runs in the background and component applications open sockets to this server. Shared data is kept in a “Registry” (which is both a data tree object and a data file) and is managed by the background server.
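The Registry idea, a shared data tree that component applications read and write, can be sketched as follows. This is a hypothetical in-process illustration, not the actual ScyGate code; the real Registry is managed by a background server over sockets, and all names here are assumptions:

```python
# Minimal sketch of a Registry: a shared data tree keyed by
# slash-separated paths, as component applications might use it.
class Registry:
    def __init__(self):
        self._tree = {}

    def put(self, path, value):
        """Store `value` at a path such as 'ScyFlow/version'."""
        node = self._tree
        *parents, leaf = path.split("/")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value

    def get(self, path):
        node = self._tree
        for part in path.split("/"):
            node = node[part]
        return node

reg = Registry()
reg.put("ScyFlow/version", "0.9")   # version info is one thing the Registry holds
reg.put("ScyLab/version", "2.1")
print(reg.get("ScyFlow/version"))   # -> 0.9
```

Keeping version information in one shared tree is what lets the framework handle updating of individual codes, as described above.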
Our aim in creating this application environment is to provide a framework for the deployment of several related applications, and to extend this framework in two ways: first, allow users to attach their own icons to the ScyGate “icon corral” so that they can be launched from ScyGate (and, possibly, passed as input some file paths known to ScyGate); second, to allow developers to write code that attaches to the Registry server, so that the code produced by these developers can also “talk” to other ScyGate applications. The framework also makes it easy for us to handle the updating of individual codes, since version information is one of the things handled by the Registry. Architecture of ScyFlow ScyFlow is an environment that supports the specification, execution and monitoring of workflows. It consists of two major components: - ScyFlowVis provides the visual interface for specifying workflows. It also translates the visual flow graph into internal formats (XML, dependency list, pseudo-code) for storing and for communicating with the other components. The GUI can also be used as a visual interface to monitor the execution of workflows. - ScyFlowEngine provides the set of services required for the overall execution of the workflow across distributed grid resources, taking into account the specified control and data dependencies. A set of APIs will allow applications other than ScyFlowVis to also connect to the ScyFlowEngine in order to initiate and monitor executions of workflows. Definitions of ScyFlow Terminology Below we define some terms that we use for data objects in the context of ScyFlow. **Process:** Basic unit of execution in any workflow; can be any type of script, or any type of compiled file, or a single command. The Process Object contains the data necessary to execute any given process - the type of Process, paths to necessary files and any other required data: command line arguments, special queue directives, etc.
**Experiment:** A “railroad” sequence of processes (no control structures) that is to be executed sequentially. Any process in an Experiment can be parameterized by a sequence of input values. The cross product of the input values gives rise to a set of independent input data sets that can be executed independently, essentially as “job” units of an experiment in a parametric study. The Experiment Object contains pointers to the constituent Process objects, along with information about the parameterized inputs such as the input files, the locations within the files and the values to be used for these locations. For the purpose of explaining ScyFlow, an Experiment and a Process can be regarded as interchangeable, since an Experiment is just a set of Processes; we use the term "executable" to interchangeably refer to an Experiment or a Process. **Workflow:** A set of Experiments and/or Processes along with any control structures and data dependencies necessary for executing the subtasks. Workflow objects are really “container” objects since they point to other Experiment and Process objects, but they also contain control flow information in the form of specific flow structures. All executable data, and control flow data, is represented as vertices in a directed graph, where each vertex is either an executable, or a control structure (SplitPath, Join, Cycle, or Pause). **Job:** A Job object represents a single input data set, attached to an associated workflow graph or Experiment object. Thus, an Experiment will result in multiple Jobs, one for each execution of the parameterized data set. A Workflow will have at least one Job, and each Job represents one traversal through the Workflow directed graph.
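The object definitions above can be summarized in a small data model. This is an illustrative sketch only; the class and field names are assumptions, not ScyFlow's actual objects:

```python
# Hypothetical model of a Workflow as a directed graph whose vertices
# are either executables (Experiments/Processes) or control structures.
from dataclasses import dataclass, field

CONTROL_KINDS = {"SplitPath", "Join", "Cycle", "Pause"}

@dataclass
class Vertex:
    name: str
    kind: str                      # "Executable" or one of CONTROL_KINDS
    successors: list = field(default_factory=list)

@dataclass
class Workflow:
    vertices: dict = field(default_factory=dict)

    def add(self, name, kind, after=None):
        v = Vertex(name, kind)
        self.vertices[name] = v
        if after is not None:
            # Adding B after A implies B depends on A.
            self.vertices[after].successors.append(name)
        return v

# Backbone of the Mars scenario, sketched as a graph:
wf = Workflow()
wf.add("WebScraper", "Executable")
wf.add("Triage", "SplitPath", after="WebScraper")
wf.add("FixerA", "Executable", after="Triage")
wf.add("FixerB", "Executable", after="Triage")
wf.add("Merge", "Join", after="FixerA")
wf.vertices["FixerB"].successors.append("Merge")
print(wf.vertices["Triage"].successors)   # both fixer branches
```

A Job would then be one traversal of this graph for one input data set.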
A Workflow that has no Experiments (parameterized processes) will give rise to one Job in which (some sub-set of) Processes will be executed only once; a Workflow with Experiments will have many Jobs, and each Job will execute (some sub-set of) Workflow executable vertices once. (Note that in each case the sub-set of executed Experiments or Processes may include all Workflow vertices.) IMPORTANT: an Experiment object contains data flow information, i.e., information concerning parameterization, whereas a Process does not. ScyFlow’s data flow specification, as far as parameterization goes, takes place at the Experiment level. However, SplitPath and Join control structures at the ScyFlow level can cause data sets (jobs) to be multiplied or joined; when the workflow is executing, the ScyFlow execution manager keeps track of the data set totals, applying control flow variants to parameterized set specifications, in a manner completely transparent to users. The ScyFlow monitor display will feature data set totals, since these are determined at run time if any control structures are present in the directed graph. **SCYFLOW VISUAL INTERFACE: ScyFlowVis** The ScyFlow system provides a visual interface for specifying the graph that represents the WorkFlow. Along with manipulating such graphs (e.g., creating, storing, modifying), a companion interface allows users to utilize the workflow graph to monitor the progress of the execution of the workflow. In this section we focus on the specification interface, providing details of the types of workflows that can be specified within ScyFlow. At the top level, the directed graph in ScyFlow has been designed to model the flow of operations only; data specification appears within the context of the Processes and Experiments (see below.)
There are only 5 types of vertex, one for executables (Experiments or Processes), and 4 for control operations: SplitPath, Join, Cycle, and Pause; these are represented within the directed graph display by different icons. There are no vertices representing data; every vertex represents an operation upon data. Arrows indicate the flow of operations. **Data Dependencies** In order to minimize user input, data dependencies are handled in the following ways. First, data dependencies between Processes are specified within ScyLab merely by the order in which Processes are entered (this order can be easily modified by the user). If Process B is entered after Process A, ScyLab code “assumes” that B is dependent on some files output by A, and the execution code handles output files from A accordingly. A similar model is followed by ScyFlow; whenever the user adds a vertex to the directed graph, that vertex is either the first vertex (no dependencies), or is being added to a pre-existing vertex. For example, if the user adds vertex B to vertex A, ScyFlow also assumes that B is dependent upon A, and that A must be executed before B can be executed. Second, between executables in a Workflow, at the ScyLab level the user must mark certain Process-specific files as “Input to Next Experiment ...”. This information is used by the ScyFlow execution manager to assure that essential files are correctly copied and/or archived in the sequence of directed graph specified executions. For the user’s convenience ScyFlow will include a data-dependency modeling display that will allow the user to easily view and edit the data dependencies between portions of a Workflow; ScyLab will include a similar feature. The APIs for both ScyLab and ScyFlow will include functions that will return or change this information. **Control Vertices in ScyFlow** The small number of types of vertices in ScyFlow was achieved by making the control vertex types represent sets of similar operations.
For instance, there is only one Cycle vertex, but the Cycle Properties contain options which can be set to execute different kinds of loops: counter loops, “while” loops, and “do ... while” (or “do ... until”) loops. A SplitPath vertex has similar options. It can have no test at all: this is an “AND” condition that indicates that the output from the previous executable should be used as input for ALL the following executables. This option multiplies the Jobs (data sets) and may be set to execute simultaneously on different systems. Secondly, the SplitPath can have a test based on either the return value of a program, or the existence or non-existence of a file; this is an “OR” condition. The return value of the specified OR condition test is then used to determine the following path, which will be the only path (out of all the possible paths) that gets executed for any given data set. Note that an “AND” SplitPath introduced at any point in a workflow – including the beginning of a Workflow - can also have the effect of starting up separate simultaneous path(s) of execution at that point. The Join vertex is simpler: there are only two types of Join (1) an “AND” Join, where execution at the Join must wait for input from all previous paths in order to continue; (2) an “OR” Join, where input from any previous path can be passed to the next executable, with no waiting. (Note in future we will probably add some Join options concerning file staging, depending on user requirements; for example, the user may wish to copy/move/mark sets of files from previous executables in some way required by the executable following the Join vertex.) The Pause vertex is procedural only; it is intended to provide for cases where some data examination has to be performed before continuing the execution of the Workflow, or to provide for pathological cases, etc.
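The SplitPath semantics described above can be sketched in a few lines. This is a hedged illustration of the two test kinds (program return value, file existence), not the engine's actual implementation; all function and branch names are hypothetical:

```python
import os
import subprocess

def split_path_or(test_kind, test_arg, branches):
    """'OR' SplitPath: the test result selects exactly one outgoing path."""
    if test_kind == "return_value":
        # The test program's exit code indexes the outgoing paths.
        code = subprocess.run(test_arg, shell=True).returncode
        return branches[code]
    elif test_kind == "file_exists":
        return branches[0] if os.path.exists(test_arg) else branches[1]
    raise ValueError(test_kind)

def split_path_and(branches):
    """'AND' SplitPath: no test; every branch receives the data set,
    so the Jobs (data sets) are multiplied."""
    return list(branches)

chosen = split_path_or("file_exists", "/nonexistent/file",
                       ["fixer_a", "fixer_b"])
print(chosen)            # "fixer_b", since the file does not exist
print(split_path_and(["path_1", "path_2"]))
```

For any given data set, the "OR" form returns a single branch, while the "AND" form returns all of them.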
Pause options include email, beep, etc.; a special Pause pop-up display will be provided so that users can examine paused data sets (Jobs) and mark these for discarding or continuing through the workflow. In ScyFlow, SplitPath and Join vertices are always labeled with their types: “OR” or “AND”, appearing in different colors. This disambiguates the meaning of these vertices, as far as control flow is concerned, in any given display. An alternative to the “OR” and “AND” labels might have been to have different types of vertices, drawn in different shapes, representing the varieties of Split and Join operations; but, this would have made the display more confusing, and would probably have required (in order to carry out such a design decision to its “logical” conclusion) many more different shapes and types of icons in order to distinguish various operations. **The Data Train Analogy** The closest “real-life” analogy to ScyFlow’s modeling notation is to compare the Workflow transit of the directed graph to the path a train would take through a series of railroad switches, while visiting several destinations before the end of its journey. Each separate train is a Job (or data set); each stop is an executable where data is unloaded and processed, and then different data is loaded back onto the train. SplitPath vertices are switches to several other tracks; Join vertices are switches that join two or more sets of tracks. Some options set by ScyFlow users can cause one train to get cloned (copied) into two or more trains; and, one or more trains can “wait” at Join vertices until other trains arrive, so that all Join-specified trains can merge back into one train. In the simplest SplitPath case, with only OR conditions in any SplitPath vertices, a train sets out on a journey but its exact path and destination are not determined until each SplitPath test is done and the train is sent down one path instead of another.
(Note that this presumes an exclusive OR only; ScyFlow does not currently model an inclusive OR, but this will be added.) **Design Motivation** The motivation behind our design is as follows: it was important to create a notation system that was both easy to understand, and at the same time flexible and powerful enough to model complex workflows. As with most (if not all) directed graph problems, granularity is a very important issue; the ScyFlow design would not be successful if the ScyFlow workflow charts are ambiguous or too difficult to create and understand. The problem is that any representation of a complex real-life workflow problem will expand to be so visually complex as to be unsuitable for use and to obscure visual "cues" about structure. Even simple workflows can be very difficult to model. In terms of programming implementation this is an ongoing problem, made even worse by the graphical difficulties of creating a display where two or more vertices do not overlap each other. Our aim was to create a more visually intuitive display by simplifying the workflow representation: we have no data vertices and the number of vertex types has been minimized. We make the assumption that users will impose their own complexity on any given problem; it is important to keep the basic design as simple as possible in order to accommodate this. The "collapse/expand" paradigm for a graphical interface needs to be used as much as possible, so that detail can be hidden in order to avoid a very large and overly complex display. ScyFlow implements this by allowing an executable vertex to "expand" into a ScyLab Experiment which can be edited/created separately in ScyLab, then collapsed back into a ScyFlow executable vertex. **ScyFlow Representations of the User Scenarios** We now describe how the two scenarios presented earlier would be represented using the ScyFlow notation.
Figure 1: ScyFlow CFD user scenario Figure 1 shows the ScyFlow workflow representation of the first user scenario, multiple levels of parameterization. Note that this workflow is a "railroad", that is, a straight line: this is because there are no control structures in this workflow; no decisions are made at run time, and all possible data sets (Jobs) will be executed since there will be no tests executed to eliminate any of them. The first executable, labeled “Flaps”, would produce 3 geometries with varying flap positions; since this is not compute-intensive, the user might run it locally, perhaps on a workstation. The following “Pre-Processing” might involve fixes and/or changes to the geometry grids; also local. The FlowSolver would probably be executed on a supercomputer; and the post-processing and graphics might run on some set of graphical systems. Figure 2: ScyFlow Mars data user scenario Figure 2 illustrates the second user scenario. The user would construct a ScyFlow Workflow where the first executable is a “web scraper” that will automatically download files from a site according to some constraints (e.g., all new files since a certain date, all files in some directory, etc.). (Note that the ScyGate suite of utilities will contain a program that will generate this sort of program or script, either in Perl or in Korn shell.) The second vertex in the user’s workflow would be a SplitPath control structure vertex, that will run a program that will apply several tests to each downloaded file, and return certain values indicating the appropriate “fixer” program that should be run on each file. The SplitPath vertex will have several branches, one for each fixer program. Then, all paths leaving the SplitPath vertex will be followed by a Join control vertex with an “OR” condition, indicating that any data set processed by any of the paths following the SplitPath vertex can then be passed to the next processing node following the Join control vertex.
Following the Join vertex, the user would add an executable that would take all “fixed” files as input, and use them to composite a large file. Adding another SplitPath test would determine whether the composition had accumulated a sufficient number of files to continue to the next executable. Once this test succeeds, the next executable would apply a compute-intensive spectral decomposition operation to the large data file, presumably on some available supercomputer resource. Finally, a graphics post-processor executable would get the results of the spectral program, and output graphics files for display. Each stage of this Workflow might or might not be accompanied by archiving instructions which will be performed at run-time by the Workflow execution manager. Note that the second SplitPath test has only one path, but there is an “implied” first path which is to do nothing unless the SplitPath test succeeds. The first "WebScraper" executable would continue to send data sets down the workflow, but the final spectral and graphics executables will not be executed until the final test succeeds. **GUI vs. API Operation of ScyFlow and ScyLab: Automated Scripting** Both ScyFlow and ScyLab will provide a data-file based API, plus the ability to generate this file from any GUI-constructed Workflow or Experiment object. This gives programmers/users the ability to "script" any given execution of a Workflow or Experiment, since the generated data file – which will be used as input for the API - can be parsed, and re-run, by other programming modules. **ScyFlow Runtime System: ScyFlowEngine** The workflow execution engine consists of two components. The first of these is the dependency handler, which expands compiled dependency data.
The second component of the execution engine is the server/sub-server architecture, which spawns the work indicated by the directed graph and subsequently monitors the progress of jobs and determines which dependent work may be run from the completion-status information of finished dependencies. Figure 3: Operation of the Workflow Execution Manager. The workflow execution engine is a server/sub-server architecture. It utilizes socket communications for the transfer of job instructions from the server to the sub-servers and for transfer of job completion information from the sub-servers to the server. The sub-servers are basically "dumb" and do no dependency analysis or runtime decisions. The server contains the directed-graph compiler, which produces a dependency list. This list is used to determine which components of the user work may be processed (dependents) once any prior component (a dependency) has been successfully completed. Note that currently, control flow capability has been developed and is working, though both control and data flow specification are done in the GUI graph editor. Thus each graph traversal implies a single path of execution of user work through the graph. The server also contains the apparatus which spawns sub-servers onto user- or resource-broker-designated compute machines. The spawning of a process can be accomplished by either ssh, gsissh, or the NAS JobManager. Once a sub-server has been spawned onto a designated resource, the sub-server will thereupon open a connection to the server’s listener-socket and request from it the instructions to be performed. The server is based on a multiplexing architecture and can simultaneously handle any number of user jobs flowing through any number of graphs whose vertices can be partitioned onto any number of remote resources.
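The server's dependency bookkeeping, deciding which vertices may run once their dependencies have completed, can be sketched as follows. The data structures and vertex names are assumptions for illustration, not the actual ScyFlowEngine code:

```python
# Given a compiled dependency list (vertex -> set of prerequisites),
# find the vertices that are runnable once `completed` have finished.
def runnable(dependencies, completed):
    return {v for v, deps in dependencies.items()
            if v not in completed and deps <= completed}

# Dependency list for the backbone of the Mars scenario:
deps = {
    "scrape": set(),
    "fix": {"scrape"},
    "composite": {"fix"},
    "spectral": {"composite"},
    "graphics": {"spectral"},
}

done = set()
order = []
while len(done) < len(deps):
    ready = runnable(deps, done)
    order.extend(sorted(ready))   # the server would spawn sub-servers here
    done |= ready                 # sub-servers report completion back
print(order)
```

For this straight-line graph the loop yields the single valid execution order `['scrape', 'fix', 'composite', 'spectral', 'graphics']`; with branches, each pass could return several runnable vertices at once.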
At this time, each user job is serviced by one sub-server, but our architecture will permit a single sub-server to service multiple jobs running on its particular resource. Because the sub-servers communicate back to the server immediately with any vertex completion statuses, monitoring of job status is real-time. Here is a detailed look at the life cycle of a workflow graph in our server/sub-server architecture. Once the graph has been constructed (either by the GUI or API) and user-process attributes given appropriate values, the GUI/API will open a connection to the server listener-socket and pass to it a serialized graph object or graph file-package location. The server then compiles the graph and produces a dependency list for the graph vertices. Then, the first dependent with no dependencies is chosen and a "HostServer" is spawned onto the remote machine indicated for this vertex (chosen either by user designation or resource-broker). The HostServer opens a connection to the server, identifies itself by JobName and requests work to be performed. The server passes to it instruction(s) to perform. The sub-server chooses an appropriate "job handler" to perform the work and thereupon forks and execs this job handler, thereby allowing the sub-server to avoid blocking while the work is being performed. The sub-server opens its own listener-socket loop and waits for either completion status from its job handler or for possible shutdown/discontinue requests from the server. When the job handler has completed the user work and notified the sub-server, the sub-server exits its listener loop and reports completion status for this job to the server. The server updates its job/vertex data object with completion or failure status for the vertex, and then determines, using the dependency list, which next dependent task may be run, if any.
If the next valid dependent vertex is on the same machine as last, the reporting sub-server is given this work, otherwise the sub-server is given shutdown instructions, and a new sub-server is spawned onto the resource selected for this next vertex. Subsequently, this new sub-server will open a connection to the server and obtain the vertex work, etc. Related Work Some related projects are: SCIRun, developed at the University of Utah and used for a variety of applications including molecular biology [2]; Cantata / Khoros, commercially developed in New Mexico and used for image processing [3]; UNICORE, developed in Germany as a generic workflow package [4]; and DAGMan, part of the Condor project, developed by Miron Livny at the University of Wisconsin-Madison [5]. Conclusions The new ScyFlow system presents two innovations: a new design for workflow modeling, and the ability to handle both control flow and data flow within any given workflow. This powerful and flexible system provides generic capability for creating and running workflows, without being domain-specific; in addition, ScyFlow requires absolutely no “instrumentation” of any participating codes, that is, users can run ScyFlow without doing any programming whatsoever. Our object-oriented design restores state for workflow data sets, since WorkFlow objects are easily serialized to/from files, and transported over sockets. ScyFlow’s “scripting” ability (the API, Application Programming Interface) is not only convenient for users – since WorkFlow objects can be “dumped” to a script file and used to re-run the workflow – but is also usable by external programming modules for deployment in many distributed environments. References [2] software.sci.utah.edu/scirun.html
Some Useful Information 1. \[ \sum_{i=m}^{n} c \cdot f(i) = c \sum_{i=m}^{n} f(i) \quad c \text{ a constant} \] 2. \[ \sum_{i=m}^{n} (f(i) + g(i)) = \sum_{i=m}^{n} f(i) + \sum_{i=m}^{n} g(i) \] 3. \[ \sum_{i=m}^{n} f(i) = \sum_{i=1}^{n} f(i) - \sum_{i=1}^{m-1} f(i) \quad \text{assuming } m \leq n \] 4. \[ \sum_{i=m}^{n} c = c(n-m+1) \quad m \leq n, \; c \text{ a constant} \] 5. \[ \sum_{i=1}^{n} 1 = n \] 6. \[ \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \frac{n^2 + n}{2} \] 7. \[ \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6} \] 8. \[ \sum_{i=1}^{n} i^k = \frac{n^{k+1}}{k+1} + (\text{lower powers of } n) \] 9. \[ \sum_{i=0}^{n} c^i = \frac{c^{n+1} - 1}{c-1} \quad \text{if } c \neq 1 \] 10. \[ \sum_{i=0}^{n} \frac{1}{c^i} = \frac{c^{n+1} - 1}{c^n (c-1)} \quad \text{if } c \neq 1; \text{ if } c = 1, \text{ use (5)} \] (note: this is really the same as (9) with c replaced by 1/c) 11. \[ \sum_{i=1}^{n} \log(i) = \log(n!) \] remember: \( n! = 1 \cdot 2 \cdot 3 \cdots n \), and the log may be to any base; \[ \log(ab) = \log(a) + \log(b) \quad \log(a^b) = b \log(a) \quad \log_a(a) = 1 \quad \log(1) = 0 \] 12. \[ \sum_{i=1}^{n} \frac{1}{i} \quad \text{is called the "harmonic series" and is often denoted by } H_n = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}. \] It has no known closed-form solution, but \( H_n \approx \log(n) \) for large \( n \). In terms of log base 2, it can be shown that \( \frac{\log_2 n}{2} < H_n \leq \log_2 n + 1 \). In terms of natural logarithms, \( \ln(n) < H_n < \ln(n) + 1 \). 13. The number of ways to select \( m \) objects from a set of \( n \) objects, without replacement and without regard to order, is denoted by \( C_n^m \) or \( \binom{n}{m} \) and has the value \( \frac{n!}{m!(n-m)!} \); remember \( 0! = 1 \). 14. \[ \binom{n}{m} = \binom{n}{n-m} \] 15. \[ \sum_{k=0}^{n} \binom{n}{k} = 2^n \] 16. If \( A \) is a set of \( n \) objects, the "power set of \( A \)" is denoted \( p(A) \) and is the collection of all subsets of \( A \). Then \( p(A) \) has \( 2^n \) elements.
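Several of the closed forms above can be spot-checked numerically. A quick sketch (the choice of n and c is arbitrary):

```python
import math

n = 20
# (6) sum of the first n integers
assert sum(range(1, n + 1)) == n * (n + 1) // 2
# (7) sum of the first n squares
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# (9) geometric series, c != 1
c = 3
assert sum(c ** i for i in range(0, n + 1)) == (c ** (n + 1) - 1) // (c - 1)

# (12) harmonic series bounds
H = sum(1 / i for i in range(1, n + 1))
assert math.log(n) < H < math.log(n) + 1
assert math.log2(n) / 2 < H <= math.log2(n) + 1
print("all identities check out for n =", n)
```

Such finite checks do not prove the identities, of course; that is what the proof techniques below are for.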
Notation: if \( A \) has \( n \) elements then we write \( |A| = n \). 17. \[ \sum_{k=1}^{n} f(k) = \sum_{k=0}^{n-1} f(k+1) \] 18. \[ \sum_{k=1}^{n} f(k) = \sum_{k=2}^{n+1} f(k-1) \] An "axiom" is an accepted true statement; i.e., a statement that needs no proof of validity. A "lemma" is usually a statement that needs a proof of its validity; often, though, it is rather simple to prove, and its usefulness is usually that it is used as a step in the proof of a more complex statement. A "theorem" is a more complex statement; its proof is usually the thing you are trying to establish, or bears directly on the problem you are trying to solve. A "corollary" is usually a straightforward consequence of a theorem, possibly a special case, and is of some interest in itself but may not really be related to the problem you are trying to solve. Note: these definitions are not definitive in the sense that one person's corollary might be another person's theorem, etc. Also note that definitions are often thought of as axioms. Common "Proof" Techniques The object is to "prove" the validity of some statement \( S \). (Sometimes you may want to "disprove" the validity of \( S \), but that is the same as "proving" the validity of not \( S \).) A proof that statement \( S \) is true consists of using a set of axioms, previously proven lemmas, theorems and/or corollaries, and the rules of logic to arrive at the conclusion that \( S \) is unequivocally and undeniably true. A "proof" is invalid (really a contradiction in terms, since an invalid proof is not a proof) when either (1) an assumed axiom is not really an axiom, (2) an assumed theorem is not really true (i.e., its "proof" is really invalid), or (3) a rule of logic was improperly used or applied in the steps leading to the conclusion.
To prove \( S \) is true we are really trying to prove that the statement "If \( A \) then \( S \)" is true, where \( A \) is the set of all relevant axioms and previously proven theorems (and lemmas and corollaries). The validity of this statement is equivalent to the validity of \( S \). The trick is to find the statements needed in the set \( A \) and the sequence of logic rules. There are several ways to go about doing this. Three of the most common and most useful are (1) direct proofs, (2) contradiction proofs, and (3) induction proofs.

(1) Direct Proofs. The idea here is to start with known definitions, axioms and proven theorems, and then to try to prove a sequence of new results (lemmas) which lead closer and closer to statement \( S \), until finally you have built enough "tools" to draw the conclusion that \( S \) must be true. For example:
- If \( A \) then \( L_1 \)
- If \( A \cup L_1 \) then \( L_2 \)
- If \( A \cup L_1 \cup L_2 \) then \( L_3 \)
- \( \ldots \)
- If \( A \cup L_1 \cup \cdots \cup L_k \) then \( S \)

The trick is to determine what each \( L_i \) statement should be and to prove the corresponding "If \( A \cup L_1 \cup \cdots \cup L_{i-1} \) then \( L_i \)" statement. (Notice: this is a classic case of divide-and-conquer applied to proving theorems.)

(2) Contradiction Proofs. To prove "If \( A \) then \( S \)" true is equivalent to proving "If not \( S \) then not \( A \)" is true. Notice: this is not proving "not \( A \)" is true; it is proving the statement "If not \( S \) then not \( A \)" is true. When this statement is proven, then, since we know not \( A \) is always false (since \( A \) is always true), we can conclude that not \( S \) must always be false, i.e., \( S \) must always be true. So, how do we go about proving "If not \( S \) then not \( A \)"? We do it by assuming not \( S \) as if it were a previously proven theorem and then, by, e.g., the techniques in (1), trying to conclude that some statement in \( A \) must be false. In essence the techniques used in a direct proof and a contradiction proof are identical, and usually it can be shown that if there is a direct proof there is also a contradiction proof and vice-versa.
So, why use one over the other? Sometimes one way just seems to be clearer, easier or more straightforward than the other. The proofs of some theorems just seem to lend themselves to contradiction rather than a direct approach (and vice-versa). Sometimes it's simply a personal preference of the theorem prover.

(3) Induction Proofs. Proving the validity of a statement \( S \) by induction is probably the most misunderstood of all the proof techniques, yet it is also probably the easiest and most useful of the proof techniques for computer science. There are several variants and versions of the induction process; what I'll describe here is the most common. Induction is not applicable in all cases. Usually it is used to establish the validity of statements \( S \) where \( S \) claims some property holds for some function \( f \), where \( f \) is a function of an integer variable \( n \), e.g., possibly

S: \( f(n) \geq c \) for all \( n \geq 5 \),

where \( c \) is a constant, say. (\( c \) could also be variable and itself a function of \( n \).) The idea is the following and requires two steps. (1) First you try to establish some relationship between \( f(i-1) \) and \( f(i) \) that holds for all integers \( i \) (at least for those \( i \geq 5 \)). This might be given to you or it may be something you have to derive and prove holds. This relationship must be strong enough for you to make the following conclusion (which also must be proven): "If the property were to hold at \( i-1 \), then the relationship allows you to prove that it would also hold at \( i \)." For example, with \( S \) as above,

S: \( f(n) \geq c \) for all \( n \geq 5 \),

suppose you are given, or can prove, that \( f(i-1) + 1 = f(i) \) (a relationship) for all integers \( i \). Now, if the property were to hold at \( i-1 \), i.e., \( f(i-1) \geq c \), then the relationship \( f(i) = f(i-1) + 1 \) allows us to prove that it also holds at \( i \), i.e., \( f(i) \geq c \).
So, what we have proven is that if it holds at some \( i \) then it holds at all integers greater than \( i \), since transitivity applies here, i.e.,
\[ \text{if } f(i) \geq c \text{ then } f(i+1) \geq c, \; f(i+2) \geq c, \ldots \]
There is one last step. (2) You must show it does indeed hold for some integer \( i \), and determine what that \( i \) is (in this case we want to make sure \( i \leq 5 \)). So what you usually do is pick some value of \( i \leq 5 \) and try to prove, for that specific \( i \), that \( f(i) \geq c \). That's it. Most discussions of induction give step (2) first and call it the "basis" step. In many ways it makes more sense to me to make it the second step. An old analogy is proving you can climb to the top of a ladder (assuming no loss of energy or fear of heights). You must prove two things:

1. No matter where you are on the ladder, say rung \( i-1 \), you can make it up one more rung, rung \( i \).
2. You can get on the ladder in the first place.

Notice: these two steps are sufficient to prove you can get to the top. They are also both necessary: if you haven't proven (1) you don't know you can even get to the rung above the one where you got on, and if you haven't proven (2) you surely can't guarantee you can get to the top, because you haven't guaranteed you can even get on the ladder.

Consider the following statement: "For any simple graph \( G \) with \( p \) nodes and \( e \) edges we must have \( e \leq \frac{p(p-1)}{2} \)." We may assume the nodes are labeled \( 1, 2, \ldots, p \). One axiom we have is a definition: a simple graph has at most one edge between any pair of nodes and no edge from a node back to itself.

A Direct Proof: The "degree" of node \( i \), \( d_i \), is the number of edges incident to node \( i \). By looking at each node in turn and counting the number of edges coming into each node, we will count \( d_1 + d_2 + \cdots + d_p \) edges.
Since each edge "comes into" exactly two different nodes, we will have counted each edge twice. Thus
\[ 2e = \sum_{i=1}^{p} d_i \quad \text{or} \quad e = \frac{1}{2} \sum_{i=1}^{p} d_i. \]
Since no node can have more than one edge from each of the other \( p-1 \) nodes, we know \( d_i \leq p-1 \) for each \( i \). Therefore
\[ e = \frac{1}{2} \sum_{i=1}^{p} d_i \leq \frac{1}{2} \sum_{i=1}^{p} (p-1) = \frac{p(p-1)}{2}, \]
and we have proven the result. \( \Box \)

A Second Direct Proof: There are \( p \) nodes and each edge goes between exactly two of them. So the maximum number of ways you could place an edge in the graph is the number of ways you can select two nodes from a set of \( p \) nodes, i.e., \( \binom{p}{2} \). Therefore
\[ e \leq \binom{p}{2} = \frac{p!}{2!(p-2)!} = \frac{p(p-1)}{2}. \]
And again we have proven the result. \( \Box \)

A Proof by Contradiction: We will prove the statement "if \( e > \frac{p(p-1)}{2} \) then some axiom or known theorem must be false." We assume the nodes are labeled \( 1, 2, \ldots, p \). For each edge make a copy of that edge. Label the first copy with the same label as one of the nodes it is adjacent to, and label the second copy with the label of the other node. We now have \( 2e > p(p-1) \) labeled edge copies. Notice that label \( i \) must occur exactly \( d_i \) times, thus \( 2e = d_1 + d_2 + \cdots + d_p \) (you've seen that before). Therefore
\[ d_1 + d_2 + \cdots + d_p > p(p-1) \quad \text{or} \quad \frac{d_1 + d_2 + \cdots + d_p}{p} > p-1. \]
The term on the left is the average degree of the \( p \) nodes, so this says the average degree is greater than \( p-1 \). But that cannot be true, since we know the maximum degree is \( \leq p-1 \). Since our argument is valid and we have reached a contradiction with a known result, we can conclude that \( e > \frac{p(p-1)}{2} \) must be false, i.e., \( e \leq \frac{p(p-1)}{2} \) must be true. The theorem is again proven. \( \Box \)

Note this proof is very much like the "reverse" of our first direct proof.
This possibility was mentioned earlier. Both are valid, but here the contradiction proof seems a little more awkward and less intuitive than the direct proof (for other theorems the reverse may be true).

An Induction Proof

1. Let \( e_i \) be the maximum number of edges a graph with \( i \) nodes can have. When you allow one more node, you can add at most one edge from the new node to each of the first \( i \) nodes. Therefore
\[ e_{i+1} \leq e_i + i. \]
If the property were to hold for \( i \), that is \( e_i \leq \frac{(i-1)i}{2} \), does the property hold for \( i+1 \)? Notice that
\[ e_{i+1} \leq e_i + i \leq \frac{(i-1)i}{2} + i = \frac{(i-1)i + 2i}{2} = \frac{(i+1)i}{2}. \]
Thus \( e_{i+1} \leq \frac{(i+1)i}{2} \), and that is the property at \( i+1 \). So if we can show, for a particular \( i \), that \( e_i \leq \frac{(i-1)i}{2} \), the result will hold for all integers greater than that \( i \).

2. Consider \( i = 1 \), i.e., \( G \) has 1 node and therefore \( e_1 = 0 \). Note \( \frac{(1-1)1}{2} = 0 \) as well. Therefore 1 is our particular value. The theorem is true for all graphs with one or more nodes. \( \Box \)

Constructive Induction

The goal, as before, is to obtain a closed form solution for a function given in a recursive form. Normal induction is a technique by which a guessed closed form solution is verified. (Induction is not a method by which an answer is produced from scratch.) On the other hand, if one guess does not work out we can try another, etc. Hopefully the failure of one guess will lead to modifications that provide a better next guess. Constructive induction is a technique in which we leave certain elements of the guess unspecified, that is, as unknown constants. Then, after the inductive argument is carried out, we try to determine values for the constants which are consistent with the specification of the problem. Consider
\[ f(n) = f(n-1) + n, \quad f(1) = 1. \]
We know this to be \( f(n) = n(n+1)/2 \) in closed form, but suppose we didn't, and only suspected \( f(n) \) was some quadratic function of \( n \), i.e.,
\[ f(n) = an^2 + bn + c, \]
where \( a, b, \) and \( c \) are not yet known. It is certainly possible to find values for \( a, b, \) and \( c \) so that \( f(1) = 1 \). So, suppose it is possible to find them so that
\[ f(n-1) = a(n-1)^2 + b(n-1) + c, \]
for some \( n > 1 \). Then
\[ f(n) = f(n-1) + n = a(n-1)^2 + b(n-1) + c + n = an^2 + (b - 2a + 1)n + (a - b + c). \]
The question now is: is it possible to find values of \( a, b, \) and \( c \) so that

1) \( a + b + c = 1 \) (the base condition \( f(1) = 1 \)), and
2) \( an^2 + (b - 2a + 1)n + (a - b + c) = an^2 + bn + c \)?

The latter implies that we must have \( b - 2a + 1 = b \) and \( a - b + c = c \), or that \( a = 1/2 \) and \( b = a = 1/2 \). Therefore, with 1), we must have \( c = 0 \). The conclusion is then that
\[ f(n) = \frac{n^2}{2} + \frac{n}{2} = \frac{n(n+1)}{2}, \]
as we already knew, but here obtained by induction without actually utilizing that knowledge in the hypothesis. Notice, we do not need to additionally prove \( f(n) = n(n+1)/2 \) by standard induction. Constructive induction has limitations. For instance, one needs to know at least the general form of the solution. Further, this form cannot be too complex, or the mechanics of manipulating the induction hypothesis into the desired form becomes unmanageable (as it also would for standard induction). Notice that this technique is applicable when equality is replaced by inequality. This, and the comments in the last paragraph, might lead one to believe that constructive induction could be used in order analysis. Indeed that is the case, since there our goal is to use rather simple expressions, for example showing \( f(n) \leq c \cdot g(n) \), where \( c \) is a constant and \( g(n) \) is often of the form \( n^2 \), \( n \log(n) \), etc. To be even more specific, suppose we simply wanted to prove \( f(n) = f(n-1) + n \) is order \( n^2 \).
Then we merely need to show there exists a constant \( c \) so that, for large enough \( n \), \( f(n) \leq cn^2 \). Letting the induction hypothesis be that \( f(n-1) \leq c(n-1)^2 \), we have
\[ f(n) = f(n-1) + n \leq c(n-1)^2 + n = cn^2 - 2cn + c + n, \]
or
\[ f(n) \leq cn^2 - 2cn + c + n. \]
The right-hand quantity is bounded above by \( cn^2 \) for all \( n \geq 1 \) and \( c \geq 1 \). All that remains is to select \( c \) to satisfy the basis \( f(1) = 1 \). Letting \( c = \max(1, f(1)) \) is sufficient. Other examples are admittedly not as straightforward. For instance, consider
\[ f(n) = 4f(n/2) + n, \text{ for } n > 1, \text{ and } f(1) = 1. \]
By rather simple techniques, it is possible to show \( f(n) \) is order \( n^2 \). But constructive induction, when blindly applied, may be misleading. Begin with the normal hypothesis that \( f(n/2) \leq c(n/2)^2 \), for some constant \( c \). Then we have
\[ f(n) = 4f(n/2) + n \leq 4c(n/2)^2 + n = cn^2 + n. \]
Clearly, \( cn^2 + n > cn^2 \) for all \( n > 0 \) and any positive constant \( c \). Notice, this does not say \( f(n) \leq cn^2 \), only that the right-hand side is larger than \( cn^2 \). So our desired conclusion may still be valid. If it is, this only implies our hypothesis is simply not "strong enough." On the other hand it may be false; you just can't tell at this point. [Note: a common fallacy here is to take the right-hand side, \( cn^2 + n \), and conclude that \( cn^2 + n \leq c'n^2 \) for some constant \( c' \), and thus that \( f(n) \leq c'n^2 \) and is therefore order \( n^2 \). This is in error: we MUST use the same constant \( c \) throughout. Otherwise it would be possible to show things like
\[ f(n) = f(n-1) + n \leq cn, \]
which we know to be false, since \( f(n) = n(n+1)/2 \).] The remedy here is indicative of similar situations and lies in the statement above that our hypothesis was not strong enough; here, not strong enough means not restrictive enough.
Towards that idea, let our constructive induction hypothesis be that
\[ f(n/2) \leq c(n/2)^2 - b(n/2), \text{ for positive constants } c \text{ AND } b. \]
Then,
\[ f(n) = 4f(n/2) + n \leq 4\left( c(n/2)^2 - b(n/2) \right) + n = cn^2 - 2bn + n. \]
It is clear that this right-hand side can be bounded above by \( cn^2 - bn \) for any positive constant \( c \) and any constant \( b \geq 1 \). Now, from the base condition, \( f(1) \), we must have
\[ f(1) = 1 \leq c(1)^2 - b(1) = c - b. \]
So we may choose \( b = 1 \) and \( c = 2 \), concluding that
\[ f(n) \leq 2n^2 - n. \]
Notice that a standard induction argument would now show this to be a valid conclusion, but, as mentioned before, this is unnecessary.

LOWER BOUND THEORY

Searching Ordered Lists with Comparison-Based Algorithms

Comparison-based algorithms: information can be gained only by comparing key-to-element, or element-to-element (in some problems).

Given: an integer \( n \), a key, and an ordered list of \( n \) values.
Question: Is the key in the list and, if so, at what index?

We have an algorithm. We don't know what it is, or how it works. It accepts \( n \), a key and a list of \( n \) values. That's it. It MUST, though, work pretty much as follows:

1) It must calculate an index for the first compare based solely upon \( n \), since it has not yet compared the key against anything, i.e., it has not yet obtained any additional information. Notice, this means that for a fixed value of \( n \), the position of the first compare is fixed for all data sets (of size \( n \)).

2) The following is repeated, comparing the key against the element at the computed index, until the key is found or until it is determined that no location contains the key:
a) If they are equal, the algorithm halts.
b) If the key is less, it incorporates this information and computes a new index.
c) If the key is greater, it incorporates this information and computes a new index.

There are no rules about how this must be done. In fact we want to leave it wide open, so that we are not eliminating any possible algorithm.
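The abstract search procedure just described is exactly what binary search instantiates. As an illustration (the helper below is hypothetical, not from the notes), here is a Python sketch in which each probe index is computed only from \( n \) and the outcomes of earlier compares, while counting the comparisons made:

```python
# Binary search on an ordered list, counting key comparisons.
# Each probe index depends only on n and the outcomes of earlier
# compares, matching the decision-tree view of search algorithms.
def search(key, lst):
    lo, hi, compares = 0, len(lst) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2       # index computed from prior outcomes only
        compares += 1
        if lst[mid] == key:
            return mid, compares   # found: halt
        elif key < lst[mid]:
            hi = mid - 1           # incorporate "key is less"
        else:
            lo = mid + 1           # incorporate "key is greater"
    return -1, compares            # no location contains the key

idx, c = search(7, list(range(1, 16)))   # 15 ordered values
print(idx, c)                            # finds index 6 in 4 compares
```

For 15 values, no key forces more than 4 comparisons, which matches the \( \lceil \log_2(n+1) \rceil \) bound derived below.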
After the first compare, there are two possible second compare locations (indexes). Neither depends upon the key or any item in the list, just upon the result of the first compare. Every second compare, on every set of \( n \) items, will be at one of these two locations. Every third compare will be at one of four locations, every fourth compare at one of eight locations, and so on. In fact, we may look at an algorithm (for a given \( n \)) as being described by (or, possibly, describing) a binary tree in which the root corresponds to the first comparison, its children to the possible second comparisons, their four children to the possible third comparisons, etc. This binary tree, called in this context a "decision tree," then depicts for this algorithm every possible path of comparisons that could be forced by any particular key and set of \( n \) values.

Observation 0: Every comparison-based search algorithm has its own set of decision trees (one for each value of \( n \)). Even if we don't know what the algorithm is or how it does its task, we know it has one for each \( n \), and they are pretty much like the one described above.

Observation 1: For any decision tree and any root-leaf path, there is a set of data which will force the algorithm to take that path. The number of compares with a given data set (key and \( n \) values) is the number of nodes in the "forced" root-leaf path.

Observation 2: The longest root-leaf path is the "worst case" running time of the algorithm.

Observation 3: For any position \( i \) of \( \{1, 2, \ldots, n\} \), some data set contains the key in that position. So every algorithm must have a compare for every index; that is, the decision tree must have at least one node for each position. Therefore, all decision trees for the search problem must have at least \( n \) nodes in them. All binary trees with \( n \) nodes have a root-leaf path with at least \( \lceil \log_2(n+1) \rceil \) nodes (you can verify this by induction).
Thus, all decision trees defined by search algorithms on \( n \) items have a path requiring \( \lceil \log_2(n+1) \rceil \) compares. Therefore, the best any comparison-based search algorithm can hope to do in the worst case is \( \lceil \log_2(n+1) \rceil \approx \log_2 n \) compares. This is the comparison-based lower bound for the problem of searching an ordered list of \( n \) items for a given key.

Comparison-Based Sorting

Here, there is no "key." Typically, in comparison-based sorting, we will compare values in two locations and, depending upon which is greater, we might (1) do nothing, (2) exchange the two values, (3) move one of the values to a third position, (4) leave a reference (pointer) at one of the positions, (5) etc. As with searching, above, each sorting algorithm has its own decision tree. Some differences occur: leaf nodes are the locations which indicate "the list is now sorted," while internal nodes simply represent comparisons on the path to a leaf node. As with searching, we will determine the minimum number of leaf nodes any comparison-based sorting algorithm's decision tree must have. Then, from that, the minimum height of all decision trees (for sorting) can be determined, providing the proof that all comparison-based sorting algorithms must use at least this many comparisons.

Actually, it is quite simple now. Every decision tree starts with an unordered list and ends up at a leaf node with a sorted list. Suppose you have two lists, one a rearrangement of the other. Then, in sorting them, something must be done differently to one of the lists (done at a different time); otherwise, if the same actions are performed on both lists in exactly the same sequence, one of them cannot end up sorted. Therefore, they must go through different paths from the root of the decision tree. By the same reasoning, all \( n! \) different permutations of the integers \( 1, 2, \ldots, n \) (these are valid things to sort, too, you know) must go through distinct paths. Notice that distinct paths end in distinct leaf nodes.
Thus, there must be at least \( n! \) leaf nodes in every decision tree for sorting. That is, their height is at least \( \log_2(n!) \). By a common result (a consequence of Stirling's formula), \( \log_2(n!) \) is on the order of \( n \log_2(n) \). Therefore, all comparison-based sorting algorithms require on the order of \( n \log_2(n) \) comparisons in the worst case.
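Both decision-tree bounds can be spot-checked numerically. The following Python sketch (illustrative helper names, not part of the notes) computes the minimum worst-case comparison counts forced by the tree arguments above:

```python
import math

# Minimum worst-case comparisons forced by the decision-tree arguments:
# searching an ordered list of n items needs ceil(log2(n+1)) compares,
# and sorting n items needs ceil(log2(n!)) compares (n! leaves).
def search_lower_bound(n):
    return math.ceil(math.log2(n + 1))

def sort_lower_bound(n):
    return math.ceil(math.log2(math.factorial(n)))

for n in [4, 8, 16]:
    print(n, search_lower_bound(n), sort_lower_bound(n))
```

For \( n = 16 \), the sorting bound is already within a constant factor of \( n \log_2 n = 64 \), as the Stirling estimate predicts.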
Moonlight Chords

Anolan Milanés¹, Roberto S. Bigonha¹

¹Departamento de Ciência da Computação – Universidade Federal de Minas Gerais (UFMG) Belo Horizonte – MG – Brazil

{anolan, bigonha}@dcc.ufmg.br

Abstract. Concurrency is a fundamental trend in modern programming. Languages should provide mechanisms allowing programmers to write correct and predictable programs that can take advantage of the computational power provided by current architectures. Chords are a high-level synchronization construction adequate for multithreaded environments. This work studies chords through the implementation of a chords library in Lua.

1. Introduction

Concurrency is a fundamental trend in modern computer programming. However, multithreaded programming, the de facto standard model for concurrent programming, is known to be hard and error-prone [Lee 2006]. The difficulty of designing and writing correct concurrent programs following that model comes from the conjunction of the shared memory model with the preemptive scheduling of multiple threads, which may produce unpredictable executions. When the executions of processes interfere, programmers need to resort to synchronization mechanisms in order to prune the spectrum of possible histories down to the desired ones. There are various approaches to cope with concurrency, some proposing alternatives to multithreading based either on non-preemption or on avoiding shared memory. In any case, the idea consists in discarding threads as the programming model, perhaps building above them another model, deterministic and reliable. New asynchronous concurrency abstractions have been proposed for multithreading-based languages like Polyphonic C# [Benton et al. 2004]. One of those mechanisms is called chords. A chord is a synchronization construction that allows coordinating events. It is composed of a header and a body.
The header reflects the association of the body with a pattern composed of a set of methods: the execution of the body is deferred until all the invocations declared in the chord header have been issued. Methods can be synchronous or asynchronous. Asynchronous methods return immediately; synchronous methods block until the chord is enabled (i.e., all the methods in the header have been called). The effect is equivalent to the creation of a continuation on the execution of the synchronous method, which is executed in the space of the synchronous process when the chord is enabled. A particular method may appear in several headers; if several chords are enabled, in theory an unspecified chord is selected for execution. Advantages of chords include explicit declarative concurrency support and encapsulation of the underlying platform. That makes them a suitable construction for local concurrency and also for distributed computing, while avoiding the problems related to lock-based programming. Chords are an object-oriented version of the join patterns from the join-calculus process calculus [Fournet and Gonthier 2000]. They have been implemented in Polyphonic C# and its successor Cω [Benton et al. 2004], and also in MC# [Guzev and Serdyuk 2003] and Parallel C# [Guzev 2008], which extend Polyphonic C#. There are also Java implementations, based either on modification of the JVM (Join Java [Itzstein and Jasiunas 2003]) or on source-to-source transformation (JChords [Vale e Pace 2009]). This work studies the issues related to the design and implementation of chords through the construction of a chord library in Lua, available at [MB10 2010].

2. Brief introduction to Lua

Lua [Ierusalimschy et al. 2007] is an interpreted, procedural and dynamically-typed programming language. It is based on prototypes and features garbage collection. Tables are the language's single data structuring mechanism and implement associative arrays, indexed by any value of the language except nil.
Closures and coroutines are first-class values in Lua. Lua coroutines are lines of execution with their own stack and instruction pointer, sharing global data with other coroutines. In contrast to traditional threads (for instance, POSIX threads), coroutines are collaborative: a running coroutine suspends execution only when it explicitly requests to do so. Lua coroutines are asymmetric. They are controlled through calls to the coroutine module. A coroutine is defined through an invocation of create with a function as parameter. The created coroutine can be (re)initiated by invoking resume, and executes until it suspends itself by explicitly calling yield. The first value returned by coroutine.resume is true or false, indicating whether the coroutine was resumed successfully; further results are the values passed optionally to yield.

Listing 1. Example using Lua coroutines
```lua
function func()
  coroutine.yield("Now I'm yielding")
end

co = coroutine.create(func)
print(coroutine.resume(co))   --> true    Now I'm yielding
print(coroutine.status(co))   --> suspended
print(coroutine.resume(co))   --> true
```

3. Proposal

Before initiating the implementation of our chord library, there are some aspects that must be defined:

1. Parameter substitution: how to behave on collision? Parameter substitution is done by name. That leads to the restriction that methods in the same header cannot have the same parameter names. On collision the behaviour is unspecified.

2. Since on the invocation of a method its parameters must be saved until the guard succeeds, when will actual parameters be evaluated? What will happen if they are modified in the meantime? In Lua, closures and coroutines are first-class values. This means that the chord body can receive closures and coroutines as parameters. Also, Lua provides lexical scope and, as such, closures encapsulate the variables non-local to the function.
Thus, if the parameters of the chord body include a coroutine, or if the values of its non-local variables change, the result returned by the chord body may differ depending on whether it is executed at the moment a message is issued or after the chord is finally enabled. As a result, for instance, a chord could try to execute a coroutine that was active at the moment of the call but is now already dead. However, it seems that this problem comes naturally from separating the invocation of a method/function from its execution when there is concurrency in a stateful language with shared memory. A locking solution would not do better, because the execution of a method/function could be delayed while others access a common resource, with side effects that could include modifying a function's non-local variable or executing a coroutine it received as argument. Our implementation executes the chord body when the guard is satisfied, using the parameters whose references were saved at the time of the invocation.

3. Synchronous methods are not mandatory in a header. In case all methods are asynchronous, should the body be executed on a new coroutine (following the Polyphonic C#/Cω solution)? Our implementation assumes that solution, to preserve the asynchronous specification. This solution, however, leads to another problem, as we shall explain in Section 5.

4. The chord body is executed on the synchronous process that issued the call (if any), thus it must block. As noted by Benton et al. [Benton et al. 2004], permitting more than one synchronous method in a chord would allow the construction of a rendezvous-like synchronization. However, as they explain and we show in Example 2, (i) it is possible to construct such a mechanism by combining chords that each have a single synchronous method, and (ii) the choice of the thread where the body is executed would influence the result of the execution. Our proposal allows coordinating synchronous and asynchronous methods.
Asynchronous methods return immediately, while the synchronous method blocks until the guard is satisfied. After a chord is enabled, the body is executed on the thread of the synchronous method, which is unique for each chord. At least the synchronous method must execute on a different coroutine (otherwise, all processes would deadlock). In order to analyze the requirements of our implementation, we study the set of use cases presented in the sequel.

4. Use cases

The producers-consumers problem with limited buffer (or bounded-buffer problem) describes two types of processes, producers and consumers, sharing a common, fixed-size buffer with N positions. Producers generate data and put it into the buffer; consumers get data from the buffer, removing it. Producers cannot put data into a full buffer and consumers cannot read data from an empty buffer. In our example, in case a producer finds the buffer full, just as when a consumer finds it empty, it will block. The code in Listing 2 shows a solution to the producers-consumers problem based on chords. This implementation consists of two chords that express the synchronization requirements, stating that producers produce only when there is an empty slot (line 3) and consumers consume only when there is a full slot (line 2). Producers and consumers should block when the guards are not satisfied, thus the put() and get() methods are declared synchronous. The system is initialized by sending an empty() message. Note that, while this implementation works for a single-slot buffer, an N-slot buffer can be implemented simply by invoking N times

Listing 2. Producers/consumers
```lua
1  chords = require "chords"
2  chords.join("sync get()", "full(val)")(function(val) chords.empty(); return val end)
3  chords.join("sync put(val)", "empty()")(function(val) chords.full(val) end)
4  chords.empty()
```

the method `empty()`. It is equivalent to creating N slots.
Every method call will match only one chord, thus the behaviour will still be correct (the sum of `empty()` and `full()` messages equals the size of the buffer). As noted in [de Moura 2004], the producers-consumers problem is a pattern for many different problems. Thus, problems following the producers-consumers pattern can also be implemented using this mechanism.

The *readers-writers* problem describes a system where many processes may access a shared memory location concurrently, either for reading or for writing. Only one process can access the location when it is a writer; several readers are allowed to access it at the same time. The code in Listing 3 implements a readers-writers lock with readers preference. When a writer wishes to write, it asks for an `ExclusiveLockAdq()`.

Listing 3. A readers-writers lock
```lua
1 lchords = require "lchords"
2 lchords.join("sync ExclusiveLockAdq()", "idle()")
3 local function ExclusiveLockRel() lchords.idle() end
4 lchords.join("sync SharedLockAdq()", "idle()") (function() lchords.s(1) end)
5 lchords.join("sync SharedLockAdq()", "s(n)") (function(n) lchords.s(n+1) end)
6 lchords.join("SharedLockRel()", "s(n)")
7   (function(n) if n==1 then lchords.idle() else lchords.s(n-1) end end)
8 lchords.idle()
```

The lock is only granted when there is no other process accessing the buffer. This is controlled with the message `idle()`: when there are no accesses to the buffer, it is idle. The chord that controls writers' access to the buffer is shown in line 2. When the writer finishes accessing the buffer, it calls the `ExclusiveLockRel()` method to send a new `idle()` message, allowing access to the buffer again. The readers ask for a `SharedLockAdq()` to access the buffer which, as described in the chord in line 4, is only granted when the buffer is `idle()`. As shown, the acquisition of the lock by readers and writers is controlled by means of consuming an `idle()` message.
Since the access can be shared by several readers, the `idle()` message is only sent when there are no more readers. This is expressed by the chord in line 7, where the message `s(n)` describes the state of the readers (the number of readers currently accessing) at any time. While readers are still accessing the system, the counter in `s(n)` is incremented (line 5).

5. Implementation

Our library provides a single function: `join`. Chords can be specified by declaring a header and a body using the syntax `join (method list) (body)`, where `method list` is a list of method names separated by commas. The modifier `sync` can be used to indicate that a method executes synchronously. All the information necessary to control the chords is saved in the tables `allChords` and `allMsgs`. Table `allChords` contains the list of chords and, for every chord, the body function, the name of the synchronous method, and a table whose keys are the messages specified in the chord header. Table `allMsgs` is indexed by message name and contains the number of active invocations, the list of arguments for every invocation, and the parameter names. Basically, the library works by checking whether any chord in which the message was declared is enabled. If there is one, the body is executed with the parameters of some invocation. The code is divided into three parts or functions:

- The `join` function receives a list of methods and returns a function, to allow the syntax `join (method list) (function)`. The parameter of the returned function is the chord body. The declared methods are created in the chord namespace as functions that receive a variable number of parameters (written in Lua as `(...)`) and execute the matching operation explained before.

```lua
local join = function(...)
  local found, args
  local idx = #allChords + 1
  allChords[idx] = {}
  allChords[idx]["msg"] = {}
  for i = 1, select("#", ...) do   -- preparing the stage
    local msg = select(i, ...)
    msg, found = string.gsub(msg, patternSync, "%1")
    msg, args = string.match(msg, patternMsg)
    allChords[idx]["msg"][msg] = true   -- the msg is in this chord
    if found ~= 0 then allChords[idx].sync = msg end
    allMsgs[msg]["names"] = {args:gsub(patternArgs, "%1")}  -- names of the parameters
    chords[msg] = function(...) return faux(msg, ...) end   -- create the method
  end
  return function(func) allChords[idx].func = func end      -- receives the chord body
end
```

- The matching function looks for enabled chords. While the method is synchronous in some chord and no chord has been enabled, it must block (yield). If the enabled chord contains only asynchronous methods, a new coroutine is started.

```lua
local function faux(msg, ...)
  allMsgs[msg].called = allMsgs[msg].called + 1
  table.insert(allMsgs[msg].args, {...})   -- saving the arguments
  local toReturn = true
  for i, chord in ipairs(allChords) do
    if chord.sync == msg then
      toReturn = false
      if chkMatch(chord) == true then return execBody(chord) end
    elseif chord.sync == nil and chord.msg[msg] == true then
      -- all methods are asynchronous; check for a match
      if chkMatch(chord) == true then
        local f = coroutine.wrap(execBody)  -- we have a match
        f(chord)
        return
      end
    end
  end
  if toReturn == true then
    return
  else
    coroutine.yield()
    return faux(msg, ...)
  end
end
```

- The function that makes the appropriate arrangements for the parameters and executes the body. Standard Lua does not provide information about the function arguments before the function is executed. To solve this problem, we can use a C function that retrieves that information, or receive the function as a string and extract the argument names using pattern matching. Here we use a Lua version (LuaNua [Milanés et al. 2010]) that provides internal information about the structure of values.
```lua
local function execBody(chord)
  local args = {}   -- parameter list for the body
  if chord.func then
    local params = debug.content(debug.content(chord.func).p).locvars or {}
    for i = 1, #params do
      if params[i].startpc == 0 then
        params[i] = params[i].varname
      else
        params[i] = nil
      end
    end
    for msg in pairs(chord.msg) do
      allMsgs[msg].called = allMsgs[msg].called - 1   -- consume one invocation
      local idx = next(allMsgs[msg].args)
      for i, v in ipairs(allMsgs[msg].args[idx] or {}) do
        args[lookup(allMsgs[msg].names[i], params)] = v
      end
      allMsgs[msg].args[idx] = nil   -- remove the consumed arguments
    end
    return chord.func(unpack(args))
  end
end
```

This implementation poses some restrictions:

- At most one method per chord can be declared synchronous (see Section 3).
- The body of a chord with only asynchronous methods in the header must have no return statement, or must be empty.
- Methods in the same header cannot have the same name.
- All the formal arguments in a chord header must have distinct names (if the names also appear as parameters of the body function); otherwise, the behaviour is undefined.
- In all chords, the same method is invoked with the same parameters in the same order (this restriction belongs to the current implementation and is simple to eliminate).

A problem that this version of the implementation is not able to solve is how to execute the body of a chord whose header contains only asynchronous methods. In this case, on the invocation of the last method needed to enable the chord, the library must execute the function in another coroutine to avoid delaying the asynchronous method. However, Lua coroutines are asymmetric, which means that the method will not be able to return until the coroutine yields or returns. The method should return immediately and leave the execution of the body to an external scheduler or to an available coroutine it is aware of. We are analyzing modifications to the syntax to allow for that behaviour; currently, the function is executed on a new coroutine started by that method.

6.
Analysis

The reader may notice that our implementation enjoys the simplicity of the cooperative multithreading model, making it unnecessary to use locking mechanisms "under the hood". As Benton et al. [Benton et al. 2004] comment, implementing chords requires atomicity when deciding whether a chord is enabled, when popping pending calls, and when scheduling the chord body for execution. In Lua, atomicity is guaranteed, since it is the programmer who explicitly gives up control by placing calls to `yield` in the program. Since our library is based on coroutines, we further analyze the advantages and disadvantages of offering chords in Lua as a concurrency mechanism, compared to plain coroutine mechanisms. The criteria of the analysis include performance, abstraction level, error-proneness, readability and expressive power.

**Performance**: The performance of applications implemented using coroutines directly will be better than using chords, since we are building chords over coroutines. However, we consider that using chords does not represent a serious performance overhead. The reason is that in our implementation (i) the structure that keeps the information regarding every chord is filled when `join` is called (that is, just once per chord), and (ii) the matching operations consist of indexing the table `allMsgs` with the message name to increment its call counter and to retrieve the list of chords containing the message last invoked. Then, we check whether the invocation has enabled a chord by traversing the table corresponding to the chord to check the message counters. Since those lists are unlikely to have more than a few records, those operations should be inexpensive. As noted above, the most expensive operation related to this mechanism comes at the time of executing the body of a chord composed of asynchronous methods, because a new coroutine has to be created and resumed.
**Abstraction**: Abstraction is one of the benefits of chords: after the chord is declared and the synchronization rules are stated, the synchronization of the methods specified in the header is controlled by the chord. The fact that it is built over coroutines or over another model is hidden from the user.

**Error-proneness**: Writing a chord is somewhat different from the usual locking, but after the chord is defined the programmer can forget about it. Coroutine calls, in contrast, must be inserted all across the code, so the likelihood of forgetting one is expected to be higher.

**Readability**: Chords allow one to explicitly define and understand the synchronization rules of a system. In coroutine-based programming, the synchronization is encoded implicitly inside the program, requiring one to analyze the code corresponding to every coroutine to understand how the system works.

**Expressive power**: Listing 4 shows the code for the producers-consumers problem based on coroutines.

```lua
-- Listing 4. Producers/consumers with plain coroutines
-- (identifiers in Portuguese: produtor/produz = producer/produce,
--  consumidor/consome = consumer/consume)
function produtor ()
  return coroutine.wrap(function ()
    while true do
      item = produz()
      coroutine.yield(item)
    end
  end)
end

function consumidor (prod)
  while true do
    local item = prod()
    consome(item)
  end
end
```

This implementation has two main advantages: the producer does not need to know the consumer, and the consumer uses the producer as a function, so it is transparent whether the producer is a coroutine or not. On the other hand, the implementation based on chords requires that both the producer and the consumer execute inside coroutines, although they are not aware of each other's condition, or even of each other's existence. The advantage of chords over the coroutine implementation is the clarity with which the rules are declared. Also, this example is consumer-driven, while in the example shown for chords the processes run independently ("buffer-driven"): as soon as the buffer is full the consumer can consume; as soon as it is empty, the producer can produce.
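For comparison, the consumer-driven structure of Listing 4 can be mimicked in Python with a generator (Python's coroutine-like construct). Replacing `produz()` with a simple counter is purely an illustrative assumption:

```python
def produtor():
    """Producer as a generator: yields one item per request,
    mirroring coroutine.wrap + coroutine.yield in Listing 4."""
    item = 0
    while True:
        item += 1          # stands in for produz()
        yield item

def consumidor(prod, n):
    """Consumer drives the producer by calling it like a function
    (here via next), as in the Lua version."""
    return [next(prod) for _ in range(n)]   # stands in for consome()

print(consumidor(produtor(), 3))   # prints [1, 2, 3]
```

As in the Lua code, the producer is oblivious to who consumes its items; only the consumer decides when production advances.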
Also, modifying the example to implement a bounded buffer is simpler in the chord example of Listing 2. The chord implementation is very compact, although the coroutine-based one is also short.

7. Final remarks

Concurrency is an urgent issue in modern programming. Languages should provide mechanisms allowing programmers to write correct and predictable programs that can take advantage of the computational power provided by current architectures. This work studies chords, a high-level synchronization construction adequate for multithreaded environments. Through the implementation of a chord library in Lua, we have analyzed the advantages and issues related to this mechanism. We have concluded that the main advantage of chords is their capacity for declarative specification of synchronization. On this subject, readability, abstraction and error-proneness are better than when using coroutines directly. The expressiveness of this construction is satisfactory, while performance should not be greatly affected. The implementation was simplified thanks to Lua features such as first-class functions and dynamic typing.

Future work in this direction includes a thorough performance analysis and the codification with chords of a real application currently implemented using a state machine. The library can also be extended to allow a syntax where join statements could be used just like methods in a header definition, as well as quantifiers allowing one to specify, for instance, that a message must be received a particular number of times in order to free the guard. Timeouts are another issue that could be considered, as are policies to indicate how matching should be conducted. We also intend to explore the advantages that this and other mechanisms can bring to the construction of truly concurrent systems, that is, systems based on preemptive multithreading.

Acknowledgements

This work was partially funded by CNPq Brasil.

References
ANALYSIS AND DESIGN OF CREDIT CARD FRAUD DETECTION SYSTEM WITH OBJECT ORIENTED METHODOLOGY

AMANZE, B. C 1, CHILAKA, U.L 2, AGOHA, U.K 3
1 Department of Computer Science, Faculty of Science, Imo State University, Owerri, Nigeria.
2,3 Ph.D Students, Dept. of Computer Science, Faculty of Science, Imo State University, Owerri, Nigeria.

Abstract: Nowadays, technology is developing rapidly, and so is credit card fraud. Credit card fraud (CCF) is one of the problems our banking system faces today. Fraudsters use many methods to attack customers. The growth in electronic transactions has resulted in a greater demand for fast and accurate user identification and authentication. Conventional methods of identification based on possession of a PIN and password are not altogether reliable. The higher acceptability and convenience of credit cards for purchases have not only given personal comfort to customers but also attracted a large number of attackers. As a result, card payment systems must be supported by an efficient fraud detection capability to minimize unwanted activities by fraudsters. To deal with this problem, a computerized system is needed. The method used in analyzing and designing the credit card fraud detection system is Object-Oriented Analysis (OOA) with the Unified Modeling Language (UML).

Keywords: Credit card, Object Oriented Methodology, Unified Modeling Language, Information system, Bank Staff, Bank Customer.

INTRODUCTION: In a credit or debit card based purchase, the cardholder presents the card to a merchant for making a payment. To commit fraud in this kind of acquisition, the fraudster has to steal the card. If the legitimate user does not notice the loss of the card, it can lead to a significant financial loss to the credit card company and also to the user. In the online payment mode, attackers need only a little information for a false transaction, for example, the secure code, expiration date, card number and other details.
In this purchase method, many transactions are done over the Internet or by telephone. To commit fraud in these types of purchases, an impostor simply needs to know the card details. Most of the time, the honest cardholder is not aware that someone else has seen or stolen his card information. The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any irregularity with respect to the "usual" spending patterns. The examination of existing purchase data of the cardholder is a likely way to reduce the rate of successful credit card frauds. Since humans tend to display specific behavioral profiles, every cardholder can be characterized by a set of patterns comprising information about the typical purchase category, the time since the last purchase, the amount of money spent, and other things. Deviation from such patterns is flagged as fraud. Credit card frauds are increasing day by day as the use of credit cards increases [1]. The occurrence of credit card fraud has increased dramatically both online and offline. Credit card based purchases can be made in two ways: (i) physical card and (ii) virtual card. In a physical card purchase, the cardholder presents his card physically to the merchant for making a payment. For this type of fraud the attacker has to steal the credit card. In a virtual card purchase, only the information about the card is stolen or gathered, such as the card number, secure code, etc. Such purchases are done over the Internet. For this type of fraud the attacker needs only the card details, so the only way to detect this type of fraud is to analyze the spending pattern of the card holder. When one's credit card or credit card information is stolen and used to make unauthorized purchases on e-commerce systems on the Internet, one becomes a victim of internet credit card fraud, or card-not-present fraud.
This is nothing new, and there is nothing unusual about it, because identity theft and credit card fraud are present-day happenings that affect many people and involve substantial monetary losses. Fraud is a million dollar business and is increasing every year. A credit card refers to a method of selling goods or services without the buyer having cash in hand [2]. A credit card transaction involves four entities. The first entity is the consumer, that is, the person who owns the card and who carries out the legitimate transactions. The second entity is the credit card issuer, usually the consumer's bank (also known as the issuing bank), which provides the credit services to the consumer. The credit card issuer sends the bill to the consumer in order to request payment for their credit card transactions. The third entity is the merchant, who sells goods or services to the consumer by charging the consumer's credit card. This charge is achieved through the merchant's bank (the fourth entity), which sends the request for the transaction to the issuing bank. The issuing bank checks whether the amount of the transaction does not exceed the credit card's limit before authorizing the transaction. If the transaction is valid, the issuing bank blocks the requested amount on the consumer's credit card account and sends an authorization response to the merchant's bank. As soon as the authorization response is received by the merchant's bank, the merchant is notified, the transaction is marked as completed, and the consumer can take the goods. The blocked amount on the consumer's credit card account is then transferred into the merchant's bank account.

Statement of the Problem: Information Technology (IT) has contributed to a great extent to mitigating fraud for banks that have embraced and implemented it. Credit card transaction frauds cost financial institutions millions of dollars per year. As a result, fraud detection has become an important and urgent task for this business.
The incidents of loss of hard-earned money to fraudsters have raised a lot of concern and portend serious danger to economic growth. Ordinarily, thieves invade homes and offices to steal physical cash from their victims. The rapid development in information and communication technology has introduced a cashless society where people can pay for goods and services using credit cards. This appears more secure, as people no longer keep huge amounts of physical cash at home, leading to fewer incidents of theft. Recent developments show that hackers have devised electronic means of stealing money from people's accounts by stealing their credit card details and using them to transfer money to other accounts. This is heartbreaking and requires an enhanced security system on communication channels to avert such financial loss. Another challenge for contemporary financial institutions is the ability to understand and deal with the high volume of data and information, and to use knowledge from them to improve and make informed decisions.

Credit card fraud detection is a pattern recognition problem. Every cardholder has a shopping behavior which establishes a profile for the cardholder. Currently, fraud detection systems (FDS) identify many legitimate accounts as fraudulent, resulting in a large number of false positives (FPs). As every cardholder has a huge number of possibilities for developing new patterns of behavior, the types of transactions are widely variable. Hence, it is almost impossible to identify consistent and stable patterns for all transactions. In fact, there are so many variations of behavior for each individual, exponential in combination, that the complexity of enumerating all combinations of cases is enormous. This ever-changing pattern of behavior, with the combination of legitimate and fraudulent cases, has left the Financial Institutions (FIs) with a large number of FPs (approximately 90% of flagged accounts) for investigation.
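The profile-deviation idea above can be illustrated with a minimal sketch. The paper does not prescribe a specific test; the z-score rule, threshold and transaction amounts below are illustrative assumptions only:

```python
import statistics

def is_irregular(history, amount, z_limit=3.0):
    """Flag a new transaction amount that deviates strongly from the
    cardholder's usual amounts (a simple z-score rule for illustration)."""
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    return abs(amount - mean) > z_limit * sd

usual = [25, 40, 32, 18, 55, 29, 47, 36]   # hypothetical past purchases
print(is_irregular(usual, 38))    # prints False (fits the profile)
print(is_irregular(usual, 900))   # prints True  (deviates from the profile)
```

A real FDS would combine many such features (category, time since last purchase, location), which is exactly why the false-positive problem described next is hard.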
The above challenges can be addressed through the use of a multi-agent system based on artificial intelligence, since it will provide managers with value-added information, reduce the uncertainty of the decision outcome, and thereby enhance banking service quality. No doubt, the application of new technologies can give a bank a competitive lead towards high performance and eliminate fraud associated with credit cards. Credit card frauds (CCF) have been a long-time headache for credit card companies. With the growth of online business in Nigeria, the number of credit card frauds has also increased drastically.

REVIEW OF RELATED LITERATURE: [3] stated that today it is easy to do banking transactions digitally, either on a computer or by using a mobile phone. As banking services increase and get implemented on multiple platforms, it becomes easier for a fraudster to commit financial fraud. In their research, they identified the need to focus on investigating log files from a mobile money system that makes it possible to do banking transactions with a mobile phone. They developed a system whose main objective is to evaluate whether it is possible to combine two statistical methods, Benford's law together with statistical quantiles, to find a statistical way to detect fraudsters within a mobile money system. To achieve this, rules were extracted from a case study focused on a mobile money system, and limits were calculated using quantiles. A fraud detector was implemented that uses these rules together with the limits and Benford's law in order to detect fraud. The fraud detector used the methods both independently and combined. Finally, the results obtained showed that it is possible to use Benford's law and statistical quantiles within the studied mobile money system. It is also shown that there is only a very small difference, in both detection rate and accuracy/precision, between combining the two methods and using them separately.
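A minimal sketch of the Benford's-law screening used in [3]: compare the first-digit distribution of transaction amounts against Benford's expected frequencies. The deviation measure and the sample amounts are illustrative assumptions, not the authors' exact rules:

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: P(first significant digit = d)."""
    return math.log10(1 + 1 / d)

def first_digit(amount):
    return int(str(amount).lstrip("0.")[0])

def benford_deviation(amounts):
    """Total absolute deviation between observed and expected
    first-digit frequencies; larger means more suspicious."""
    counts = Counter(first_digit(a) for a in amounts)
    n = len(amounts)
    return sum(abs(counts.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10))

ok  = [123, 1740, 29, 310, 18, 205, 47, 1120, 26, 391]    # roughly Benford-like
bad = [911, 905, 950, 970, 999, 980, 940, 930, 960, 990]  # all start with 9
print(benford_deviation(ok) < benford_deviation(bad))     # prints True
```

In practice a threshold on this deviation (or a formal chi-squared test) would decide whether an account's transactions warrant investigation, with the quantile limits of [3] used as a complementary check.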
Meanwhile, [3] concluded that by combining the chosen methods it is possible to get medium-high true positive rates and very low false positive rates. The most effective method to find fraudsters is by using quantiles alone.

[4] proposed a credit card fraud detection model using the Hidden Markov Model. Hidden Markov Models (HMMs) are a statistical tool and an extremely powerful method used for modeling generative sequences characterized by a set of observable sequences. The Hidden Markov Model is probably the simplest and easiest model that can be used to model sequential data, i.e. data samples which are dependent on each other. An HMM is a doubly embedded random process with two different levels, one hidden and the other observable. The Hidden Markov Model is a finite set of states, each of which is associated with a probability distribution. Transitions among the states are governed by a set of probabilities called transition probabilities. In a particular state, an outcome or observation can be generated according to the associated probability distribution. It is only the outcome, not the state, that is visible to an external observer, and therefore the states are "hidden" to the outside; hence the name Hidden Markov Model [5]. HMMs have been successfully applied to many applications such as speech recognition, robotics, bioinformatics, data mining, etc.

[4] achieved their aim by storing all the information about the credit card (like the credit card number, CVV number, expiry month and year, name on the card, etc.) in the credit card database. If the details entered by the user match the database, the system asks for the Personal Identity Number (PIN). After the PIN is matched with the database and the account balance of the user's credit card is found to be more than the purchase amount, the fraud checking module is activated. The verification of all data is checked before the first page load of the credit card fraud detection system.
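The HMM-based check of [4] can be sketched as follows. The two hidden states, the transition/emission probabilities and the threshold below are made-up illustrative values; a real system would train them from the cardholder's transaction history:

```python
STATES = (0, 1)                         # hidden states, e.g. routine vs. unusual spending
OBS = {"low": 0, "medium": 1, "high": 2}  # spending bands as observation symbols
PI = [0.6, 0.4]                         # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]            # state transition probabilities
B = [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]  # emission probabilities per state

def likelihood(seq):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = [PI[s] * B[s][OBS[seq[0]]] for s in STATES]
    for o in seq[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in STATES) * B[s][OBS[o]]
                 for s in STATES]
    return sum(alpha)

def is_suspicious(history, new_obs, threshold=0.65):
    """Flag the new transaction if appending it causes a large
    relative drop in the sequence likelihood."""
    old, new = likelihood(history), likelihood(history + [new_obs])
    return (old - new) / old > threshold

history = ["low", "low", "medium"]
print(is_suspicious(history, "low"))   # prints False
print(is_suspicious(history, "high"))  # prints True
```

The sliding window of roughly 10 past transactions mentioned above would play the role of `history` here; once the window is full, each incoming transaction is scored against it before authorization.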
If the user's credit card has fewer than 10 transactions, the system directly asks the user to provide personal information to complete the transaction. Once a database of 10 transactions has been developed, the fraud detection system starts to work. In an HMM-based system, the observation probabilities are initially derived from the spending profile of the cardholder; an incoming transaction is then checked against the spending behavior of the cardholder. A clustering model is used to classify legal and fraudulent transactions by grouping regions of the parameter space during HMM-based credit card fraud detection. [4] presented experimental results to show the effectiveness of their approach.

METHODOLOGY ADOPTED: The object-oriented analysis and design methodology (OOADM) adopted in this work is a set of standards for the analysis and development of the credit card fraud detection system. It uses a formal, methodical approach to the analysis and design of an information system. Object-oriented design (OOD) elaborates the analysis models to produce implementation specifications. The main difference between object-oriented analysis and other forms of analysis is that in the object-oriented approach one organizes requirements around objects, which integrate both behaviors (processes) and states (data), modeled after the real-world objects that the system interacts with. In other, traditional analysis methodologies, the two aspects, processes and data, are considered separately. For example, data may be modeled by ER diagrams, and behaviors by flow charts or structure charts.

The primary tasks in object-oriented analysis (OOA) are:
a. Find the objects and organize them
b. Describe how the objects interact
c. Define the behavior of the objects
d. Define the internals of the objects

Common models used in OOA are use cases and object models. Use cases describe scenarios for standard domain functions that the system must accomplish.
Object models describe the names, class relations (e.g. Circle is a subclass of Shape), operations, and properties of the main objects.

Analysis of the Credit Card: The protocol for performing credit card transactions is composed of two query-response pairs. First, the Point-of-Sale solicits the credit card number and expiration date, and the card responds with this information. In its response, the credit card also includes an iCVV, or integrated Card Verification Value: a dynamic security token intended to authenticate the message. Once this has been completed, the Point-of-Sale sends a charge request to the bank with the information received from the credit card, and then receives an authorization response to accept or reject the charge. Fig. 1 shows the current credit card (CC) protocol.

![Fig. 1: The Current Credit Card Protocol](image-url)

The exchange of messages in the CC protocol is shown in Fig. 1. They are: solicitation, card information, charge request and authorization. Note that after the card responds to the Point-of-Sale, its involvement in the transaction is complete. The contents of these messages are as follows:

**Solicitation:** First, the Point-of-Sale solicits the credit card for its information. The solicitation is composed of a number of messages sent in both directions, identifying the Point-of-Sale type (e.g. 2PAY.SYS.DDF01) and the credit card type (e.g. VISA CREDIT). Since these messages are constant for a given Point-of-Sale and credit card, we abstract the solicitation messages as a single request from the Point-of-Sale to the credit card.

**Card Information:** The credit card responds to the solicitation by sending back the following card information:
1. The credit card number
2. The credit card's expiration date
3. The iCVV
4.
The name of the bank that issued the card

The iCVV is an unpredictable 4-byte value freshly generated for every solicitation response, and is subsequently used by the bank to validate the transaction as described below.

**Charge Request:** The Point-of-Sale issues a charge request to the bank. This request is composed of:
1. The credit card number
2. The credit card's expiration date
3. The iCVV
4. The amount to be charged

**Authorization:** When the bank receives a charge request, it uses the credit card number to look up the account, verifies the expiration date, and then validates the iCVV to authorize the purchase. It will generally also perform some additional checks, such as verifying that the card was not reported lost or stolen, or matching this purchase's location against the known location of the card holder. Finally, it responds with its authorization decision.

When the credit card is manufactured, a secret seed value is shared between the credit card and the bank. This enables the credit card and the bank to both generate the same iCVV sequence, unpredictable to any party that does not have access to this seed. The iCVVs are simply sequential elements of this sequence: each time the credit card responds to a solicitation, it generates the next iCVV in the sequence and includes it with the card information response.

In order to make an authorization decision, the bank searches through its account database, which is indexed by the credit card number. Once the bank locates the account, it verifies that the received expiration date matches the expiration date on file. In addition, it recalls the iCVV from this credit card's previous charge request, generates the next element in the sequence, and compares the received iCVV to the value it generated.

It is possible that a card may generate an iCVV without communicating it to the bank. For example, a charge request may become corrupted in transit, or a Point-of-Sale may experience a network failure.
As a result, a credit card's iCVV may have advanced further in the sequence than the bank expects. To handle this situation, the bank may generate several iCVVs in the sequence for comparison to the received value. If a match is found, the bank considers the iCVV to be valid: it updates its state in the pseudorandom sequence to reflect the received iCVV, and continues with any other checks to be performed before authorizing the charge. If no match is found, the bank considers the iCVV to be invalid and declines the charge.

**Use Case Diagram of the Credit Card Fraud:** The model designed in this paper is divided into several modules that need access restrictions. The different use cases are described in the way they apply to the software designed. The use cases are as listed below:
1. Bank staff use case
2. Credit card holder use case
3. Use case diagram of the new system

**Use Case Boundary of Bank Staff:** The system identifies a total of two roles that function as access levels in the diagrams. A use case is a function to be performed by the system from the user's perspective. Fig. 2 is the use case boundary diagram of the new system and represents the bank staff use case diagram. The bank staff have access to opening new accounts for customers, issuing credit card PINs to customers, and crediting or debiting customer accounts during normal banking transactions within the banking hall.

![Use Case Boundary of Bank Staff](image)

**Use Case Boundary of Credit Card Holder:** The credit card holder can log in to the system using the credit card PIN and username. The user can perform credit card transactions, view the account balance, view the account statement, and also change the credit card PIN, as shown in Fig. 3.

![Use Case Boundary of Credit Card Holder](image)

**Use Case Diagram of the New System:** Fig. 4 shows a use case diagram of the new system; the large rectangle is the system boundary.
Everything inside the rectangle is part of the system under development. Outside the rectangle are the actors that act upon the system. Actors are entities outside the system that provide the stimuli for the system; typically they are human users or other systems. Inside the boundary rectangle are the use cases: the ovals with names inside. The lines connect the actors to the use cases that they stimulate.
a) An <<includes>> relationship indicates that the second use case is always invoked by the first use case.
b) An <<extends>> relationship indicates that the second use case may optionally invoke the first use case.

**Object Diagrams:**

**Sequence Diagram of the Credit Card Transactions:** Figure 5 contains the following:

**New credit card:** Given input: a request from the user for the card. Expected output: assigning an account to the requesting user.

**Login:** Given input: the username and password of a particular user. Expected output: login to the user's account.

**Security information:** Given input: the security information provided by answering security questions. Expected output: update of the account with the security details.

**Transaction:** Given input: the credit card details used to perform a transaction. Expected output: update of the database.

**Verification:** Given input: a check against the user's stored details, such as security answers or the previous spending profile. Expected output: if the verification succeeds, the user can perform the transaction; otherwise the card is blocked.

---

**Activity Diagram of the Credit Card Fraud Detection System:** Fig. 8 shows the various processes that lead to credit card transactions. It starts with opening a credit card account, making a purchase online, and effecting payment with the credit card, which is verified before the transaction is completed.

**Collaboration Diagram of Credit Card Transaction:** Fig. 9 shows the various information needed at each stage of the credit card transactions.
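The verification stage in the transaction flow ultimately rests on the iCVV mechanism described earlier: card and bank share a seed, the card emits successive pseudorandom iCVVs, and the bank accepts any of the next few values in the sequence to tolerate lost charge requests. A minimal Python sketch of that windowed validation follows; the HMAC-SHA256 generator, the window size of 5, and the seed value are illustrative assumptions, not the real payment-network algorithm:

```python
import hashlib
import hmac

def icvv_at(seed: bytes, counter: int) -> str:
    """Derive the iCVV at a given position in the card's sequence.
    HMAC-SHA256 truncated to 4 bytes stands in for the real generator."""
    digest = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest[:4].hex()

class Card:
    def __init__(self, seed: bytes):
        self.seed, self.counter = seed, 0

    def respond_to_solicitation(self) -> str:
        self.counter += 1                # a fresh iCVV for every solicitation
        return icvv_at(self.seed, self.counter)

class Bank:
    WINDOW = 5                           # how far ahead the bank will search

    def __init__(self, seed: bytes):
        self.seed, self.last_seen = seed, 0

    def authorize(self, received_icvv: str) -> bool:
        for step in range(1, self.WINDOW + 1):
            candidate = self.last_seen + step
            if icvv_at(self.seed, candidate) == received_icvv:
                self.last_seen = candidate   # resynchronize to received value
                return True
        return False                     # no match within window: decline

seed = b"shared-at-manufacture"
card, bank = Card(seed), Bank(seed)

ok = bank.authorize(card.respond_to_solicitation())        # normal charge
card.respond_to_solicitation()                             # lost in transit
resynced = bank.authorize(card.respond_to_solicitation())  # still accepted
print(ok, resynced)  # True True
```

The second authorization succeeds even though one iCVV was never seen by the bank, which is exactly the desynchronization case the lookahead window is meant to handle; a replayed old iCVV falls behind the bank's updated position and is declined.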
---

**Event Package Diagram:** The event package diagram, as shown in Fig. 10, shows the various stages of events in the process of using a credit card for payment. It starts with entering the card details, which need to be verified before completing the transaction.

Fig. 10: Event Package Diagram of Credit Card Transactions

**Class Diagram of the New System:** Fig. 11 shows the database class diagram of the new system. The lines show the associations between the various tables in the database.

a. The Structure of the Display:
b. Draft screen

**Entity Relationship Diagram of the New System:** An entity relationship diagram is a specialized graphic that illustrates the interrelationships between entities in a database. Fig. 12 shows the entity relationships in the database; the entity relationship diagram is a major data modeling tool that helps database analysts organize data into entities.

Fig. 12: Entity Relationship Diagrams of Credit Card Transactions

CONCLUSION: The work developed a new object-oriented approach to solving bank fraud problems, especially in the area of credit card fraud. A conceptual framework for a system based on the credit card fraud (CCF) process was developed. Various classes of object diagrams were proposed to provide a set of functionalities for CCF in an electronic environment for banks.

REFERENCES:
The MODUS approach to formal verification
Brewka, Lukasz Jerzy; Soler, José; Berger, Michael Stübert
Published in: Business Systems Research
DOI: 10.2478/bsrj-2014-0002
Publication date: 2014
Document Version: Publisher's PDF, also known as Version of record

The MODUS Approach to Formal Verification
Lukasz Brewka, José Soler, Michael Berger
DTU Fotonik, Denmark

Abstract

Background: Software reliability is of great importance for the development of embedded systems, which are often used in applications that have safety requirements. Since the life cycle of embedded products is becoming shorter, productivity and quality are simultaneously required and closely linked in the process of providing competitive products. Objectives: In relation to this, the MODUS (Method and supporting toolset advancing embedded systems quality) project aims to provide small and medium-sized businesses with ways to improve their position in the embedded market through a pragmatic and viable solution. Methods/Approach: This paper describes the MODUS project with focus on the technical methodologies that can assist formal verification and formal model checking. Results: Model verification is based on automated analysis of the characteristics of the system; it guides the choice among the existing open-source model verification engines and produces the inputs to be fed into these engines. Conclusions: The MODUS approach is aligned with present market needs; familiarity with tools, ease of use and compatibility/interoperability remain among the most important criteria when selecting the development environment for a project.
Keywords: software quality, formal verification, embedded systems, translation tool selection
JEL main category: Economic Development, Technological Change, and Growth
JEL classification: O31
Paper type: Research article
Received: 15 July 2013
Accepted: 12 January 2014
DOI: 10.2478/bsrj-2014-0002

Acknowledgments: This research activity is funded under the EU Research for SME associations FP7 project, MODUS - Methodology and supporting toolset advancing embedded systems quality (Project No. 286583).

Introduction

The MODUS project was initiated to provide a sustainable and pragmatic tool set that will allow small and medium-sized businesses to improve their position in embedded systems engineering. Through the use of formal description techniques, MODUS will develop and validate a set of technical methods, as well as an open and customizable toolset enhancing embedded systems quality by providing (MODUS (2013a)):
- Model verification, in which the selection among the available open-source model verification engines is guided by automated analysis of system characteristics, and the inputs for these engines are produced.
- Connection to standard simulation platforms for HW/SW co-simulation.
- Software performance optimization through automated design transformations.
- Customizable source-code generation in line with coding standards and conventions.

The project will also provide features and open interfaces to customize and extend the MODUS toolkit for use with various formal specification techniques, modeling techniques, programming languages, platforms, etc. MODUS is not intended to compete with the CASE tools currently used in embedded software engineering. On the contrary, the project aims to enable the adoption of quality strategies by supplementing these tools, allowing existing investments in technical know-how to remain in continued use.
In the next section, the tools for formal verification are described, followed by the mechanisms guiding the selection of verification techniques and the strategies for tool selection.

**Supported verification tools/languages and their properties**

The model verification tools selected to be supported by the MODUS toolset are SPIN and RAISE. Both tools are LTL (Linear Temporal Logic) model checkers. The sections below present more details about the properties of these tools and the languages they use for model description.

**SPIN**

SPIN and PROMELA focus on process interaction, i.e., on describing how system components communicate with each other (Holzmann, 2003); not much attention is given to internal computation. The process interaction can be modelled in a number of ways:
- rendezvous primitives (synchronous)
- asynchronous message passing through buffered channels
- access to shared variables
- any combination of the above

SPIN itself provides a methodology for matching the system design expressed in the PROMELA language against an LTL formula describing the desired/correct behaviour of the system. PROMELA is a language crafted for describing models of distributed systems. It expresses the model description using a language similar to C, with some notation from the guarded command language by Dijkstra and the CSP language by Hoare (particularly for describing interaction between processes). A PROMELA model consists of (Holzmann, 2003):
- variable declarations with their types
- channel declarations
- type declarations
- process declarations
- an init process (optional)

In PROMELA a process is the basic building unit of the system. It is defined by a so-called "proctype" definition that contains the process' name, the process' list of parameters, its declaration of local variables, and the sequence of local statements.
Meenakshi (2004) provides the following example of a process definition:

```promela
proctype Sender(chan in; chan out) {
  bit sndB, rcvB;
  do
  :: out ! MSG, sndB ->
     in ? ACK, rcvB;
     if
     :: sndB == rcvB -> sndB = 1 - sndB
     :: else -> skip
     fi
  od
}
```

Models written in PROMELA can (and usually will) contain more than one process. Multiple processes run in parallel, communicating with each other using the interaction methods described at the beginning of this section. The state of a process is defined by its local variables and its process counter. Processes are invoked by using a run statement inside the init process or by adding the active keyword; process creation can be placed at arbitrary places within the model.

Variables in PROMELA require a declaration defining the type and name of the variable prior to its use. There are five basic variable types available in PROMELA:
- bit
- bool
- byte
- short
- int

It is also possible to declare arrays and records. All variables (local and global) have the initial value 0. Type conflicts during value assignment are resolved at run-time.

**Communication.** As indicated earlier, channels are used to enable communication between processes. There are two types of communication:
- message-passing, or asynchronous
- rendezvous, or synchronous

Channels are of FIFO (First-In First-Out) type and are declared like arrays, with a dimension defining the number of messages the channel can hold; the declaration also specifies the type of the elements that can be passed over the channel. Rendezvous communication is established using channels of dimension zero: a send over such a channel is enabled only if there is a corresponding receive that can be performed simultaneously, in which case both statements execute as a single transition.

**Statements.** PROMELA statements are separated by a semicolon.
Among the basic PROMELA statements one can distinguish assignments, skip, printf, assert(expression) (which checks whether the expression holds in a state), if statements (executable if at least one of the options is non-blocking), and send and receive statements. The executability of a PROMELA statement depends on its type and on how it evaluates. Assignments, skip, break and printf are statements that are always executable. An expression is executable if it does not evaluate to zero; this means that an if or do statement is executable if at least one of its guards evaluates to true. The communication statements are executable depending on the status of the channel: a send statement is executable for a non-full channel and a receive statement is executable for a non-empty channel.

To group the statements of a specific process into a sequence executed in a single step, with no statements of other processes interleaved, one can use an atomic statement. The indivisibility of an atomic statement only holds up to the first blocking statement: if one of the enclosed statements is not executable, atomicity is broken and other processes can be interleaved. Another single-step construct is d_step; in contrast to atomic, a blocking statement inside d_step causes a runtime error. Finally, the timeout statement is executable only if no other statement in any of the other processes is executable (this feature should not be used to model timeouts that are part of the system design).

**RAISE**

RAISE (Rigorous Approach to Industrial Software Engineering) is a tool-set that consists of a method for software development, the RSL (RAISE Specification Language) specification language, and computer tools for automated model checking, analysis, and translation (George, 2008). The key to system verification using the RAISE tool-set is to obtain a specification in RSL.
RSL as a specification language provides a wide range of possibilities for expressing the modelled system: it allows property- and model-oriented styles, applicative and imperative styles, as well as sequentiality and concurrency.

**Modules.** An RSL specification is divided into modules. A module should capture the types, values and axioms that characterize the system or its parts. A generic module definition in RSL has the following form:

```rsl
id =
  class
    declaration-1
    ...
    declaration-n
  end
```

A declaration starts with a keyword identifying the kind of declaration (e.g. type, value, axiom). The following RSL specification of a database illustrates RSL declarations; it is further discussed in the following subsections (George, 2008):

```rsl
DATABASE =
  class
    type
      Person,
      Database = Person-set
    value
      empty : Database,
      register : Person × Database → Database,
      check : Person × Database → Bool
    axiom
      empty = {},
      ∀ p : Person, db : Database • register(p, db) = {p} ∪ db,
      ∀ p : Person, db : Database • check(p, db) = p ∈ db
  end
```

**Type declarations.** A type is a collection of logically related values. A number of built-in types are predefined in RSL; besides these, a RAISE user can also define his or her own types. One can create an abstract type like Person in the example above. Abstract types do not have any predefined operators for manipulating their values (besides the "=" operator, used for comparison of two values). Another kind of type declaration uses the "=" operator, expressing that the new type represents the expression on the right-hand side of the operator. In the example above, using "=" declares Database as a set of persons, i.e., the type containing all finite subsets of the set of values in Person. The RSL atomic types with their values and operators are listed below (Haxthausen, 2010):

o Bool
  values: true, false
  operators: =, ≠, ∧, ∨, ⇒, ⇔, ∀, ∃
o Int
  values: ..., -2, -1, 0, 1, 2, ...
  operators: =, ≠, +, −, *, /, \, ↑, <, ≤, >, ≥, abs, real
o Nat
  values: 0, 1, 2, ...
  operators: =, ≠, +, −, *, /, \, ↑, <, ≤, >, ≥, abs, real
o Real
  values: ..., -4.3, ..., 0.0, ..., 1.0, ...
  operators: =, ≠, +, −, *, /, ↑, <, ≤, >, ≥, abs, int
o Char
  values: 'a', ...
  operators: =, ≠
o Text
  values: "Alice", ...
  operators: =, ≠

For declaring composite types it is possible to choose amongst:
o A product - an ordered collection of values, not necessarily distinct, of some given types (possibly different).
o A list (sequence) - an ordered collection of values, not necessarily distinct, of the same type.
o A set - an unordered collection of distinct values of the same type.
o A map (or table) - an unordered collection of pairs of values.

**Value declarations.** Values can be named in a value declaration. The simplest value declaration has the form "id : type" and can be seen in the RSL database example above (see empty). The actual value identified by empty is described in one of the axioms (discussed later). The next value in that example, register, declares a function that adds a person to the database; it returns the database after performing the registration, i.e., after the person has been added. The third value in that example defines a function check: when check is applied to a database and a person, a Boolean true or false is returned, depending on whether the particular person is a member of the Person-set held by the Database. As for the previous values, an axiom contains a detailed characterisation of this value.
One can distinguish three forms of value definition in RSL (Haxthausen, 2010):
- explicit function definition:
\[ \text{is\_in} : \text{Person} \times \text{Database} \rightarrow \text{Bool} \\ \text{is\_in}(p, db) \equiv p \in db \]
- implicit function definition:
\[ \text{square\_root} : \text{Real} \rightarrow \text{Real} \\ \text{square\_root}(r) \text{ as } s \\ \text{post } s \times s = r \land s \geq 0.0 \\ \text{pre } r \geq 0.0 \]
- axiomatic definition:
\[ \text{is\_in} : \text{Person} \times \text{Database} \rightarrow \text{Bool} \\ \text{axiom} \\ \forall p : \text{Person} \cdot \text{is\_in}(p, \text{empty}) \equiv \text{false,} \\ \forall p : \text{Person}, db : \text{Database} \cdot \text{is\_in}(p, \text{register}(p, db)) \equiv \text{true} \]

**Axiom declarations.** Axioms are used to constrain the values named in the value declarations. Going back to the database example, the first axiom is an expression of type Bool declaring that the value named empty and the empty set {} are the same. All axioms are Boolean expressions that must evaluate to true. The second axiom in our example constrains the register function: it uses a quantified expression to state that, for every person p and every database db, applying register to the pair (p, db) is equivalent to the union {p} ∪ db (i.e., a new set containing both). The third axiom defines the check function, which returns true exactly when the given person belongs to the set held in the database.

**Module extension.** RAISE also allows for module extension, where one can declare a new module that adds types, values, and axioms to an existing module. For that purpose one should use an extend <module_name> with expression.
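The DATABASE specification and its axioms have a direct set-based reading. The following Python sketch illustrates the intended semantics (it is not output of any RAISE tool; register is written as set union, and Person is modelled as a plain string):

```python
# Database is modelled as a frozenset of strings,
# mirroring the RSL declaration "Database = Person-set".

def empty() -> frozenset:
    return frozenset()                  # axiom: empty = {}

def register(p: str, db: frozenset) -> frozenset:
    return frozenset({p}) | db          # axiom: register(p, db) = {p} ∪ db

def check(p: str, db: frozenset) -> bool:
    return p in db                      # axiom: check(p, db) = p ∈ db

db = register("alice", empty())
print(check("alice", db), check("bob", db))  # True False
```

Each function corresponds to one value declaration in the module, and each comment restates the axiom that pins the value down, which is exactly the division of labour between value and axiom declarations described above.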
**Mechanisms for guiding the verification tool selection**

**Relevant model properties**

The most important aspect to consider when deciding among formal methodology tools is whether the properties of the system can be represented in the formal language selected for performing the verification and validation. Different formal description languages can express different sets of features. During validation it is important to identify the crucial system properties and ensure that these properties can be expressed in the chosen formal description; otherwise the verification makes no sense, since when important parts of the specification are lost, it is essentially a different system that is validated.

As far as the MODUS tool-set is concerned, UML and SysML diagrams are the sources of the model descriptions, as presented in previous deliverables (MODUS 2013a, 2013b, 2013c, 2013d). As such, the set of features contained in these diagrams depends on the types of diagrams that are integrated in the model: the system is described at different levels of detail depending on whether the model contains structural or behavioural data. This is clarified in the following. The diagram most commonly used during software development is the class diagram. Diagrams of this type present the system structure by showing classes with their properties and the relations between them. Behavioural diagrams, on the other hand, express the dynamic nature of the system: they describe the states of an object during its lifetime.

The two translation tools selected for integration with the MODUS tool-set, i.e., UML2RSL and Hugo/RT, have different capabilities when it comes to model translation. The following sections present the capabilities of the tools and describe certain translation possibilities and examples.

**UML2RSL**

UML2RSL uses class diagrams to formulate an RSL specification.
As a result of the translation, one obtains a specification composed of several modular RSL files. The top-level module is stored in the file S.rsl; it contains the specification of the model of the whole class diagram. S.rsl uses a set of modules, each containing the specification of one of the classes from the diagram. These modules are named after the class name in capital letters followed by "S_". For each such class module, a lower-level module corresponding to an object of the given class is created, named after the class in capital letters followed by "_". Each of these lower-level modules uses the TYPES.rsl module, where all the abstract types used in the diagram are defined (George, 2008). Considering the simple example of a class diagram presented in Figure 1, six files will be created:
- CLASSOUTOFPACKAGE_.rsl
- CLASSOUTOFPACKAGES_.rsl
- MYCLASS_.rsl
- MYCLASSS_.rsl
- S.rsl
- TYPES.rsl

They form the dependency diagram presented in Figure 2.

**Figure 1** Simple class diagram for UML2RSL translation
Source: Authors

As a result of the translation, MYCLASS_.rsl takes the form:

```rsl
object MYCLASS_ :
  with TYPES in
  class
    type MyClass
    value
      MyAttribute : MyClass → MyAttribute,
      update_MyAttribute : MyAttribute × MyClass → MyClass
      update_MyAttribute(at, o) as o'
        post MyAttribute(o') = at
        pre preupdate_MyAttribute(at, o),
      preupdate_MyAttribute : MyAttribute × MyClass → Bool,
      Association : MyClass → ClassOutOfPackage_Id,
      update_Association : ClassOutOfPackage_Id × MyClass → MyClass
      update_Association(a, o) as o'
        post Association(o') = a
        pre preupdate_Association(a, o),
      preupdate_Association : ClassOutOfPackage_Id × MyClass → Bool,
      consistent : MyClass → Bool
  end
```

Source: Authors

The remaining files output by the UML-to-RSL translation are listed in Appendix A: Translation results UML2RSL. A class, in addition to the normal functions returning the values of its attributes, very often contains operations that change the attributes.
It could be, for example, an event that triggers such an attribute modification in a particular state. This is handled by the translator by generating RSL functions for this purpose; their preconditions are to be completed by the user (see e.g. preupdate_Association). In this example, the type MyClass describes the set of all possible states of an object of class MyClass, and MyClass-set represents all possible sets of MyClass objects that can be observed at a given time; it can be seen as a container for the class. For each class one can create new objects and destroy or modify existing objects. For this purpose some typical functions (empty, add, is_in, remove and update) exist over the set of instances of each class and are generated by the translator. Examples of these functions can be seen in the CLASSOUTOFPACKAGES_ and MYCLASSS_ files (see Appendix A: Translation results UML2RSL).

To check the consistency of the whole system, i.e., to verify that all constraints hold, a number of axioms are defined at the top level. This makes it possible to check whether the system is in a consistent state before and after each state change. For this purpose a number of similar functions are generated: they are included in the top-level module S, and a set of Boolean functions is also created in the lower-level modules. The lower-level functions allow the user to check the consistency of objects and classes, while the top-level function uses the lower-level ones to check the consistency of the whole system.

This simple example also shows an association, which is translated as two RSL functions between the classes involved (or one, if the association is navigable in only one direction). The UML2RSL translation tool also accepts and translates composition, aggregation, generalization (but only single inheritance), and abstract, root, leaf and template classes; dependencies are ignored (details can be found in (George, 2008)).
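The file-naming convention behind the generated modules can be restated in a few lines of Python (a hypothetical helper for illustration, not part of UML2RSL itself):

```python
def uml2rsl_files(class_names):
    """File layout produced for a class diagram, per the naming convention:
    per class, '<NAME>S_.rsl' (class-level module) and '<NAME>_.rsl'
    (object-level module), plus the top-level S.rsl and shared TYPES.rsl."""
    files = []
    for name in class_names:
        upper = name.upper()
        files.append(upper + "S_.rsl")   # class-level module
        files.append(upper + "_.rsl")    # object-level module
    files += ["S.rsl", "TYPES.rsl"]      # top-level spec and abstract types
    return sorted(files)

print(uml2rsl_files(["MyClass", "ClassOutOfPackage"]))
```

Applied to the two classes of Figure 1, this reproduces the six file names listed above, which is a handy sanity check when reading the translator's output directory.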
**Hugo/RT**

The translation performed by Hugo/RT goes beyond basic class diagram translation (Knapp, 2008). Hugo/RT can translate UML models that contain classes with state machines, collaborations, interactions, and OCL (Object Constraint Language) constraints. The state machines, describing the states the objects can be in, can be complemented by another dynamic view of the system, namely the sequence and collaboration diagrams, which describe the interactions between the different objects in the system. Hugo/RT can be used to verify whether these complementary views of the dynamic properties of the system are coherent; in other words, it allows verifying whether the system described by the state machines can fulfil the interaction described in the collaboration.

The UML state machines should be described in the context of a UML class, and each class needs to declare all the events that its state machine can handle (Knapp, 2008). The Hugo/RT translation creates a separate PROMELA process for every state machine.

Consider an example borrowed from (Schäfer, Knapp, Merz, 2001): a system composed of an ATM and a Bank, presented by the simple class diagram in Figure 3.

**Figure 3** ATM - Bank class diagram: the ATM class declares the signals PINVerified, reenterPIN and abort, and is connected to the Bank class by a one-to-one association between the roles atm and bank.
Source: Schäfer, Knapp, Merz, 2001

Each of the classes contains a state machine. For instance, consider the state machine of the ATM module (see Figure 4). This state diagram contains simple and composite states together with guarded and unguarded transitions.

**Figure 4** ATM state machine
Source: Schäfer, 2001

**Strategies for selection of the formal description tools**

This section covers the possible strategies for selecting a proper formal verification/validation path.
This includes the selection of the UML-to-formal-language translator and, as such, also the model checker. The strategy included in this deliverable takes into account only the tools considered for the first stage of the MODUS tool-set development. As described earlier, the two model checkers identified as first-priority tools to be integrated within the MODUS tool-set are SPIN and RAISE (SAL). This means that Hugo/RT and UML2RSL will be the tools that perform the translation between a UML model and a formal description. Considering the different properties of the aforementioned translators, the strategy for selecting the tools for formal verification and validation is rather simple (at least at this stage of the development). Since UML2RSL handles only UML models containing class diagrams, it can be considered only for this basic type of model. UML models containing state machines within their classes cannot be translated into RSL, because the translator does not process behavioural diagram types; a large part of the system description would therefore be lost. For models that contain state machines describing the behaviour of the objects of a certain class type, Hugo/RT should be used in the currently planned MODUS tool-set formal verification block. The resulting PROMELA model produced by the Hugo/RT translation can later be fed to SPIN for formal verification against the collaboration diagrams and OCL constraints. If class diagrams that do not contain any state diagrams are translated using Hugo/RT, the tool will prompt a warning indicating that no behaviour description was detected, and an almost empty file (containing only an idle process) will be generated. This strategy for selecting the formal verification tool to be used in each case can be implemented as a simple algorithm, as depicted in Figure 5. **Figure 5** Formal methods selection 1. Export UML to XMI using the Eclipse Modeling Plug-in 2. Does the model contain state diagrams? 
   - **YES**: Translate the model using Hugo/RT, then process the resulting PROMELA model using SPIN
   - **NO**: Translate the model using UML2RSL, then process the resulting specification using the RSL tools, e.g. SAL
3. Return the results of the formal methods

Source: Authors

Conclusions

MODUS is targeting the market of tools for embedded software engineering. The project will develop a toolset advancing embedded systems quality, targeting the growing group of SMEs (and bigger companies as well) specialising in the development of embedded systems in different industrial sectors (e.g. avionics, automotive systems, consumer electronics, telecommunications systems, etc.). It should be emphasized that MODUS does not aim to compete with the major suppliers of CASE tools currently used in embedded software engineering. On the contrary, the project aims to facilitate the implementation of quality strategies while preserving existing investments in technical know-how and tools. The MODUS approach is aligned with present market needs; familiarity with tools, ease of use, and compatibility/interoperability remain among the most important criteria when selecting the development environment for a project. Specifically, this paper has focused on the formal verification part of the MODUS toolset, but the uniqueness of the MODUS toolset lies in the combination of formal verification, HW/SW co-simulation, SW performance tuning and customizable source code generation. References About the authors Lukasz Brewka received his M.Sc. degree in 2008 and his Ph.D. degree in 2012, both from the Technical University of Denmark, Department of Photonics Engineering. He was involved in the European projects ICT-ALPHA and MODUS. His research interests include quality assurance in telecommunications and software systems, from network QoS to software QA. 
Author can be contacted at brewka111@gmail.com José Soler earned his PhD in Electrical Engineering (2006) from DTU (Denmark), and his MSc in Telecommunication Engineering (1999) from Zaragoza University (Spain). Currently he is an Associate Professor in Telecommunication Networks at DTU Fotonik. His research interests include heterogeneous networks integration, and telecommunication-related software and services. Author can be contacted at joss@fotonik.dtu.dk Michael S. Berger was born in 1972 and received the M.Sc. EE and Ph.D. degrees from the Technical University of Denmark in 1998 and 2004, respectively. He is currently an Associate Professor at the Technical University of Denmark within the area of switching and network node design. Currently, he is leading a project on next generation IP and Carrier Ethernet networks, partly funded by the Danish National Advanced Technology Foundation. Author can be contacted at msbe@fotonik.dtu.dk
Buckets: Aggregative, Intelligent Agents for Publishing

Michael L. Nelson, NASA Langley Research Center, MS 158, Hampton, VA 23681-0001, USA. E-mail: m.l.nelson@larc.nasa.gov

Kurt Maly, Stewart N. T. Shen, and Mohammad Zubair, Old Dominion University, Computer Science Department, Norfolk, VA 23529, USA. E-mail: {maly, shen, zubair}@cs.odu.edu

National Aeronautics and Space Administration, Langley Research Center, Hampton, Virginia 23681-2199. May 1998

ABSTRACT Buckets are an aggregative, intelligent construct for publishing in digital libraries. The goal of research projects is to produce information. This information is often instantiated in several forms, differentiated by semantic types (report, software, video, datasets, etc.). A given semantic type can be further differentiated by syntactic representations as well (PostScript version, PDF version, Word version, etc.). Although the information was created together and subtle relationships can exist between them, different semantic instantiations are generally segregated along currently obsolete media boundaries. Reports are placed in report archives, software might go into a software archive, but most of the data and supporting materials are likely to be kept in informal personal archives or discarded altogether. Buckets provide an archive-independent container construct in which all related semantic and syntactic data types and objects can be logically grouped together, archived, and manipulated as a single object. Furthermore, buckets are active archival objects and can communicate with each other, people, or arbitrary network services. KEYWORDS: Digital library architectures, agents, archiving, multi-format, bucket, container, package. 
INTRODUCTION Digital libraries (DLs) are an important research topic in many scientific communities and have already become an integral part of the research process. However, access to these DLs is not as easy as users would like. Digital libraries are partitioned both by the discipline they serve (computer science, aeronautics, physics, etc.) and by the format of their holdings (technical reports, video, software, etc.). There are two significant problems with current DLs. First, interdisciplinary research is difficult because the collective knowledge of each discipline is stored in incompatible DLs that are known only to the specialists in the subject. The second significant problem is that although scientific and technical information (STI) consists of manuscripts, software, data, etc., the manuscript receives the majority of attention, and the other components are often discarded (Figure 1) [20]. Although non-manuscript digital libraries such as the software archive Netlib [2] have been in use for some time, they still place the burden of STI reintegration on the customer. A NASA study found that customers desire to have the entire set of manuscripts, software, data, etc. available in one place [19]. With the increasing availability of all-digital storage and transmission, maintaining the tight integration of the original STI collection is now possible. Old Dominion University and NASA Langley Research Center are developing NCSTRL+ to address the multidiscipline and multi-genre problems. NCSTRL+ is based on the Networked Computer Science Technical Report Library (NCSTRL) [5], which is a highly successful digital library offering access to over 100 university departments and laboratories since 1994, and is implemented using the Dienst protocol [9]. 
During the development stage, NCSTRL+ includes selected holdings from the NASA Technical Report Server (NTRS) [14] and NCSTRL, providing clusters of collections along the dimension of disciplines such as aeronautics, space science, mathematics, computer science, and physics, as well as clusters along the dimension of publishing organization and genre, such as project reports, journal articles, theses, etc. The DL aspects of NCSTRL+ are discussed in [15, 16]. Although developed for NCSTRL+ and with our modified version of the Dienst protocol in mind, buckets are protocol and archive independent, needing only standard World Wide Web (WWW) capability to function. This paper gives an overview of bucket functionality, examines similar work, and discusses current implementation and future plans. OVERVIEW Buckets are object-oriented container constructs in which logically grouped items can be collected, stored, and transported as a single unit. For example, a typical research project at NASA Langley Research Center produces information tuples: raw data, reduced data, manuscripts, notes, software, images, video, etc. Normally, only the report part of this information tuple is officially published and tracked. The report might reference on-line resources, or even include a CD-ROM, but these items are likely to be lost or degrade over time. Some portions such as software, can go into separate archives (i.e., COSMIC or the Langley Software Server) but this leaves the researcher to re-integrate the information tuple by selecting pieces from multiple archives. Most often, the software and other items, such as datasets are simply discarded. After 10 years, the manuscript is almost surely the only surviving artifact of the information tuple. Large archives could have buckets with many different functionalities. Not all bucket types or applications are known at this time. 
However, we can describe a generalized bucket as containing many formats of the same data item (PS, Word, Framemaker, etc.) but more importantly, it can also contain collections of related non-traditional STI materials (manuscripts, software, datasets, etc.). Thus, buckets allow the digital library to address the long-standing problem of ignoring software and other supportive material in favor of archiving only the manuscript [20] by providing a common mechanism to keep related STI products together. A single bucket can have multiple packages. Packages can correspond to the semantics of the information (manuscript, software, etc.), or can be more abstract entities such as the metadata for the entire bucket, bucket terms and conditions, pointers to other buckets or packages, etc. A single package can have several elements, which are typically different file formats of the same information, such as the manuscript package having both PostScript and PDF elements. Elements correspond to the syntax of a package. Packages and elements are illustrated in Figure 2. **Bucket Requirements** All buckets have unique ids, handles [7], associated with them. Buckets are intended to be either standalone objects or to be placed in digital libraries. A standalone bucket can be accessible through normal WWW means without the aid of a repository. Buckets are intended to be useful even in repositories that are not knowledgeable about buckets in general, or possibly just not about the specific form of buckets. Buckets should not lose functionality when removed from their repository. The envisioned scenario is that NCSTRL+ will eventually have moderate numbers (10s - 100s of thousands) of intelligent, custom buckets instead of large numbers (millions) of homogeneous buckets. 
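The bucket → package → element containment just described can be sketched as a small Python data model. This is an illustrative sketch only, not the paper's Perl implementation; the field names and the example handle are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative data model of the containment described in the text:
# a bucket holds packages (semantic types), a package holds elements
# (syntactic representations of the same information).
@dataclass
class Element:
    name: str          # e.g. "report.ps"
    fmt: str           # syntactic representation, e.g. "PostScript"

@dataclass
class Package:
    name: str                           # semantic type, e.g. "manuscript"
    elements: list = field(default_factory=list)

@dataclass
class Bucket:
    handle: str                         # globally unique identifier
    packages: list = field(default_factory=list)

# Hypothetical handle, loosely modeled on NCSTRL-style publishing authorities.
b = Bucket(handle="ncstrl.odu.cs/TR-98-01")
ms = Package("manuscript",
             [Element("report.ps", "PostScript"), Element("report.pdf", "PDF")])
b.packages.append(ms)
b.packages.append(Package("software", [Element("solver.tar.gz", "gzip tar")]))

print(len(b.packages))               # 2
print([e.fmt for e in ms.elements])  # ['PostScript', 'PDF']
```

Note how the manuscript package carries two elements for the same information, matching the PostScript/PDF example in the text, while the software package keeps a non-manuscript product inside the same bucket.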
Figure 3 contrasts a traditional architecture of having the repository interface contain all the intelligence and functionality with that of a bucket architecture where the repository intelligence and functionality can be split between the repository and individual buckets. This could be most useful when individual buckets require custom terms and conditions for access (security, payment, etc.). Figure 3 also illustrates a bucket gaining some repository intelligence as it is extracted from the archive en route to becoming a standalone bucket. A high level list of bucket requirements include: - a bucket is of arbitrary size - a bucket has a globally unique identifier - a bucket contains 0 or more components, called packages (no defined limit) - a package contains 1 or more components, called elements (no defined limit) - an element can be a file, or a "pointer" to another - both packages and elements can be other buckets (i.e., buckets can be nested) - a package can be a "pointer" to a remote bucket, package, or element (remote package or element access requires "going through" the remote hosting bucket) - packages and elements can be "pointers" to arbitrary network services, foreign keys to databases, etc. - buckets can keep internal logs of actions performed on them - interactions with packages or elements are made only through defined methods on a bucket - buckets can initiate actions; they do not have to wait to be acted on - buckets can exist inside or out of a repository Table 1 lists the required bucket methods; other methods can be custom defined. Note that Table 1 differs from protocols such as the Repository Access Protocol (RAP) [10]. RAP defines what actions the Repository understands, while we define the actions that buckets understand. Although the two are not mutually exclusive, the current plan is to not implement RAP for NCSTRL+. Table 2 lists the default private methods for the bucket. 
We expect this list to grow as the public methods are refined, especially as the current terms and conditions model moves past its current hostname and username/password capability. Bucket Tools There are two main tools for bucket use. One is the author tool, which allows the author to construct a bucket with no programming knowledge. Here, the author specifies the metadata for the entire bucket, adds packages to the bucket, adds elements to the packages, provides metadata for the packages, and selects applicable clusters. The author tool gathers the various packages into a single component and parses the packages based on rules defined at the author's site. Many of the options of the author tool will be set locally via the second bucket tool, the management tool. The management tool provides an interface to allow site managers to configure the default settings for all authors at that site. The management tool also provides an interface to query and update buckets at a given repository. Additional methods can be added to buckets residing in a repository by invoking add_method on them and transmitting the new code. From this interface, the manager can halt the archive and perform operations on it, including updating or adding packages to individual buckets, updating or adding methods to groups of buckets, and performing other archival management functions. Bucket Implementation Our bucket prototypes are written in Perl 5, and make use of the fact that Dienst uses hypertext transfer protocol (HTTP) as a transport protocol. Like Dienst, bucket metadata is stored in RFC-1807 format [12], and package and element information is stored in newly defined optional and repeatable fields. Dienst has all of a document's files gathered into a single Unix directory. A bucket follows the same model and has all relevant files collected together using directories from file system semantics. Thus a Dienst administrator can cd into the appropriate directory and access the contents. 
However, access for regular users occurs through the WWW. The bucket is accessible through a Common Gateway Interface (CGI) script that enforces terms and conditions, and negotiates presentation to the WWW client.

Table 1: Default Public Bucket Methods

Method | Argument | Currently Implemented | Description
--- | --- | --- | ---
metadata | format | Yes | with no argument, returns the metadata in the default format; with an argument, derives and returns the desired format
display | -- | Yes | default method; bucket "unveils" itself to requester
id | -- | Yes | returns the bucket's unique identifier (handle)
list_tc | -- | No | describes the nature of the publicly visible terms and conditions
list_methods | -- | Yes | list all public methods known by a bucket
list_owners | -- | Yes | list all principals that can modify the bucket
add_owner | owner | No | add to the list of owners
delete_owner | owner | No | delete from the list of owners
add_package | package | Yes | adds a package to an existing bucket
delete_package | package | Yes | deletes a package from an existing bucket
add_element | element | Yes | adds an element to an existing package
delete_element | element | Yes | deletes an element from an existing package
get_package | package | No | capability to get an entire package, including all elements
get_element | element | No | get an element from a package in a bucket; currently direct URLs are used for element extraction
add_method | method | Yes | "teaches" a new method to an existing bucket
delete_method | method | Yes | removes a method from a bucket
copy_bucket | destination | No | export a copy of a bucket; the original remains
move_bucket | destination | No | move the original bucket; no version remains

Table 2. 
Default Private Bucket Methods <table> <thead> <tr> <th>Method</th> <th>Argument</th> <th>Currently Implemented</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>tc</td> <td>method name</td> <td>Yes</td> <td>all public methods pass through this terms and conditions method</td> </tr> <tr> <td>derive metadata</td> <td>format</td> <td>No</td> <td>converts from the default metadata format to the desired format</td> </tr> </tbody> </table> The philosophy of Dienst is to minimize the dependency on HTTP. Except for the User Interface service, Dienst does not make specific assumptions about the existence of HTTP or the Hypertext Markup Language (HTML). However, Dienst does make very explicit assumptions about what constitutes a document and its related data formats. Built into the protocol are the definitions of PostScript, ASCII text, inline images, scanned images, etc. To add a new file format, such as the increasingly popular PDF, Dienst configuration files have to be changed. If the protocol were resident only at one site, this would be acceptable. However, Dienst servers are running at nearly 100 sites; protocol additions require a coordinated logistical effort to synchronize versions and provide uniform capability. We favor making Dienst less knowledgeable about dynamic topics such as file format, and making that the responsibility of buckets (Figure 4). In NCSTRL+, Dienst is used as an index, search, and retrieval protocol. When the user selects an entry from the search results, Dienst would normally have the local User Interface service use the Describe verb to peer into the contents of the documents directory (including the metadata file), and Dienst itself would control how the contents are presented to the user. In NCSTRL+, the final step of examining the directory structure is skipped, and the directory's index.cgi file is invoked. The default method for an index.cgi is generally the display method, so the user should notice little difference. 
However, at that point the bucket, not Dienst, determines what the user sees. RELATED WORK There has been a lot of research in the area of redefining the concept of “document.” In this section we examine some of these projects and technologies that are similar to buckets. Digital Objects Buckets are most similar to the digital objects first described in the Kahn/Wilensky Framework [8], and its derivatives such as the Warwick Framework containers [11] and the more recent Flexible and Extensible Digital Object Repository Architecture (FEDORA) [4]. In FEDORA, DigitalObjects are containers, which aggregate one or more DataStreams. DataStreams are accessed through an Interface, and an Interface may in turn be protected by an Enforcer. Table 3 is a continuation of Table 1 from [4], with the fourth column added to show the bucket equivalents of concepts from the Kahn/Wilensky Framework, the Warwick Framework, and FEDORA. FEDORA has not been completely implemented at this point, and it is unknown what repository or digital library protocol limitations will be present. Also, it is unknown if FEDORA plans to allow DigitalObjects to be intelligent agents, similar to the Bucket Matching System described below. **Multivalent Documents** Multivalent documents [17] appear similar to buckets at first glance. However, the focus of multivalent documents is more on expressing and managing the relationships of differing “semantic layers” of a document, including language translations, derived metadata, annotations, etc. There is not an explicit focus on the aggregation of several existing data types into a single container. **Open Doc and OLE** OpenDoc [13] and OLE [1] are two similar technologies that provide the capability for compound documents. Both technologies can be summarized as viewing the document as a loose confederation of different embedded data types. 
The focus on embedded documents is less applicable to our digital library requirements than that of a generic container mechanism with separate facilities for document storage and intelligence. OpenDoc and OLE documents are more suitable to be elements within a bucket, rather than a possible bucket implementation. **Digibox** The DigiBox [18] technology is a container construct designed for electronic commerce. The goal of DigiBox is "to permit proprietors of digital information to have the same type and degree of control present in the paper world" [18]. As such, the focus of the DigiBox capabilities is heavily oriented toward cryptographic integrity of the contents, and not so much toward the less stringent demands of the current average digital library. There also appear to be no hooks to make a DigiBox an intelligent agent. DigiBox is a commercial endeavor and is thus less suitable for our NCSTRL+ prototype. **CURRENT AND FUTURE WORK** We are using the author tool to populate NCSTRL+ to gain insight on how to improve its operation. We are starting with buckets authored at Old Dominion University and NASA Langley Research Center and are choosing the initial entries to be "full" buckets, with special emphasis on buckets relating to NSF projects for ODU and for wind tunnel and other experimental data for NASA. Until NCSTRL+ becomes a full production system, we are primarily seeking rich-functionality buckets that contain diverse sets of packages. Alternate Implementations We are planning to also implement buckets using Lotus Domino, a Web server integrated with a Lotus Notes database server, in addition to the current CGI and Perl implementation. The bucket API as defined in Tables 1 & 2 will remain unchanged. In experimenting with Domino, we also plan to investigate implementing NCSTRL+ components without using Dienst. We plan to evolve NCSTRL+ to support a generalized publishing and searching model that can be implemented using Dienst or other DL protocols. 
Bucket Matching System The premise of the Bucket Matching System (BMS) is that the archived objects (buckets) should handle as many tasks as possible, not humans. Toward this end, we are designing the BMS as a communication mechanism for buckets to exchange information among themselves. The "tuple-space" communication of the Linda programming language [3] is the model for the BMS. The following example illustrates a usage of the BMS. Consider a technical report published by the CS department which is also submitted to a conference. The report appears under the server maintained by the department, whose publishing authority is ncstrl.odu.cs. If the conference paper is accepted, it will eventually be published by the conference sponsor, say the ACM, whose publishing authority would be ncstrl.acm. Although the conference paper will surely appear in a modified format, the tech report and the conference paper are clearly related, despite being separated by publishing authority, date of publication, and revisions. Two separate but related objects now exist, and are likely to continue to exist. How best to create the desired linkage between the objects? "ncstrl.acm" may have neither the resources nor the interest to spend the time searching out previous versions of a manuscript. "ncstrl.odu.cs" cannot link to the conference bucket at the creation time of the ODU bucket, since the conference bucket did not exist at the time. It is unrealistic to suggest that the relevant parties will go back to the ncstrl.odu.cs collection and create the linkage correctly after several months have passed. The solution is to have both buckets publish their metadata, or some subset of it, in the BMS. When a match, or near match, is found, the buckets can either 1) automatically link to each other; or, more likely, 2) bring the possible linkage to the attention of a person, who will provide the final approval for the linkage. 
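The metadata-matching step can be sketched as follows. The Jaccard token-overlap measure and the exact 0.9/0.7 thresholds are hypothetical illustrative choices, not the BMS design; the paper leaves the similarity measure and values to experimentation.

```python
# Hypothetical metadata matcher: Jaccard overlap of title/author tokens.
# The measure and thresholds are illustrative choices, not the BMS spec.
def tokens(record):
    return set((record["title"] + " " + record["authors"]).lower().split())

def similarity(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def classify(a, b):
    s = similarity(a, b)
    if s >= 0.9:
        return "likely same work (propose linkage)"
    if s >= 0.7:
        return "possibly similar work by different authors"
    return "no match"

tr   = {"title": "Buckets Aggregative Intelligent Agents for Publishing",
        "authors": "Nelson Maly Shen Zubair"}
conf = {"title": "Buckets Aggregative Intelligent Agents for Publishing",
        "authors": "Nelson Maly Shen Zubair"}
print(classify(tr, conf))   # likely same work (propose linkage)
```

A near match (rather than an exact one) would fall into the middle band and, as the text suggests, be routed to a person for final approval of the linkage.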
There are a number of ways that a "match" can be found, but most likely it will be similar metadata within some definable threshold (e.g., 90% similar). Other uses for the BMS could include:

- Find similar works by different authors. The exact values would have to be determined by experimentation, but it is possible to envision a similarity ranking that is slightly lower being an indication of a similar work by different authors. For example, a similar work by a different author might fall in the range 70% < similarity < 90%.
- Arbitrary selective dissemination of information (SDI) services. When a user's profile is matched, a notification can be sent immediately or a digest sent at every defined time interval (e.g., weekly). This method can be used to track different versions of a report, not just inter-genre (technical report vs. conference paper) or inter-institution (the author moves to a different university) issues. If version 2.0 of a bucket comes out, it can "find" all previous versions, and the appropriate actions can be taken (e.g., create a fully connected graph between the buckets, delete previous buckets, etc.).
- Metadata scrubbing. The issue of maintaining consistency and quality of metadata information is an increasingly important concern in digital libraries [6]. Part of the BMS could also include a metadata scrubber that, based on rules and heuristics defined at the scrubber, could automatically make or suggest updates to metadata. For example, the scrubber could have all references to "Hampton Institute" indicate the name change to "Hampton University", handle an author's name change (for example, if someone changes their name upon marriage), correct errors that may have been introduced, etc.

The BMS could be implemented on multiple workstations, and would be primarily batch processing. 
Given that some of the operations would be computationally expensive, it can be done with loose time guarantees, perhaps even done on stolen idle cycles (from "hallway clusters" of workstations). CONCLUSIONS Buckets provide a mechanism for logically grouping the various semantic data objects (manuscript, software, datasets, etc.) and the various syntactic representations (PostScript, PDF, etc.). The ability to keep all the data objects together with their relationships intact relieves the user from having to reintegrate the original information tuple from many separate archives. Buckets also provide a more convenient method for describing the output of research projects, and provide a finer granularity for controlling terms and conditions within an archive. The aggregative aspects of buckets have already been implemented. The tools to make buckets easy to use and manage are being created. The Bucket Matching System will allow buckets to be intelligent agents, and allow inter-bucket communication as well as communication and action with arbitrary network resources. REFERENCES 
Reports are placed in report archives, software might go into a software archive, but most of the data and supporting materials are likely to be kept in informal personal archives or discarded altogether. Buckets provide an archive-independent container construct in which all related semantic and syntactic data types and objects can be logically grouped together, archived, and manipulated as a single object. Furthermore, buckets are active archival objects and can communicate with each other, with people, or with arbitrary network services.

**Subject Terms:** Digital library architectures, agents, archiving, multi-format, bucket, container, package

**Security Classification:** Report: Unclassified; Page: Unclassified; Abstract: Unclassified
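As a rough illustration of the aggregation the abstract describes (this class and its method names are invented for the sketch and are not from the paper), a bucket can be thought of as a two-level mapping from semantic types to their syntactic representations:

```python
class Bucket:
    """Toy model of a bucket: groups related data objects by semantic
    type (report, software, dataset, ...) and, within each type, by
    syntactic representation (PDF, PostScript, ...)."""

    def __init__(self):
        self.contents = {}  # semantic type -> {format -> object/path}

    def add(self, semantic_type, fmt, obj):
        self.contents.setdefault(semantic_type, {})[fmt] = obj

    def formats(self, semantic_type):
        """List the syntactic representations held for one semantic type."""
        return sorted(self.contents.get(semantic_type, {}))

# One bucket keeps the whole information tuple together:
b = Bucket()
b.add("report", "pdf", "report.pdf")
b.add("report", "ps", "report.ps")
b.add("software", "tar.gz", "code.tar.gz")
```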
INTRODUCTION TO MATLAB PARALLEL COMPUTING TOOLBOX

Keith Ma (keithma@bu.edu), Research Computing Services (help@rcs.bu.edu), Boston University

Overview
• Goals:
  1. Basic understanding of parallel computing concepts
  2. Familiarity with MATLAB parallel computing tools
• Outline:
  • Parallelism, defined
  • Parallel “speedup” and its limits
  • Types of MATLAB parallelism (multi-threaded/implicit, distributed, explicit)
  • Tools: parpool, spmd, parfor, gpuArray, etc.

**Parallel Computing**

**Definition:** The use of two or more processors in combination to solve a single problem.
- Serial performance improvements have slowed, while parallel hardware has become ubiquitous.
- Parallel programs are typically harder to write and debug than serial programs.

Parallel speedup, and its limits (1)
• “Speedup” is a measure of performance improvement:
\[ \text{speedup} = \frac{\text{time}_{\text{old}}}{\text{time}_{\text{new}}} \]
• For a parallel program, we can run with an arbitrary number of cores, \( p \).
• Parallel speedup is a function of the number of cores:
\[ \text{speedup}(p) = \frac{\text{time}_{\text{old}}}{\text{time}_{\text{new}}(p)} \]

Parallel speedup, and its limits (2)
- Amdahl’s law: ideal speedup for a problem of fixed size
- Let: $p =$ number of processors/cores, $\alpha =$ fraction of the program that is strictly serial, $T =$ execution time. Then:
$$T(p) = T(1)\left(\alpha + \frac{1 - \alpha}{p}\right)$$
And:
$$S(p) = \frac{T(1)}{T(p)} = \frac{1}{\alpha + \frac{1 - \alpha}{p}}$$
- Think about the limiting cases: $\alpha = 0$ gives $S(p) = p$ (linear speedup); $\alpha = 1$ gives $S(p) = 1$ (no speedup); $p = 1$ gives $S(1) = 1$; and as $p \to \infty$, $S(p) \to 1/\alpha$.

Parallel speedup, and its limits (3)
- Diminishing returns as more processors are added
- Speedup is limited whenever $\alpha > 0$
- “Linear speedup” is the best you can do (usually)

*Accelerating MATLAB Performance, Yair Altman, 2015*

Parallel speedup, and its limits (4)
- The program “chokes” if too many cores are added
- Caused by communication cost, overhead, and resource contention

[Figure: a more realistic Amdahl’s-law curve, in which parallelization overhead eventually reverses the speedup]

*Accelerating MATLAB Performance, Yair Altman, 2015*

Hardware: single core
- No parallelism
- Good luck finding one…

Hardware: multi-core
- Each processor core runs independently
- All cores can access system memory
- Common in desktops, laptops, smartphones, probably toast...

Hardware: multi-core, multi-processor
- Each processor core runs independently
- All cores can access system memory
- Common in workstations and servers (including the SCC here at BU)

Hardware: accelerators
- An accelerator/GPU is a separate chip with many simple cores.
- GPU memory is separate from system memory.
- Not all GPUs are suitable for research computing tasks (they need support for the APIs and decent floating-point performance).

Hardware: clusters
- Several independent computers, linked via a network
- System memory is distributed (i.e.
each core cannot access all cluster memory)
- Bottlenecks: inter-processor and inter-node communication; contention for memory, disk, and network bandwidth; etc.

Three Types of Parallel Computing
*Parallel MATLAB: Multiple Processors and Multiple Cores, Cleve Moler, MathWorks*
- **Multithreaded parallelism:** one instance of MATLAB automatically generates multiple simultaneous instruction streams. Multiple processors or cores, sharing the memory of a single computer, execute these streams. An example is summing the elements of a matrix.
- **Distributed computing:** multiple instances of MATLAB run multiple independent computations on separate computers, each with its own memory. In most cases, a single program is run many times with different parameters or different random number seeds.
- **Explicit parallelism:** several instances of MATLAB run on several processors or computers, often with separate memories, and simultaneously execute a single MATLAB command or M-function. New programming constructs, including parallel loops and distributed arrays, describe the parallelism.

MATLAB Parallel Computing Toolbox (PCT)
- Supports multithreading (adds GPUs), distributed parallelism, and explicit parallelism
- You need:
  - MATLAB and a PCT license (note that licenses are limited at BU: 500 for MATLAB, 45 for the PCT)
  - Parallel hardware, as discussed above

Distributed Jobs
• Define and run independent jobs
• No need to parallelize code!
• Expect linear speedup
• Each task must be entirely independent
• Approach: define jobs, submit them to a scheduler, gather the results
• Using the MATLAB scheduler – not recommended:

```matlab
c = parcluster;                    % get the default cluster profile
j = createJob(c);                  % create a job to hold the tasks
createTask(j, @sum, 1, {[1 1]});   % each task: run @sum, 1 output
createTask(j, @sum, 1, {[2 2]});
createTask(j, @sum, 1, {[3 3]});
submit(j);
wait(j);
out = fetchOutputs(j);             % gather results from all tasks
```

Distributed Jobs on SCC
• For task-parallel jobs on the SCC, use the cluster scheduler (not the tools provided by the PCT)
• To define and submit one job:
qsub matlab -nodisplay -singleCompThread -r "rand(5), exit"
• To define and submit many jobs, use a job array:
qsub -N myjobs -t 1-10:2 matlab -nodisplay -singleCompThread -r \
"rand($SGE_TASK_ID), exit"
• Results must be gathered manually, typically by a “cleanup” job that runs after the others have completed:
qsub -hold_jid myjobs matlab -nodisplay -singleCompThread -r \
"my_cleanup_function, exit"
• Much more detail on MATLAB batch jobs on the SCC [here](#).

Parallel Jobs
- Split up the work for a single task
- Must write parallel code
- Your mileage may vary – some parallel algorithms may run efficiently and others may not
- Programs may include both parallel and serial sections
- Client: the head MATLAB session – creates workers, distributes work, receives results
- Workers/Labs: independent, headless MATLAB sessions
  - do not share memory
  - create them before you need them, destroy them when you are done
- Modes:
  - pmode: a special interactive mode for learning the PCT and for development
  - matlabpool/parpool: the general-use mode for both interactive and batch processing

Parallel Jobs: matlabpool/parpool
- *matlabpool/parpool* creates workers (a.k.a. labs) to do parallel computations.
- **Usage:**

```matlab
parpool(2)      % start a pool of 2 workers
% ... perform parallel tasks (parfor, spmd) ...
delete(gcp)     % shut down the pool
```

- No access to GUI or graphics (workers are “headless”)
- Parallel method choices that use *parpool* workers:
  - *parfor*: parallel for-loop; can’t be mixed with *spmd*
  - *spmd*: single program, multiple data parallel region

parfor (1)
- Simple: a parallel for-loop
- The workload is distributed evenly and automatically according to the loop index. The details are intentionally opaque to the user.
- Many additional restrictions on what can and cannot be done in a parfor loop – this is the price of simplicity
- Data start on the client (base workspace) and are automatically copied to the workers’ workspaces. Output is copied back to the client when done.
- Basic usage:

```matlab
parpool(2)
x = zeros(1,10);
parfor i = 1:10
    x(i) = sin(i);
end
delete(gcp)
```

parfor (2)
- For the parfor loop to work, variables inside the loop must all fall into one of these categories:

<table>
<thead>
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Loop</td>
<td>A loop index variable for arrays</td>
</tr>
<tr>
<td>Sliced</td>
<td>An array whose segments are manipulated on different loop iterations</td>
</tr>
<tr>
<td>Broadcast</td>
<td>A variable defined before the loop and is used inside the loop but never modified</td>
</tr>
<tr>
<td>Reduction</td>
<td>Accumulates a value across loop iterations, regardless of iteration order</td>
</tr>
<tr>
<td>Temporary</td>
<td>Variable created inside the loop, but not used outside the loop</td>
</tr>
</tbody>
</table>

```matlab
c = pi;
s = 0;
X = rand(1,100);
parfor k = 1 : 100
    a = k;            % a - temporary variable; k - loop variable
    s = s + k;        % s - reduction variable
    if k <= c         % c - broadcast variable
        a = 3*a - 1;
    end
    Y(k) = X(k) + a;  % Y - output sliced variable; X - input sliced variable
end
```

parfor (3): what can’t you do?
- Data dependency: loop iterations must be independent

```matlab
% NOT allowed: each iteration depends on the previous one
a = zeros(1,100);
parfor i = 2:100
    a(i) = myfct(a(i-1));
end
```

parfor (4): what can’t you do?
- Data dependency exceptions: “reduction” operations that combine results from loop iterations in order-independent or entirely predictable ways

```matlab
% computing a factorial using parfor
x = 1;
parfor idx = 2:100
    x = x * idx;          % x - reduction variable
end

% reduction assignment using concatenation
d = [];
parfor idx = -10 : 10
    d = [d, idx*idx*idx];
end

% reduction assignment of a row vector
v = zeros(1,100);
parfor idx = 1:100
    v = v + (1:100)*idx;
end
```

parfor (4): what can’t you do?
- Data dependency exception: “reduction” operations that combine results from loop iterations
- Reduction operations must not depend on loop order – they must satisfy $x \diamond (y \diamond z) = (x \diamond y) \diamond z$
- Plus (+) and multiply (*) operators pass; subtract (-) and divide (/) may fail:

```matlab
% NOT allowed: the result depends on iteration order
s = 0;
parfor i = 1:10
    s = i - s;
end
```

parfor (5): what can’t you do?
• The loop index must be consecutive integers:

```matlab
parfor i = 1 : 100        % OK
parfor i = -20 : 20       % OK
parfor i = 1 : 2 : 25     % No
parfor i = -7.5 : 7.5     % No
A = [3 7 -2 6 4 -4 9 3 7];
parfor i = find(A > 0)    % No
```

• ...and more! See the documentation.

Integration example (1)
Integrate \( \cos(x) \) between 0 and \( \pi/2 \) using the mid-point rule:
\[ \int_a^b \cos(x) \, dx = \sum_{i=1}^p \sum_{j=1}^n \int_{a_{ij}}^{a_{ij} + h} \cos(x) \, dx \approx \sum_{i=1}^p \left[ \sum_{j=1}^n \cos\left(a_{ij} + \frac{h}{2}\right) h \right] \]

Integration example (2): serial

```matlab
% serial integration (with a for-loop)
tic
m = 10000;
a = 0;              % lower limit of integration
b = pi/2;           % upper limit of integration
dx = (b - a)/m;     % increment length
intSerial = 0;      % initialize the running sum
for i = 1:m
    x = a + (i-0.5)*dx;                 % mid-point of increment i
    intSerial = intSerial + cos(x)*dx;
end
toc
```

Note that \( x(1) = a + \frac{dx}{2} \) and \( x(m) = b - \frac{dx}{2} \).

Integration example (3): *parfor*
This example performs the parallel integration with *parfor*.
```matlab
matlabpool open 4    % (replaced by parpool(4) in newer releases)
tic
m = 10000;
a = 0;
b = pi/2;
dx = (b - a)/m;      % increment length
intParfor2 = 0;
parfor i = 1:m
    intParfor2 = intParfor2 + cos(a+(i-0.5)*dx)*dx;
end
toc
matlabpool close
```

spmd (1)
spmd = single program, multiple data
- Explicitly and/or automatically...
  - divide work and data between workers/labs
  - communicate between workers/labs
- Syntax:

```matlab
% executes on the client/master
spmd
    % executes on all workers
end
% executes on the client/master
```

**spmd (2): Integration example (4)**
- **Example: specifying different behavior for each worker**
  - `numlabs()` – returns the total number of labs operating in parallel
  - `labindex()` – returns the ID of this lab

```matlab
parpool(2)
tic   % includes the overhead cost of spmd
spmd
    m = 10000;
    a = 0;
    b = pi/2;
    n = m/numlabs;                    % # of increments per lab
    deltax = (b - a)/numlabs;         % integration length per lab
    ai = a + (labindex - 1)*deltax;   % local integration range
    bi = a + labindex*deltax;
    dx = deltax/n;                    % increment length for this lab
    x = ai+dx/2 : dx : bi-dx/2;
    intSPMD = sum(cos(x)*dx);         % integral sum per worker
    intSPMD = gplus(intSPMD, 1);      % global sum over all workers
end % spmd
toc
delete(gcp)
```

spmd (3): where are the data?
- Memory is not shared by the workers. Data can be shared between workers using explicit MPI-style commands:

```matlab
spmd
    if labindex == 1
        A = labBroadcast(1, 1:N);   % send the sequence 1..N to all
    else
        A = labBroadcast(1);        % receive the data on the other workers
    end
    % get an array chunk on each worker
    I = find(A > N*(labindex-1)/numlabs & A <= N*labindex/numlabs);
    % shift the data to the right among all workers
    to   = mod(labindex, numlabs) + 1;     % one to the right
    from = mod(labindex-2, numlabs) + 1;   % one to the left
    I = labSendReceive(to, from, I);
    out = gcat(I, 2, 1);   % reconstruct the shifted array on the 1st worker
end
```

spmd (4): where are the data?
- Memory is not shared by the workers.
Data can be shared between workers using **special data types: composite, distributed, codistributed**
- **Composite**: a variable containing references to unique values on each worker.
  - On the workers, it is accessed like a normal variable
  - On the client, the elements on each worker are accessed using cell-array-style notation
- **Distributed/Codistributed**: the array is partitioned amongst the workers, each holding some of the data.
  - All elements are accessible to all workers
  - The distinction is subtle: a codistributed array on the workers is accessible on the client as a distributed array, and vice versa

spmd (5): where are the data?
- created **before** the `spmd` block: copied to all workers

spmd (6): where are the data?
- created **before** the `spmd` block: copied to all workers
- created **in** the `spmd` block: unique to each worker, a *composite* on the client

```matlab
>> spmd
>>    q = magic(labindex + 2);
>> end
>> q
q =
   Lab 1: class = double, size = [3 3]
   Lab 2: class = double, size = [4 4]
>> q{1}
ans =
     8     1     6
     3     5     7
     4     9     2
```

spmd (7): where are the data?
- created **before** the `spmd` block: copied to all workers
- created **in** the `spmd` block: unique to each worker, a *composite* on the client
- created as a **distributed** array before `spmd`: divided up as a **codistributed** array on the workers

```matlab
W = ones(6,6);
W = distributed(W);   % distribute to the workers
spmd
    T = W*2;   % calculation performed on the workers, in parallel
               % T and W are both codistributed arrays here
end
T              % view the results on the client
               % T and W are both distributed arrays here
```

spmd (8): where are the data?
- created **before** the `spmd` block: copied to all workers
- created **in** the `spmd` block: unique to each worker, a *composite* on the client
- created as a **distributed** array **before** `spmd`: divided up as a **codistributed** array on the workers
- created as a **codistributed** array **in** `spmd`: divided up on the workers, accessible as a **distributed** array on the client

```matlab
spmd(2)
    RR = rand(20, codistributor());   % each worker stores a 20x10 piece
end
```

Distributed matrices (1): creation
- There are many ways to create distributed matrices – you have a great deal of control.

```matlab
n = 3000;
A = rand(n);  B = rand(n);
spmd
    p = rand(n, codistributor1d(1));   % 2 ways to directly create
    q = codistributed.rand(n);         % a distributed random array
    s = p * q;                         % runs on the workers; s is codistributed
    % distribute a matrix after it is created
    u = codistributed(A, codistributor1d(1));   % by row
    v = codistributed(B, codistributor1d(2));   % by column
    w = u * v;                         % runs on the workers; w is codistributed
end
```

Distributed matrices (2): efficiency
- For a matrix-matrix multiply, here are the 4 combinations of how to distribute the 2 matrices (by row or by column) – some perform better than others.
```matlab
n = 3000;  A = rand(n);  B = rand(n);
spmd
    ar = codistributed(A, codistributor1d(1));   % distributed by row
    ac = codistributed(A, codistributor1d(2));   % distributed by col
    br = codistributed(B, codistributor1d(1));   % distributed by row
    bc = codistributed(B, codistributor1d(2));   % distributed by col
    crr = ar * br;   crc = ar * bc;
    ccr = ac * br;   ccc = ac * bc;
end
```

- Wall-clock times of the four ways to distribute \( A \) and \( B \):

<table>
<thead>
<tr>
<th>C (row x row)</th>
<th>C (row x col)</th>
<th>C (col x row)</th>
<th>C (col x col)</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.44</td>
<td>2.22</td>
<td>3.95</td>
<td>3.67</td>
</tr>
</tbody>
</table>

Distributed matrices (3): function overloading
- Some functions/operators recognize distributed-array inputs and execute in parallel:

```matlab
matlabpool open 4
n = 3000;
A = rand(n);
B = rand(n);
C = A * B;              % runs with 4 threads
maxNumCompThreads(1);   % set the thread count to 1
C1 = A * B;             % runs on a single thread
a = distributed(A);     % distribute A, B from the client
b = distributed(B);     % a, b live on the workers; accessible from the client
c = a * b;              % runs on the workers; c is distributed
matlabpool close
```

- Wall-clock times for the above (distribution time accounted for separately):

<table>
<thead>
<tr>
<th>C1 = A * B (1 thread)</th>
<th>C = A * B (4 threads)</th>
<th>a = distributed(A); b = distributed(B)</th>
<th>c = a * b (4 workers)</th>
</tr>
</thead>
<tbody>
<tr>
<td>12.06</td>
<td>3.25</td>
<td>2.15</td>
<td>3.89</td>
</tr>
</tbody>
</table>

Linear system example: $Ax = b$

```matlab
% serial
n = 3000;  M = rand(n);  x = ones(n,1);
[A, b] = linearSystem(M, x);
u = A\b;               % solves Au = b; u should equal x
clear A b

% parallel in spmd
spmd
    m = codistributed(M, codistributor('1d',2));   % by column
    y = codistributed(x, codistributor('1d',1));   % by row
    [A, b] = linearSystem(m, y);
    v = A\b;
end
clear A b m y

% parallel using distributed-array overloading
m = distributed(M);  y = distributed(x);
[A, b] = linearSystem(m, y);
W = A\b;

function [A, b] = linearSystem(M, x)
% Returns A and b of the linear system Ax = b
A = M + M';    % A is real and symmetric
b = A * x;     % b is the RHS of the linear system
end
```

Parallel Jobs: pmode

`>> pmode start 4`

- Commands at the “P>>” prompt are executed on all workers
- Use `if` with `labindex` to issue instructions specific to particular workers
- Memory is NOT shared!
- Terminology:
  - worker = lab = processor
  - `labindex` = processor id
  - `numlabs` = number of processors

Using GPUs (1)
- For some problems, GPUs achieve better performance than CPUs.
- MATLAB GPU utilities are limited, but growing.
- **Basic GPU operations:**

```matlab
n = 3000;              % matrix size
a = rand(n);           % n x n random matrix
A = gpuArray(a);       % copy a to the GPU
B = gpuArray.rand(n);  % create a random array directly on the GPU
C = A * B;             % matrix multiply on the GPU
c = gather(C);         % bring the data back to the base workspace
```

- On the SCC, there are compute nodes equipped with GPUs.
To request a GPU for interactive use (for debugging, learning):

```
scc1% qsh -l gpus=1
```

To submit a batch job requesting a GPU:

```
scc1% qsub -l gpus=1 batch_scc
```

Using GPUs (2): arrayfun

```matlab
maxIterations = 500;
gridSize = 3000;
xlim = [-1, 1];
ylim = [0, 2];
x = gpuArray.linspace(xlim(1), xlim(2), gridSize);
y = gpuArray.linspace(ylim(1), ylim(2), gridSize);
[xGrid, yGrid] = meshgrid(x, y);
z0 = complex(xGrid, yGrid);
count0 = ones(size(z0));
% apply a scalar function element-wise on the GPU
count = arrayfun(@SerialFct, z0, count0, maxIterations);
count = gather(count);   % fetch the data back from the GPU
```
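As a closing numeric check on the Amdahl's-law limits quoted at the start of the deck, the ideal-speedup formula can be tabulated directly (sketched in Python rather than MATLAB only so it runs without a PCT license; `amdahl_speedup` is an illustrative name, not from the slides):

```python
def amdahl_speedup(alpha, p):
    """Ideal speedup S(p) = 1 / (alpha + (1 - alpha)/p), where alpha is
    the strictly serial fraction of the program and p the core count."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

# Limiting cases from the slides:
#   alpha = 0 gives linear speedup S = p
#   p = 1 gives S = 1
#   as p -> infinity, S approaches 1/alpha
```

For example, with a 10% serial fraction (alpha = 0.1), even arbitrarily many cores cannot exceed a 10x speedup.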
Investigation of Governance Mechanisms for Crowdsourcing Initiatives

Radhika Jain, Zicklin School of Business, Baruch College, Radhika.jain@baruch.cuny.edu

Recommended Citation
This material is brought to you by the Americas Conference on Information Systems (AMCIS) at AIS Electronic Library (AISeL). It has been accepted for inclusion in the AMCIS 2010 Proceedings by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org.

Investigation of Governance Mechanisms for Crowdsourcing Initiatives

Radhika Jain
Department of Computer Information Systems
Zicklin School of Business, Baruch College
New York, NY 10010
Radhika.jain@baruch.cuny.edu

ABSTRACT

Crowdsourcing has increasingly become a recognized sourcing mechanism for problem-solving in organizations, in which a problem is outsourced to an undefined entity, the ‘crowd’. While the phenomenon of crowdsourcing is not new, it has gained considerable attention in practice due to new crowdsourcing opportunities enabled by social networking and Web 2.0 technologies. While crowdsourcing initiatives provide several benefits for the participants involved, they also pose several novel challenges in effectively managing the crowd. Drawing from the governance mechanisms in the open-source literature, we develop an analysis framework to examine the governance mechanisms implemented in three different crowdsourcing initiatives and their impact on the outcomes of those initiatives.
Keywords
Crowdsourcing, collective intelligence, open innovation, control and governance mechanisms

INTRODUCTION

A rise in amateurism (Howe and Junker, 2008), a hypercompetitive global marketplace, the increasing complexity of problems, customers’ desire to participate in the product design and development process (Winsor, 2009), and deeper exploration of potential opportunities (Bonabeau, 2009) are pushing organizations to increasingly form and participate in new forms of collaborative alliances through what is known as ‘crowdsourcing’. Crowdsourcing includes “[initiatives] when a company outsources jobs once performed by employees to the crowd, but [also] when people come together of their own accord and begin performing that function” (Howe and Junker, 2008). The near-ubiquitous presence of Internet connectivity has made applying crowdsourcing in various unexpected areas a tremendously efficient and cheap process (Economist, 2008). Crowdsourcing can include “anything from gathering feedback on a new idea, asking for assistance to solve a product problem, or looking for contractors, investors or new employees interested in participating in a project” (Gale, 2008). Table 1 summarizes various crowdsourcing initiatives.

Crowdsourcing can provide organizations with richer content and perspectives from a diverse crowd than what may be possible within an organizational unit or function (PMNetwork, 2009), while allowing organizations a creative and cost-effective way to access innovative resources outside the boundaries of their unit, function, or even their organization (Walmsley, 2009). This model of opening up the boundaries of an organization to tap the knowledge of external entities is increasingly becoming a source of competitive advantage for organizations in various industries (Chesbrough, 2003), and customers are seen as the biggest source for identifying innovative ideas (Leimeister, Huber, Bretschneider and Krcmar, 2009).
For example, in the case of new product development, crowdsourcing can give organizations a better sense of their customers' needs (PMNetwork, 2009) while projecting to consumers the favorable image that businesses listen to them. It can also facilitate discovery of the best talent with relative ease (Schmitt, 2009). For contributors, crowdsourcing provides opportunities to work with large organizations, increase their exposure, and work on real problems (Drummond and Perkins, 2009). Crowdsourcing has allowed people to tap, explore, and turn their hobbies into something more beneficial (MacMillan, 2009). Participation in a crowdsourcing project can give individuals opportunities to get noticed, sharpen their creative skills, and stay involved with things they enjoy while sharing knowledge and experiences with each other (Bonabeau, 2009; Schmitt, 2009; Winsor, 2009; MacMillan, 2009). Participation in such initiatives also strengthens the sense of community (Winsor, 2009). When crowdsourcing projects are initiated by a nonprofit and/or a government institution, a sense of civic duty (Bonabeau, 2009), the drive to contribute to the community, and concerns about democracy and the healthy functioning of governmental agencies can also be powerful motivators for individuals to contribute (Howe and Junker, 2008).

### Table 1: Examples of Popular Crowdsourcing Initiatives

<table>
<thead> <tr> <th>Type</th> <th>Company</th> <th>Description</th> </tr> </thead>
<tbody>
<tr> <td>Public or government initiatives</td> <td>Open for Questions (Phillips, March 26 2009)</td> <td>• President Obama's experimental initiative "to open up the White House to American people... to get perspectives from outside Washington", to incite feedback on the most important issues plaguing the American people. While the initiative was able to identify several important public concerns, among the top issues was the issue of 'marijuana legalization', casting doubts on the usefulness and effectiveness of such initiatives.</td> </tr>
<tr> <td>Public or government initiatives</td> <td>Guardian (Findlay, 2009)</td> <td>• To tackle the largest British political scandal (British MPs charged millions of dollars worth of frivolous expenses - from clearing moats to building duck houses - to the public) before a rival newspaper, the Daily Telegraph, could complete document analysis.<br>• Within the first 80 hours, 170,000 documents were reviewed and half of the 456,000 documents with details of MPs' expenses were read.</td> </tr>
<tr> <td>New product development</td> <td>Cambrian (Marshall, 2008)</td> <td>• An online platform where innovative people could harness the wisdom of the masses, engage the participation of experts, and secure funding to turn ideas into products and services.</td> </tr>
<tr> <td>New product development</td> <td>Riversimple (Sampson, 2009)</td> <td>• An automotive start-up firm working on a hydrogen-fuelled urban two-seater. A mechanical system, a part of the suspension system, and an electronic hardware component were to be made available to the community for their input on design.</td> </tr>
<tr> <td>Problem solving</td> <td>Netflix (Lohr, 2009)</td> <td>• Thousands of teams from 186 countries made submissions on improving Netflix's movie recommendation system's outcome by 10% for a cash prize of $1 million. A new contest will present the contestants with demographic and behavioral data, and they will be asked to model individuals' taste profiles.</td> </tr>
<tr> <td>Crowdsourcing marketplaces</td> <td>InnoCentive</td> <td>• Marketplace for business projects, where companies post challenges — often in areas like product development or applied science — and engineers and scientists working alone or in teams compete for cash payments or prizes offered by the companies (Lohr, 2009), lending intellectual gravitas to the open innovation industry (Hoffmann, 2009).</td> </tr>
<tr> <td>Crowdsourcing marketplaces</td> <td>TopCoder</td> <td>• Software programming tasks are posted as contests. The developer of the best solution wins the top prize while other participants walk away with smaller rewards and garner skill ratings that can be included on their résumés.</td> </tr>
</tbody>
</table>

Despite the increasing popularity of crowdsourcing due to the aforementioned benefits, careful evaluation of the issues and challenges of crowdsourcing is critical to ensure that firms can effectively exploit its potential. One of the major concerns for organizations that undertake such hybrid collaboration is managing and controlling the crowd. Consider 'marijuana legalization' emerging as the topmost priority in the 'Open for Questions' government crowdsourcing initiative mentioned in Table 1. The primary research question driving our research is: "What is the nature of governance mechanisms used in crowdsourced projects?" In the next section we analyze various challenges faced by organizations in implementing crowdsourcing initiatives, followed by a review of governance mechanisms in open source software development. Drawing from the governance mechanisms in the open source literature, we develop an analysis framework to examine the governance mechanisms implemented in three different crowdsourcing initiatives. We then present details of our qualitative research approach, data collection, and data analysis, followed by our preliminary analysis of the governance mechanisms used in three crowdsourcing initiatives in various domains.
We conclude by discussing the potential contributions of our research and our next steps.

**THEORETICAL BACKGROUND**

Below we analyze some of the challenges identified in the crowdsourcing literature.

Challenges in the management of crowdsourcing projects

1. **Effective incentive mechanisms:** When crowds are invited by for-profit organizations, the dynamics of the crowd differ from those of crowds invited to participate in an initiative sponsored by a non-profit or government institution. A lack of proper incentive mechanisms can be viewed by the crowd as unethical and exploitative (Hoffmann, 2009), a practice that leads to a cheap source of labor (Brandel, 2008). Thus organizations need to ensure that their incentive mechanisms are designed to thwart such impressions and to receive good-faith efforts from the crowd.

2. **Managing submissions:** Since crowdsourcing projects can yield a tremendous amount of information, managing the idea generation process end-to-end is extremely critical. For example, the IT platform should be capable of supporting active participation of the crowd by enabling features that drive individuals to participate in the idea generation process (Leimeister et al., 2009). Firms also need to delicately balance encouraging participation and maintaining clarity of overall business objectives (Winsor, 2009). Organizations should also develop a clear strategy for evaluating crowdsourced results and incorporating them into the project (PMNetwork, 2009). For example, when evaluating ideas received from the crowd to improve its product offerings through IdeaStorm, Dell did not simply adopt the most popular ideas or those that provided Dell with a relative advantage; the decision to adopt was based on the complexity of the ideas (Di Gangi and Wasko, 2009).
Participants also expect sponsoring organizations to be actively engaged in the project, to provide needed information, and to bring transparency to the process (Bonabeau, 2009). A lack of such transparency can raise concerns among participants about the accuracy of the output and suspicions about manipulation (Bonabeau, 2009). Organizations may also need to train internal stakeholders to effectively engage with the community and tap the crowd's potential for generating new products (PMNetwork, 2008). Depending upon the nature of the project, 'idea-incubation support' for the finalists may be needed to address an idea's weaknesses and make the most of its strengths (Jouret, 2009).

3. **Loss of control:** By allowing crowds to participate in product development processes, firms are likely to lose a significant degree of control over the behavior of the crowd and the outcome of the project, as crowds may make unpredictable moves or be steered by undue influences from those who may not have the firm's best interests in mind (Bonabeau, 2009). Organizations need to identify appropriate governance mechanisms to steer the crowd toward completing the task without losing focus.

4. **Quality of the ideas:** If solutions demand multiple perspectives or viewpoints, then firms may be unable to capture a truly diverse population, since participation may trend toward the upscale, educated, and tech-savvy crowd (Brandel, 2008); thus firms need to ensure that the crowd does not exert undue influence over their decision-making process. Crowdsourcing also works more effectively when individuals are expressing their individuality to the utmost (PMNetwork, 2009). Another issue, especially when identifying popular products, is 'information cascading', which arises when individuals' opinions about the merit of a given product or service are influenced by those of others (Johnson, 2007). This can make evaluation of idea quality based solely on popular voting mechanisms unpredictable.

5.
**Creating trust:** Finally, one of the most crucial challenges that organizations face in crowdsourcing is creating an environment of mutual trust between the crowd and the organization itself. Without such trust, open collaboration and innovation cannot happen. Standard ways of doing business, such as detailed contracts, often do the very opposite (2008). This suggests that organizations need to implement control mechanisms that do not overly constrain their ability to create an environment of mutual trust, yet still provide them a sufficient contractual framework. With these concerns in mind, our primary aim is to investigate how organizations overcome these challenges through the use of various governance mechanisms. We next review the literature on open source software development to identify various governance mechanisms.

Governance mechanisms in open source software development

Open source software development projects are developed and managed by "Internet-based communities of software developers who voluntarily collaborate to develop software that they or their organizations need" (von Hippel and von Krogh, 2003). Popular examples of open source include the development of the Linux operating system, the Apache web server, and the OpenOffice suite. Realizing the benefits and competitive threat posed by open source initiatives, various software firms are joining forces with open source software development communities to identify common ground and collaborate (Shah, 2006; O'Mahony and Bechky, 2008). From the literature on open source software development, we identified commonly deployed governance mechanisms.
Table 2: Governance Mechanisms in Open Source Software Development

<table>
<thead> <tr> <th>Governance mechanism</th> <th>Source</th> <th>Description</th> </tr> </thead>
<tbody>
<tr> <td rowspan="2">Membership management</td> <td>(Sharma, Sugumaran and Rajagopalan, 2002)</td> <td>• Provide mechanisms for qualified people to join and contribute to the project<br>• Community members must be allowed to forge and dissolve relationships with outside entities</td> </tr>
<tr> <td>(Markus, Manville and Agres, 2000)</td> <td>• Ensure that there is a manageable number of high-quality contributors</td> </tr>
<tr> <td>Rules and institutions</td> <td>(Markus et al., 2000)</td> <td>• Rules and institutions that members can adapt to their individual needs<br>• Community members participate in making and changing the rules<br>• Procedures for discussing and voting on important issues</td> </tr>
<tr> <td rowspan="3">Reputation</td> <td>(Markus et al., 2000)</td> <td>• The desire to maintain a good reputation is a key motivator and a control mechanism<br>• The fear of exclusion and the transparency of performance and behavior can also be used to regulate members' behavior</td> </tr>
<tr> <td>(Gallivan, 2001)</td> <td>• Self-control: the emphasis on the individual's professional reputation to regulate members' behavior</td> </tr>
<tr> <td>(Franck and Jungwirth, 2003)</td> <td>• Reputation game to reconcile interests<br>• Peers must also have incentives to make fair assessments of the contributions of others</td> </tr>
<tr> <td rowspan="2">Monitoring and sanction</td> <td>(Markus et al., 2000)</td> <td>• Strong social pressures against noncompliance with norms</td> </tr>
<tr> <td>(Gallivan, 2001)</td> <td>• Social control: tactics include behavioral norms and member voting<br>• Sanctioning members' behavior: 'voting a member out', reducing a member's privileges, or not allowing them to be 'voted in' to begin with</td> </tr>
<tr> <td>Leadership</td> <td>(Bonaccorsi and Rossi, 2003)</td> <td>• A widely accepted leadership setting the project guidelines and driving the decision process<br>• The authority of the project leaders arises naturally from a bottom-up investiture as a result of their contributions<br>• The leadership deeply influences the evolution of the project by selecting the best-fitting solutions</td> </tr>
<tr> <td>Coordination</td> <td>(Bonaccorsi and Rossi, 2003)</td> <td>• Coordination mechanisms based on shared protocols: a common notion of validity (solutions that not only exhibit the best performance but also look simple, clear and logical are selected, thus guaranteeing non-chaotic future expansion of the work)</td> </tr>
<tr> <td>Task decomposition</td> <td>(Markus et al., 2000)</td> <td>• Effective work structures and processes, such as task decomposition and project management in software-development work<br>• Legal arrangements designed to ensure fairness</td> </tr>
<tr> <td>Decision making</td> <td>(Shah, 2006)</td> <td>• Open license as contract; decision-making rights<br>• Property rights; restrictions on use, modification, and distribution<br>• Proprietary modifications</td> </tr>
</tbody>
</table>

Before discussing our preliminary findings on governance mechanisms in crowdsourcing initiatives, we briefly present our research methodology and approach to data analysis.

**RESEARCH METHODOLOGY**

Given the lack of empirical research on governance mechanisms in crowdsourcing, our primary objective was to achieve a better understanding of these mechanisms, gaining insights into why and how they work. As a result, our research approach is exploratory rather than confirmatory in nature (Yin, 1989). We used the mechanisms identified from the open source software development literature (summarized in Table 2) as a guiding framework for data analysis. Such an approach, which draws upon prior theoretical work, is well established in IS research (see, for example, (Olsson, Ó Conchúir, Ågerfalk and Fitzgerald, 2008)).
**Data Sources**

The primary source of data for our research is a collection of publicly available accounts of three crowdsourcing initiatives in private and public sector domains. Since most of these projects were conducted in open online environments, these accounts provide in-depth access to how the projects were managed and how they evolved over time. We examined publicly available accounts for the following projects:
- Private sector
  - Netflix (improving the recommendation algorithm)
  - A Million Penguins (collaborative wikinovel-writing project)
- Public sector
  - UK Department for Work and Pensions (developing an IT strategy)

We examined various newspaper, magazine, and Wikipedia articles (Bell, Bennett, Koren and Volinsky, 2009; Copeland, 2009; Gomes, 2009; Lohr, 2009; Leonhardt, 2007) and books (Howe, 2009) to obtain the data for the Netflix recommendation competition. The primary source of data for 'A Million Penguins' was the publisher's blog (2007) that accompanied the wikinovel project and contained the editors' notes on the progress of the novel over the two-month duration of the project. It has 20+ blog postings from the editors and 250+ comments on these postings. We also examined the wikinovel blog itself (2007). Both blogs provide very rich data on what the editors and contributors felt as the wikinovel-writing progressed. The case on the UK's Department for Work and Pensions documents the department's effort to develop an IT strategy document using crowdsourcing. This initiative was different in that the department experimented with crowdsourcing to individuals within the department who were not part of the IT strategy team. The CIO, James Gardner, details his experiences on his blog (http://bankervision.typepad.com) and in a CIO magazine article (Gardner, 2010).

Data analysis

We used the open coding techniques from the grounded theory methodology to analyze the case study data (Strauss and Corbin, 1990).
The goal of open coding is to reveal the essential ideas found in the data. The first step is to decompose observations into discrete incidents or ideas, each of which receives a name or label that represents the concepts inherent in the phenomenon. The second step is to discover categories by finding related phenomena or common concepts and themes in the accumulated data in order to group them under joint headings. This step identifies categories and sub-categories of data.

FINDINGS

All the above projects, except 'A Million Penguins', were completed successfully or were deemed successful. The outcome of the novel-writing effort in the 'A Million Penguins' project was considered less than desirable, and the result became known more as the most-written novel than the most-read novel (Creasey, 2009). The wikinovel project resulted in 4,000+ pages of meandering, incoherent, anarchic, and uncontrollable text. We summarize the findings of our preliminary analysis of the governance mechanisms implemented in each of these crowdsourcing initiatives in Table 3. An initial comparison of the governance mechanisms implemented in the 'A Million Penguins' project with those implemented in the other two crowdsourced projects reveals that governance mechanisms need to be aligned with the objectives of the crowdsourced initiative. We will present detailed analysis and findings at the conference.
Table 3: Preliminary analysis of governance mechanisms in crowdsourcing <table> <thead> <tr> <th>Project</th> <th>Outcome</th> <th>Governance mechanisms identified</th> </tr> </thead> <tbody> <tr> <td>Netflix</td> <td>Competitors successfully improved Netflix’s recommendation algorithm by a specified factor of 10%</td> <td>Outcome control</td> </tr> <tr> <td></td> <td></td> <td>Effective incentive mechanisms</td> </tr> <tr> <td></td> <td></td> <td>Process transparency</td> </tr> <tr> <td></td> <td></td> <td>Online collaboration platform for knowledge sharing and learning</td> </tr> <tr> <td></td> <td></td> <td>Effective task decomposition</td> </tr> <tr> <td>A Million Penguins</td> <td>Unreadable novel</td> <td>Process transparency</td> </tr> <tr> <td></td> <td></td> <td>Online collaboration platform for knowledge sharing and learning</td> </tr> <tr> <td></td> <td></td> <td>No outcome control</td> </tr> <tr> <td></td> <td></td> <td>No coordination mechanisms</td> </tr> <tr> <td></td> <td></td> <td>No overview storyline or framework</td> </tr> <tr> <td></td> <td></td> <td>No task decomposition e.g. 
no decomposition of plot, characters, etc.</td> </tr> <tr> <td></td> <td></td> <td>No ‘benevolent dictator’ to steer the wiki-novel in a firm direction</td> </tr> <tr> <td>UK Department for Work and Pensions</td> <td>Successfully developed an IT strategy document that was substantially detailed, actionable, highly innovative, well-supported, and broad-reaching</td> <td>Process transparency</td> </tr> <tr> <td></td> <td></td> <td>‘Benevolent dictator’ to keep the work on the IT strategy on track</td> </tr> <tr> <td></td> <td></td> <td>Effective task decomposition and integration</td> </tr> <tr> <td></td> <td></td> <td>Overview storyline or framework</td> </tr> <tr> <td></td> <td></td> <td>Member management</td> </tr> </tbody> </table>

CONCLUSION

Given the dearth of research examining governance mechanisms in crowdsourcing, the findings of our research make several important contributions to both the IS research and practice communities. First, our findings provide better insight into how governance mechanisms may impact the outcome of a crowdsourcing initiative. We are currently examining the nature of the tasks/projects to be completed and the characteristics that map to a set of governance mechanisms, in order to develop a guiding framework. We are also examining other crowdsourcing projects, such as Wikipedia and OpenGov, to enhance the generalizability of the framework that will result from the above analysis. We expect to present the completed analysis and the framework at the conference.
Sliding Tile Puzzle

2x2 Sliding Tile States

Observe: on a 2x2 board, a slide is a rotation. Goal state:

a b
c .

[Diagrams of the solvable states (the rotation cycle containing the goal state) and the unsolvable states (the other rotation cycle) omitted.]

Exactly half of the states are solvable, the other half are not. In the case of 2x2 puzzles, I can solve the puzzle if I start with a configuration on the same rotation cycle as the goal. If not, I can’t.

Solvable?

We call a state solvable if it can be transformed into the row-by-row sorted state (with the blank last). For a puzzle with at least two rows and at least two columns, and for any fixed final state, exactly half of the states can be transformed into that final state. A parity check tells us whether an arbitrary state is solvable.

<table> <thead> <tr> <th>column number</th> <th>solvability condition</th> </tr> </thead> <tbody> <tr> <td>odd</td> <td>even number of inversions</td> </tr> <tr> <td>even</td> <td>blank’s row-from-bottom parity != inversions parity</td> </tr> </tbody> </table>

1) Look at the puzzle as a line of numbers (row 1, row 2, row 3, …, row n).
2) For each number in the line, count how many inversions it contributes (how many numbers before it are bigger than it).
3) If the grid width is odd, then the number of inversions in a solvable position is even.
4) If the grid width is even, and the blank is on an even row counting from the bottom (second-last, fourth-last, etc.), then the number of inversions in a solvable position is odd.
5) If the grid width is even, and the blank is on an odd row counting from the bottom (last, third-last, fifth-last, etc.), then the number of inversions in a solvable position is even.
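The parity rules above can be sketched in Python (a minimal check; it assumes the state is given as a flat list in row-major order with 0 standing for the blank):

```python
def is_solvable(state, cols):
    """Parity check for a sliding-tile state.

    state: flat list in row-major order, 0 = blank
    cols:  grid width
    """
    tiles = [t for t in state if t != 0]
    # Count inversions: pairs of tiles that appear out of order.
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    if cols % 2 == 1:                       # odd width: inversions must be even
        return inversions % 2 == 0
    rows = len(state) // cols
    blank_row_from_bottom = rows - state.index(0) // cols
    # even width: blank's row-from-bottom parity must differ from inversion parity
    return (blank_row_from_bottom % 2) != (inversions % 2)

print(is_solvable([5, 4, 3, 2, 1, 0], 3))          # True  (10 inversions, odd width)
print(is_solvable([7, 6, 5, 0, 4, 3, 2, 1], 4))    # True  (21 inversions, blank row 2 from bottom)
print(is_solvable([7, 6, 5, 4, 3, 2, 1, 0], 4))    # False (21 inversions, blank row 1 from bottom)
```

The three calls reproduce the worked examples in the next section.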
Examples:

<table>
<thead> <tr> <th>state (0 = blank)</th> <th>analysis</th> </tr> </thead>
<tbody>
<tr> <td>5 4 3<br>2 1 0</td> <td>odd number of columns, 4+3+2+1 = 10 inversions (even), solvable</td> </tr>
<tr> <td>7 6 5 0<br>4 3 2 1</td> <td>even number of columns, 6+5+4+3+2+1 = 21 inversions (odd), blank in row 2 from the bottom, solvable</td> </tr>
<tr> <td>7 6 5 4<br>3 2 1 0</td> <td>even number of columns, 21 inversions (odd), blank in row 1 from the bottom, unsolvable</td> </tr>
</tbody>
</table>

Search sliding tile space

Exhaustive Search

Random walk is so much slower than BFS and DFS that we will ignore it for this problem. Both BFS and DFS are exhaustive, so they will solve the problem; however, they may take too long.

Estimate the state space: for any (r, c) puzzle, there are (r × c)! states. The state-space adjacency graph has 2 components:
- Solvable states: (r × c)!/2 nodes
- Unsolvable states: (r × c)!/2 nodes

So starting from a fixed state, in the worst case we examine \((rc)!/2\) nodes.

<table> <thead> <tr> <th>dimension</th> <th>number of states</th> </tr> </thead> <tbody> <tr> <td>2 2</td> <td>(4! = 24)</td> </tr> <tr> <td>2 3</td> <td>(6! = 720)</td> </tr> <tr> <td>2 4</td> <td>(8! = 40320)</td> </tr> <tr> <td>3 3</td> <td>(9! = 362880)</td> </tr> <tr> <td>2 5</td> <td>(10! ≈ 3.6e6)</td> </tr> <tr> <td>2 6 or 3 4</td> <td>(12! ≈ 4.8e8)</td> </tr> <tr> <td>2 7</td> <td>(14! ≈ 8.7e10)</td> </tr> <tr> <td>3 5</td> <td>(15! ≈ 1.3e12)</td> </tr> <tr> <td>4 4</td> <td>(16! ≈ 2.1e13)</td> </tr> </tbody> </table>

Solving Slide Tile with BFS

In maze traversal, we considered adjacency graphs of cells and used BFS to traverse the graph. The associated graph for a sliding tile puzzle is as follows:
- Each node in the graph is a sliding tile state
- Two nodes are adjacent if there is a single slide between the states
- With this graph, we use BFS as before

To implement this in Python, we can use a dictionary of parents: each time we see a new state, we add it to the dictionary, so we know we have seen a state iff it is in the dictionary.
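The BFS with a parent dictionary described above can be sketched as follows (a minimal version; states are flat tuples with 0 as the blank, and the helper `neighbours` is our own, not from a library):

```python
from collections import deque

def neighbours(state, rows, cols):
    """All states reachable by a single slide."""
    b = state.index(0)
    r, c = divmod(b, cols)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            n = nr * cols + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]        # slide a tile into the blank
            yield tuple(s)

def bfs_solve(start, goal, rows, cols):
    """Shortest sequence of states from start to goal, or None if unsolvable."""
    parent = {start: None}                 # we have seen a state iff it is a key here
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:       # walk the parent links back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbours(state, rows, cols):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None                            # goal is in the other component

path = bfs_solve((1, 2, 0, 3), (1, 2, 3, 0), 2, 2)
print(len(path) - 1)                       # number of moves: 1
```

An unsolvable start simply exhausts its component of the state graph and returns None.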
Runtime Analysis

For a 3x3 search space: \(\frac{9!}{2} = 181440\). We divide by 2 here because half of the total states are solvable and half are not. How can we use this to estimate a 2x5 runtime? Measured: 181440 iterations (states of the search graph processed) in 2.2 sec, i.e. about 82600 iterations/sec.

For BFS on a graph with \(N\) nodes and \(E\) edges, the runtime is \(O(N+E)\).

Average degree (number of neighbours) in a 3x3 search space?
- 4 corners with 2 neighbours
- 4 sides with 3 neighbours
- 1 middle with 4 neighbours

Average degree: \(2 \times \frac{4}{9} + 3 \times \frac{4}{9} + 4 \times \frac{1}{9} = \frac{24}{9} \approx 2.67\)

So the number of edges in a 3x3 tile search space is proportional to \(9! \times 2.67\).

Average degree for a 2x5 search space?
- 6 non-corners with 3 neighbours
- 4 corners with 2 neighbours

Average degree: \(3 \times \frac{6}{10} + 2 \times \frac{4}{10} = \frac{26}{10} = 2.6\)

So we expect the worst case (no-solution runtime for a 2x5 tile search) to take about \(\frac{10! \times 2.6}{9! \times 2.67} \approx 9.75\) times as long. Indeed: 1814400 iterations in 21.5 seconds, about 84400 iterations/sec (close to what was expected).

How about solving a 4x4 puzzle? To get a lower bound, we compare the sizes of the search spaces. A 4x4 search space is \(16 \times 15 \times 14 \times 13 \times 12 \times 11 = R \approx 5.8 \times 10^6\) times the size of the 2x5 search space. So we expect a 4x4 no-solution runtime of at least \(R \times 21.5\) seconds, which is about 3.9 years.

BFS takes too long to solve a 4x4 puzzle, so we need a faster algorithm. Why use BFS at all? To get the shortest solution; DFS will ignore many moves at each stage. How can we solve a 4x4 puzzle in a reasonable amount of time? Is there a way to tell which moves are more promising than other moves?

Knowledge: information about the particular problem; it could be proved, or it could be a heuristic. Heuristic: a suggestion or idea, without a provable claim about it, e.g.
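The back-of-the-envelope estimates above can be checked directly (a quick sketch; `avg_degree` is our own helper, and the constants match the lecture's measurements):

```python
from math import factorial

def avg_degree(rows, cols):
    """Average number of single-slide neighbours, over all blank positions."""
    total = sum(sum(1 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols)
                for r in range(rows) for c in range(cols))
    return total / (rows * cols)

print(avg_degree(3, 3))   # (4*2 + 4*3 + 1*4) / 9 = 24/9 ≈ 2.667
print(avg_degree(2, 5))   # (4*2 + 6*3) / 10 = 2.6

# Ratio of O(N + E) work between the 2x5 and 3x3 search spaces
ratio = (factorial(10) * avg_degree(2, 5)) / (factorial(9) * avg_degree(3, 3))
print(round(ratio, 2))    # 9.75
```

The 9.75 here is exactly the factor used in the runtime extrapolation above.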
in the early 2000s, computer Go programs were full of heuristics, but almost nothing was proved.

Special Purpose Algorithms: these are good for solving your particular problem, but you can’t use them anywhere else. A general algorithm helps you solve more problems. Special purpose algorithms do exist for the sliding tile puzzle. One such algorithm:
1. In sorted order (left to right, row by row), move the next element into position while avoiding elements already placed.
2. The last 2 elements of each row need a special technique.
3. The last 2 rows need a special technique.
4. The final 2x2 grid (last 2 rows, last 2 columns) can be rotated into the solution if and only if the original state was solvable.

Heuristic search is a guided search. A heuristic function is used to decide which node of the search tree to explore next.

Dijkstra’s Algorithm (single-source shortest paths on weighted graphs): given a starting point, this algorithm finds the shortest path from that starting point to every node. When a node is finished, we know the shortest path to that node.

Input: a graph/digraph with non-negative edge/arc weights and a start node S
Output: for each node v, the shortest path from S to v

In a weighted graph, each edge has a weight.

Algorithm: let the node we start at be S, and the distance of node Y be the distance from S to Y.
1. Mark all nodes as unvisited. Create a set of all the unvisited nodes called the unvisited set.
2. Assign to every node a tentative distance value: zero for S and infinity for all other nodes. Set S as current.
3. For the current node C, consider all of its unvisited neighbors and calculate their tentative distances through C. Compare the newly calculated tentative distance to the currently assigned value and keep the smaller one.
   a. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B through A will be 6 + 2 = 8. If B was previously marked with a distance greater than 8, then change it to 8.
Otherwise, keep the current value.
4. When we are done considering all of the unvisited neighbors of C, mark C as visited and remove it from the unvisited set. A visited node will never be checked again.
5. If the destination node has been marked visited (when planning a route between two specific nodes), or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; this occurs when there is no connection between the initial node and the remaining unvisited nodes), then stop. The algorithm has finished.
6. Otherwise, select the unvisited node marked with the smallest tentative distance, set it as the new "current node", and go back to step 3.

Dijkstra’s SSSP Algorithm:

```
dist[s] = 0                      # was "dist[s] == 0": this is an assignment
parent[s] = None
fringe = { s }
while not isempty(fringe):
    v = a fringe vertex with min dist
    remove v from fringe
    for each nbr w of v:
        newdist = dist[v] + wt(v, w)
        if w not seen before, or newdist < dist[w]:
            dist[w] = newdist
            parent[w] = v
            add w to fringe (if not there already)
```

**Parent:** in our final solution, we can look back at the saved parent nodes to rebuild the shortest path we have found.

**Fringe:** the set of nodes that have a neighbour among the nodes whose distances we know.

A **greedy algorithm** is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage. This algorithm is greedy because at each step we remove the fringe node with the minimum distance so far. It is optimal on graphs (or acyclic digraphs) with non-negative edge weights: the distance-so-far of the fringe node with the minimum distance-so-far is the length of the shortest path from the start to that node.

**A\* Algorithm**

**A\*** uses a *heuristic* to estimate the remaining distance to the target. If the heuristic underestimates the cost (is always less than or equal to the true cost), then A* finds the shortest path and usually considers fewer nodes than Dijkstra’s.
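A self-contained Python version of Dijkstra's algorithm on a small adjacency-dict graph (the graph and its weights here are made up for illustration):

```python
def dijkstra(graph, s):
    """Single-source shortest paths. graph: {node: {neighbour: weight}}."""
    dist = {s: 0}
    parent = {s: None}
    fringe = {s}
    while fringe:
        v = min(fringe, key=dist.get)      # fringe vertex with min dist-so-far
        fringe.remove(v)                   # v's distance is now final
        for w, weight in graph[v].items():
            newdist = dist[v] + weight
            if w not in dist or newdist < dist[w]:
                dist[w] = newdist
                parent[w] = v
                fringe.add(w)
    return dist, parent

graph = {'s': {'a': 2, 'b': 5},
         'a': {'b': 1, 't': 6},
         'b': {'t': 2},
         't': {}}
dist, parent = dijkstra(graph, 's')
print(dist['t'])                           # 5, via s -> a -> b -> t
```

Following `parent` back from `'t'` reconstructs the shortest path, exactly as described above.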
In this algorithm, we have some extra information that allows us to process certain nodes before others. Add the start node; each node has a parent and a cost.

- **Cost:** current minimum distance to that node (0 for the start node)
- Heuristic(target, v): a heuristic can be the Euclidean distance to the goal. The heuristic must not overestimate; otherwise you cannot guarantee that you have the shortest/lowest-cost path.

Why is it coded like "if next not in done"? You don't want to process a node that has already been added to the path.

We have three sets of nodes:
- Done: finished processing, assigned a path and weight
- Fringe: currently processing
- Nodes we haven't seen (these will appear among the neighbours of the nodes we are looking at)

cost[current] + wt(current, next): the cost of the current node plus the weight of an edge
- If next is not in cost (we don't have the distance for it)
- Or if new_cost < cost[next]: don't bother updating if it's not better

Algorithm:
```
fringe = PQ()  # PQ = priority queue
fringe.add(start, 0)
parent, cost, done = {}, {}, set()
parent[start], cost[start] = None, 0
# cost[v] will be min dist-so-far from start to v
# if heuristic(target, v) is always less/equal than min dist(target, v),
# then final cost[v] will be min dist from start to v
while not fringe.empty():
    current = fringe.remove()  # min priority
    done.add(current)
    if current == target:
        break
    for next in nbrs(current):  # look at neighbours of current node
        if next not in done:
            new_cost = cost[current] + wt(current, next)
            if next not in cost or new_cost < cost[next]:
                cost[next] = new_cost
                h = heuristic(target, next)
                priority = new_cost + h
                fringe.add(next, priority)
                parent[next] = current
```

Arad to Bucharest

Heuristic: straight-line distance to Bucharest, i.e. the Euclidean distance to the end node (this is easy to compute with latitude/longitude coordinates). This means that we've calculated the distance to Bucharest for each location and put that as a value on each node.
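The pseudocode above can be made runnable with `heapq` as the priority queue; here is a sketch applied to a small 4-connected grid with the Manhattan distance as an admissible heuristic (the grid example and all names are my own, not from the notes):

```python
import heapq

def astar(nbrs, wt, heuristic, start, target):
    """A* following the pseudocode above.
    nbrs(v): iterable of neighbours; wt(v, w): edge weight;
    heuristic(v): admissible (non-overestimating) estimate to target."""
    fringe = [(heuristic(start), start)]       # priority queue
    parent, cost, done = {start: None}, {start: 0}, set()
    while fringe:
        _, current = heapq.heappop(fringe)     # min priority
        if current in done:
            continue
        done.add(current)
        if current == target:
            break
        for nxt in nbrs(current):
            if nxt not in done:
                new_cost = cost[current] + wt(current, nxt)
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    priority = new_cost + heuristic(nxt)
                    heapq.heappush(fringe, (priority, nxt))
                    parent[nxt] = current
    return cost, parent

# Example: 4x4 grid, unit edge weights, Manhattan distance to the target.
target = (3, 3)
def nbrs(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]
cost, parent = astar(nbrs, lambda v, w: 1,
                     lambda v: abs(v[0] - target[0]) + abs(v[1] - target[1]),
                     (0, 0), target)
# cost[target] == 6: three steps right and three steps down
```

With `heuristic = lambda v: 0` the same function degrades gracefully to Dijkstra's algorithm.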
Initialize all nodes to infinity, and the start (Arad) to 0
- Put in its priority (366, heuristic + cost so far)
- What's the first node we process? Arad, the only one on the fringe
- Arad has three neighbours, so we update their costs (S, T, Z)
- New priority = cost so far (edge weights) + heuristic
- Now we track cost, heuristic, and priority (the sum of those two prior values) for each fringe node
- Pick the one with the lowest priority value, etc. (see trace below)

Now let's apply A* to a sliding tile puzzle. To do this, we need a heuristic. If we want to find the shortest solution, we need to make sure we use A* with a heuristic that doesn't overestimate the true path cost. We can start with the usual state space adjacency graph:
- Node: sliding tile state (position)
- Edge: a pair of states that can single-slide from one to the other
- Cost of a path: the number of edges from the start (unit-cost weights)
- Choice of heuristic function:
  - Number of misplaced tiles
  - Sum, over all tiles, of the Manhattan distance (taxicab distance) from the current to the final location
  - **Manhattan distance**: the sum of the horizontal and vertical distances between points on a grid

Each of these heuristic functions is always less than or equal to the number of moves needed to solve, so with A* each yields a shortest solution.

Example (2x3 state 4 3 2 / 1 5 0): Manhattan distance per tile
- 4: 1
- 3: 1
- 2: 1
- 1: 1
- 5: 0
- Sum: 4

Is it an underestimate? Yes
Are all tiles only going to move once? No

Humans solving sliding tile

Humans and computers often solve problems differently. We can solve sliding tile puzzles by decomposition. We can solve a 2x3 sliding tile puzzle by reducing it to a 2x2 puzzle. Consider any 2x3 puzzle with tiles 1-5.
- Claim A: we can always get to a position with the numbers in the left column correct (1, 4)
- Claim B: after getting to that position, the original problem is solvable if and only if the remaining 2x2 problem (while leaving the left column in place) is solvable

Claim A Proof:
- Each tile move preserves the solvability condition
- E.g.
assume the number of columns is odd
- Solvability condition: the number of inversions is even
- Each tile move preserves the parity of the number of inversions
  - Moving a tile left or right does not change the number of inversions, and therefore doesn't change its parity
  - Moving a tile up or down does change the number of inversions. The tile moves past an even number of other tiles (the width of the board minus 1), so the number of inversions changes by \( \pm 1 \) an even number of times. The count may change, but the parity does not.

Claim B Proof:
- The same parity-preservation argument applies: each tile move preserves the solvability condition (an even number of inversions)
- So the original 2x3 position is solvable if and only if the position with 1, 4 in place is solvable
- Two cases:
  - Case 1: clockwise cyclic order of the other three tiles is (2, 3, 5)
    - Subproblem solvable: in order, the tiles just need to be cycled into place
    - Original position is solvable: because the subproblem is solvable
    - Original position had an even number of inversions: because the number of columns is odd, so if it was solvable this has to be true
  - Case 2: clockwise cyclic order of the other three tiles is (2, 5, 3)
    - Subproblem unsolvable: out of order; 5 and 3 can never switch their inversion
    - Original position has an odd number of inversions (so unsolvable): why?
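The two quantities used throughout this section can be computed directly. Below is a sketch of the Manhattan heuristic from the worked example (assuming the goal layout is 1 2 3 / 4 5 0, which matches the per-tile sums in the notes) and the inversion-parity solvability test from the claim proofs; the function names are my own:

```python
def manhattan(state, goal):
    """Sum over tiles (blank = 0 excluded) of horizontal + vertical
    distance from current to goal position."""
    where = {tile: (r, c)
             for r, row in enumerate(goal) for c, tile in enumerate(row)}
    return sum(abs(r - where[tile][0]) + abs(c - where[tile][1])
               for r, row in enumerate(state)
               for c, tile in enumerate(row) if tile != 0)

def inversions(state):
    """Count inversions in the flattened state (blank excluded)."""
    flat = [t for row in state for t in row if t != 0]
    return sum(1 for i in range(len(flat))
                 for j in range(i + 1, len(flat)) if flat[i] > flat[j])

def solvable_odd_width(state):
    """For boards with an odd number of columns: solvable iff the number
    of inversions is even (the condition preserved by every tile move)."""
    return inversions(state) % 2 == 0

# The worked example above: state 4 3 2 / 1 5 0, assumed goal 1 2 3 / 4 5 0.
state = ((4, 3, 2), (1, 5, 0))
goal = ((1, 2, 3), (4, 5, 0))
# manhattan(state, goal) == 4, matching the per-tile sums in the notes
```

Plugging `manhattan` (with the goal fixed) into an A* search over single-slide moves gives a shortest solution, since the heuristic never overestimates.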
A Serverless Cloud Integration For Quantum Computing M. Grossi\textsuperscript{a}, L. Crippa\textsuperscript{a,b}, A. Aita\textsuperscript{a}, G. Bartoli\textsuperscript{a}, V. Sammarco\textsuperscript{a}, E. Picca\textsuperscript{a}, N. Said\textsuperscript{a}, F. Tramonto\textsuperscript{c} and F. Mattei\textsuperscript{a} \\ \textsuperscript{a}IBM Italia S.p.A., Circonvallazione Idroscalo, 20090 Segrate (MI), Italy \\ \textsuperscript{b}Università di Parma, Dipartimento di Fisica, Parco Area delle Scienze, 7/A, 43124 Parma, Italy \\ \textsuperscript{c}IBM Client Innovation Center Srl, Via Lombardia 2/A, 20068 Peschiera Borromeo (MI), Italy \textbf{ARTICLE INFO} \textbf{Keywords:} \\ quantum computing \\ cloud computing \\ serverless \\ API integration \\ software architectures \textbf{ABSTRACT} Starting from the idea of Quantum Computing, a concept that dates back to the 80s, we come to the present day, where we can perform calculations on real quantum computers. This sudden development of technology opens up new scenarios that quickly lead to the desire, and the real possibility, of integrating this technology into current software architectures. The usage of frameworks that allow computation to be performed directly on quantum hardware poses a series of challenges. This document describes an architectural framework that addresses the problems of integrating an API-exposed quantum provider into an existing enterprise architecture, and it provides a minimum viable product (MVP) solution that merges classical and quantum computing in a basic scenario, with reusable code in a GitHub repository. The solution leverages a web-based frontend where the user can build and select applications/use cases and simply execute them without any further complication. Every triggered run relies on multiple backend options, including a scheduler managing the queuing mechanism to correctly schedule jobs and retrieve final results.
The proposed solution uses up-to-date cloud-native technologies (e.g. Cloud Functions, Containers, Microservices) and serves as a general framework to develop multiple applications on the same infrastructure. \section{1. Introduction} The design of software on distributed systems is based on essential pillars such as modularity, openness and reuse of components. Typically, the application is divided into logical layers, allowing targeted interventions on decoupled elements [5]. However, the usage of frameworks that allow computation to be performed on quantum hardware poses a series of challenges that will be extensively discussed in the next section. The goal of this project is to overcome these constraints and develop a software architecture that can be reused as a design pattern whenever dealing with similar problems. The result is a system able to receive requests from the user, send them to a quantum computer and receive back the result, assuring the ordering and coherence of events as well as the right format [23]. At the same time, the system must maintain the characteristic of openness: it should be possible to plug and unplug new functionalities. A first attempt to create a modular design to integrate quantum computer API services within a three-tier application was made by some of the authors in [24]. In that case, the problem of managing an asynchronous service was not addressed and the focus was on a different technology for the backend integration. Given this requirement, we decided to adopt a configuration-driven architecture where it is possible to add/remove components with minimal effort and without impacting the other parts of the system, which are loosely coupled [11]. On the other hand, being notified when the computation is done has been achieved by exploiting an Event Streams queue hosted on IBM Cloud [15], built on top of Apache Kafka [29].
The usage of a streaming application that reacts when a message is published on the channel, according to a publish/subscribe pattern [8], is the proposed solution towards a real-time system. However, since Event Streams promotes parallelism as the number of consumers listening on a specific topic grows, we had to develop a user-specific assignment as a secure procedure to manage multi-topic requests. The article is organized as follows: in Sec. 2 we describe the state of the art related to the integration of a new technology like the quantum computer in the current IT scenario, focusing on the effective challenges and feasible solutions. In Sec. 3 we elaborate on the proposed architecture from a general perspective, defining the data flow and the current requirements and limitations, in order to provide a possible best practice for implementing a real quantum-computing-based application. In Sec. 4 we describe in detail each component of the proposed architecture, not only as generic technological components but also suggesting specific items available on the market. We conclude this paper with a schematic reconstruction of the proposed solution, with a remark about the motivations, the technology and the methodology adopted, in Sec. 5. \section{2. State of Art} The idea of Quantum Computing is a concept that dates back to the 80s, thanks to the work of Benioff [3], who developed a model for a quantum mechanical Turing machine. A few years later, Feynman in his paper [9] proposed the idea that to simulate quantum systems one should use a computer responding to the laws of quantum mechanics. Feynman speculated that the synergistic usage of quantum superposition and entanglement in computation may enable the design of computing devices showing a high degree of parallelism, which grows exponentially with the size of the device itself.
While, at first, the physical build-up of such a system was something out of reach, the theoretical work on the field started, and in the 90s the most famous algorithms, like Shor's [28] and Grover's [12], were already available, and we understood that quantum computers could be used for more tasks than quantum simulations only. In 1996 DiVincenzo proposed a set of minimal 'criteria' for a physical system to be considered a Quantum Computer [7]. The unit of information of a quantum computer is the qubit, and there are several physical systems that can act as qubits. During the late 90s, many candidates for quantum bits started to show up, and today we can count many different promising technologies, among which are superconductors [22], ion traps [4] and photonics [1], each of which has some pros and cons at this time in terms of coherence time, working temperature, ease of control and scalability. The systems we know today are still prototypical, in the sense that they are very good for learning and testing the technology, but they are still not able to provide a real advantage in terms of computation time with respect to classical computers, even if a claim of a 'Quantum Supremacy' proof was made by Google in 2019 [2]. The paradigm for the quantum computing architecture of today - which will probably stay for years - is that of a service that is available in the Cloud. The first 5 qubits available publicly on the Cloud were released by IBM in 2016 [17], and now the 'IBM Quantum Experience' has several processors available in the Cloud to be used for free. The performance of a quantum computer is clearly related to the number of qubits, but also to other factors like noise, errors and so on. IBM researchers proposed a way to measure this performance named Quantum Volume [6]. Quantum Error Correction, which started in the 90s, and control software are presently very important ways to improve the performance of existing systems.
Current cloud architectures are focused only on a quantum-oriented experience where the user can create circuits to be run, can select a device on which to run them and then can send the job to the service. The job will be queued together with others and run when the device is available. The whole path, however, is not connected with 'classical' computation: no management of a quantum algorithm definition with respect to real input is taken into account. The inclusion of a quantum algorithm execution, from data encoding to the return of processed data to a classical computation flow, needs to be addressed. The aim of this paper is indeed to propose a reference architectural framework able to bridge the gap between classical and quantum computation for real problems. At the time of writing there are no other proposals in this sense. We therefore provide a general reference framework as well as a working minimum viable product with a specific technology adoption in the context of hybrid cloud. 2.1. Challenges With the rise of quantum technologies (like the IBM Quantum system [18]), enterprises and researchers are likely to use this kind of technology almost on a daily basis in a relatively near future. Thinking about a real scenario, an enterprise would probably be oriented toward progressively integrating quantum technologies into its up-and-running architecture to support and improve existing workloads, rather than entirely replacing existing workloads with quantum technologies. So, with this assumption, we developed an architectural framework that addresses the problems of integrating an API-exposed quantum provider, like the IBM Quantum system, into an existing enterprise architecture. There are two major challenges in this: the first one is related to the technological and theoretical knowledge requirements of adopting a quantum provider: currently the only way to integrate a regular workload (e.g.
a web application or a scheduled batch) is to use the Qiskit SDK [19] or the Rigetti SDK [20], which represent, at this time of writing, the only two real quantum computer full-stack systems. In this study, we focused on the Qiskit SDK. Nevertheless, the same limitations and challenges that will be presented in this chapter apply to any Qiskit-like API indiscriminately. Qiskit is an SDK currently available only for Python; this means that, without the proper decoupling logic, the only workloads integrable with the IBM Quantum system would be Python workloads. Moreover, even in a scenario in which a Python workload needs to be integrated with the IBM Quantum system, in order to use Qiskit a developer must know the logic of Quantum Circuit [21] composition. In an enterprise scenario, where a certain amount of effort is required to change the developer team composition once it has been defined (from different perspectives: timing, cost, logistics), using Qiskit can become harsh [30]. In general, we are talking about an accessibility problem, which is common also in the integration of a standard workload with HPC environments [32]. In fact, the second type of challenge is related to the similarities between the IBM Quantum system and an HPC environment. These two types of computational services share some common features, such as the complexity of the calculations and therefore the large processing times, and the concurrent access to the hardware resource from different client systems, which introduces a workload submission management and scalability problem. We decided to develop this framework on the cloud (IBM Cloud) for the extended number of services that an enterprise-grade cloud provider offers in terms of data storage, hosting and middleware technologies and message queues. The enterprise-grade support and SLAs of the major cloud providers allow the team to focus only on component design and development rather than on infrastructural and availability problems. 3.
Proposed architecture, requirements and data flow The framework developed aims to solve the two major challenges of accessibility and workload management. The accessibility challenge is tackled using a serverless FaaS (Functions as a Service) technology [10] that exposes a language-agnostic HTTP interface to receive input data. This kind of technology offers a finer-grained computational unit and is designed to execute small amounts of code in a short time when a specific action is performed (a trigger). This can be done both by deploying a single packaged and executable piece of software specifying the desired runtime (e.g. a .jar file with a Java runtime) or by deploying a custom Docker container [16]. To achieve accessibility, we designed a Docker container with Python as base image[1], extended with Qiskit, the open-source SDK developed by IBM and by the Qiskit Community to program quantum computers, as the quantum integration framework; it exposes a single HTTP API to receive raw data and transform it into a Quantum Circuit. The Framework provides a single Docker container (hence a single function with a single HTTP endpoint) for each specific algorithm that will be implemented. This is the decoupling logic that the Framework implements; hence it requires the availability of such technology in the environment chosen for deployment. In this way, a dedicated team can work only on the FaaS platform to expose multiple quantum algorithms through an HTTP API, and a different system administration team can invoke them simply by calling the designated endpoint, without worrying about the implementation of the algorithm itself. It is important to note that FaaS technology addresses also another architectural requirement: scalability. Since the Framework operates as a facilitator for accessing the IBM Quantum system, its components will be used by multiple external systems; this implies that it needs to be highly scalable to avoid critical bottlenecks in the job submission phase.
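As a concrete illustration of the one-function-per-algorithm decoupling, the entry point of such an action could be sketched as below. The `main(params)` shape follows the common convention for Python serverless actions; the parameter name `emoticon`, the two-character validation rule, and the stubbed circuit step are all hypothetical, not taken from the paper's code:

```python
def main(params):
    """Hypothetical serverless action: one HTTP-exposed function per
    quantum algorithm. Receives raw input as a dict and validates it
    before the (stubbed) circuit-building and submission step."""
    emoticon = params.get("emoticon", "")
    if len(emoticon) != 2:
        return {"error": "expected a 2-character emoticon"}
    # In the real action, a Qiskit circuit would be built from the input
    # and submitted to the quantum provider here; stubbed for the sketch.
    return {"statusCode": 200, "submitted": emoticon}

ok = main({"emoticon": ":)"})
bad = main({"emoticon": "nope"})
```

Because each algorithm lives behind its own endpoint, a caller only needs HTTP, never Qiskit, which is exactly the accessibility argument made above.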
Here comes another analogy with HPC architectures: the job scheduling system. This is crucial for the overall performance of the whole framework [26]. Thanks to the scalability offered by the FaaS technology, in order to face an increasing amount of requests, the Framework gives the possibility for many external systems to be integrated with it without loss of performance, handling Quantum Circuit composition and job submission. This is a further requirement on the FaaS component to achieve scalability. Despite the good scalability level in the job submission phase, to be effectively scalable the Framework needs another component to increase the scalability of the results retrieval phase. The adopted component is a queue with a Kafka interface. This queue component is used both by the FaaS element, which writes a submission report on a specific topic containing all the information needed to retrieve the output of the job together with the identifier of the client that submitted it, and by the results collector component (a Python application further referenced as BackEnd 2) to retrieve the job results and send them as a message on another specific topic of the queue. An important prerequisite for the queue component is high throughput, since it needs to handle all the messages produced by the FaaS component during the job submission phase. As for the user interface component (a Java application further referenced as BackEnd 1) and the results retriever component (a Python application), the only architectural requirement is the availability of general Java and Python runtimes to run the code. Since the code is ready to be containerized, a standard container runtime (like Docker or Podman) and a container registry would also work perfectly fine. In the context of the MVP we chose Cloud Foundry as the runtime environment, since it offers auto-configuration capabilities and handy administration interfaces that reduce the time and effort needed to set it up.
In Fig. 1 we show the proposed overall architecture overview diagram and flow, as a first hybrid quantum-classical approach mapped onto three different layers. In the following paragraph we describe a specific flow according to the first simple web application built: the representation of multiple 2-character emoticons using the superposition property of quantum computing. Figure 1: Architectural overview diagram and flow. 4. Framework components 4.1. FrontEnd Generally, the proposed framework User Interface (UI) leverages a set of JavaScript functions, asynchronous Ajax calls, and a well-defined data structure to get inputs and to read outputs. All the graphic parts of HTML and CSS, together with the "general" JavaScript functions used to build the UI itself, can be modelled to fit the customer's needs and templates: the UI can be implemented in a "vanilla" flavour, such as the included prototype, or using any kind of ready-to-use template (e.g., Bootstrap, React, etc.). The FrontEnd layer has the role of input collector and result display: it uses a set of client-side functions to perform the main integration tasks. The UI is built using the Carbon Design System components and functions, leveraging the mobile-first approach, and ensuring cross-browser and cross-device compatibility. The FrontEnd is built as a general UI containing all applications developed under the proposed framework.
However, it can be adapted to the user's needs by changing the structure and graphics of the pages, leaving the JavaScript functions unaltered while leveraging the proposed framework to communicate with the BackEnd 1. When a user interacts with the web page, a trigger is activated and the FrontEnd collects the data, putting them into the integration data structure and sending them to the BackEnd 1. Using a WebSocket connection, the page subscribes to a specific topic that belongs to BackEnd 1, to be able to retrieve results from the backend asynchronously. After a run is launched, a pop-up confirms the successful job submission to the backend. After the algorithm has run, the output is stored in the browser cache using a predefined data model, together with all the needed metadata. This data model provides a common JSON structure to pass data from the UI to the BackEnd 1, and vice versa. It can then be retrieved by the FrontEnd using a set of functions, and the results can be used to display the proper page. 4.2. BackEnd 1 As reported in the picture of the architecture, the BackEnd 1 is made of a single Java application hosted on IBM Cloud Foundry, which communicates with Cloudant [14] to read the configuration parameters needed to call the Cloud Functions. To be more specific, the data model is composed of just one collection called 'config', which contains the parameters that let BackEnd 1 call the Cloud Functions. In the following we show an example of how the BackEnd 1 is linked to the database; this allows it to retrieve data and to invoke the cloud functions:

```json
{
  "_id": "smile_super_position",
  "functionHttpMethod": "POST",
  "functionBackendUrl": "URL",
  "functionParams": {
    "body": "incomingRequestBody",
    "headers": {
      "Authorization": "IAMBearerToken",
      "Content-Type": "application/json",
      "Accept": "application/json"
    }
  }
}
```

The BackEnd 1 reads these data and translates this configuration into a RESTful API call.
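That translation step can be sketched as follows. This is a hypothetical Python rendering of logic the paper implements in Java; the placeholder-substitution rules (treating `"IAMBearerToken"` and `"incomingRequestBody"` as values to be filled in at call time) are my own reading of the record above, and the HTTP call itself is left out:

```python
def build_request(config, incoming_body, bearer_token):
    """Turn a 'config' record into the pieces of an HTTP call:
    method, URL, headers and body. Hypothetical sketch."""
    headers = dict(config["functionParams"]["headers"])
    # Placeholder values in the stored record are substituted at call time.
    if headers.get("Authorization") == "IAMBearerToken":
        headers["Authorization"] = f"Bearer {bearer_token}"
    body = (incoming_body
            if config["functionParams"]["body"] == "incomingRequestBody"
            else None)
    return {"method": config["functionHttpMethod"],
            "url": config["functionBackendUrl"],
            "headers": headers,
            "body": body}

config = {
    "_id": "smile_super_position",
    "functionHttpMethod": "POST",
    "functionBackendUrl": "URL",
    "functionParams": {
        "body": "incomingRequestBody",
        "headers": {"Authorization": "IAMBearerToken",
                    "Content-Type": "application/json",
                    "Accept": "application/json"}},
}
req = build_request(config, {"emoticon": ":)"}, "token-123")
```

Adding a new cloud function then really is just adding a new record: nothing in `build_request` is specific to one algorithm.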
All the details that make this possible are given: the method (POST), the endpoint, the authentication and the headers. Given a new cloud function, adding new functionality to this asset is as simple as adding another record to the config collection. This way, the architecture guarantees extensibility and can be easily reused in future integrations. Once the job is submitted to the quantum computer via the quantum API (provided in this case by Qiskit), an attribute that contains a random Event Streams topic (e.g.: topic-1234) is read from the queue. User segregation is achieved since each client will be listening on a single specific topic. Concerning the development, we adopted Java Spring Boot [31] as a framework to enable a "production-ready" environment, benefiting from the automatic Spring configuration and third-party library management. 4.3. Cloud Functions Serverless architectures allow developers to focus exclusively on business logic, without worrying about preparing the runtime or managing deployment and other infrastructure-related concerns, in this case a quantum computing integration [25]. The Cloud Functions component [13] defined in the proposed framework is the one responsible for building the quantum algorithm and executing the related circuit on the IBM Quantum provider, giving the possibility to choose between the available quantum hardware and simulators. From an architectural point of view, this is a block of instructions that runs on IBM Cloud Functions. This block of code, called an "action", is invoked by the BackEnd 1 to process a user request received from the front-end layer. The code of the action is written in Python, the language adopted by the Qiskit library. When the action is called from the BackEnd 1, a quantum circuit is created according to the input parameters. Then, an IBM Quantum job is defined and sent to the IBM Quantum system for execution.
Serverless architectures are gaining traction in cloud-based application architectures used by startups and mature organizations alike [27]. Instead of waiting for its final results, we collect its job ID\(^2\) as a parameter that is provided to the BackEnd 2 on an Event Streams queue. From an application point of view, the action is invoked by BackEnd 1 with the following set of parameters: - algorithm-specific input parameters needed to build the related quantum circuit; - the type of IBM Quantum backend requested: real quantum hardware, noiseless simulator or noisy simulator; - the client ID and process job ID\(^3\); these two parameters are simply transmitted to BackEnd 2 and are not used by the action. Once the aforementioned parameters are acquired, the action creates the corresponding quantum circuit. The action is authenticated via the quantum API used, in this case the IBM Quantum provider, and it runs the execution request on the selected quantum backend. When the algorithm execution has been selected to run on a real quantum device, the code automatically selects the least busy IBM Quantum backend according to the number of qubits needed to map the quantum circuit. When the selection is the quantum simulator, the backend is set to the "qasm simulator". This option is used both in case of a noiseless simulation and in case of a noisy simulation, where the action uses the "qasm simulator" activating the noise model of the least busy real device. After the execution command, the action sends to the Event Streams queue the IBM Quantum provider job ID, the selected quantum backend, and the client and job ID received from the BackEnd 1. In case of an error during the job submission to the IBM Quantum provider, or during the transmission of the parameters to the Event Streams queue, the action returns the associated error message. \(^2\)https://qiskit.org/documentation/_modules/qiskit/providers/ibmq 4.4.
BackEnd 2

BackEnd 2 is a containerized application whose main task is to retrieve the results of a quantum job as soon as they are available, regardless of the selected IBM Quantum backend, which can be any of the available quantum devices or cloud simulators. The activity of BackEnd 2 focuses on managing the job-ID polling, which is performed by the integration of two framework components:

- Kafka client: this component enables communication between BackEnd 1, BackEnd 2 and the Cloud Functions component (both in input and in output);
- Polling: this component uses Qiskit libraries to retrieve a job's final result and uses the Event Streams component to deliver results back to BackEnd 1.

BackEnd 2 is written in Python and deployed on an IBM Cloud Foundry instance. In addition, it leverages the Python multiprocessing module to manage more than one request at a time\(^4\). The main function is single-

qiskit_job_data:
  "qiskit_job_id": 5fb50ae3d211b0b01996bbbd,
  "backend_name": "ibmq_qasm_simulator",
  "clientID": 8303521272,
  "jobID": 8041372717,
  "timeUTC": 2020-11-18 11:48:50.315726

Figure 2: Example of input JSON.

\(^3\) Not the IBM Quantum job ID.
\(^4\) https://docs.python.org/3.7/library/multiprocessing.html

4.4.1. Kafka Client Component

The Event Streams component consists of two Python classes that implement connections with the input and output components of the queue. They lie on the same Event Streams instance, managing different topics.
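The message that Cloud Functions puts on the input topic, mirroring the fields of the example JSON in Figure 2, can be built and round-tripped with the standard json module. The values below are the figure's illustrative ones, not real identifiers, and the producer/consumer classes themselves are not shown here.

```python
import json

# Input-topic message mirroring Figure 2 (illustrative values only).
message = {
    "qiskit_job_id": "5fb50ae3d211b0b01996bbbd",  # IBM Quantum provider job ID
    "backend_name": "ibmq_qasm_simulator",         # selected quantum backend
    "clientID": 8303521272,                        # front-end client ID
    "jobID": 8041372717,                           # process job ID (not the IBM one)
    "timeUTC": "2020-11-18 11:48:50.315726",
}

# Serialize for the queue and parse it back, as the consumer class would.
wire = json.dumps(message)
received = json.loads(wire)
```

In the running system this string would be produced to and consumed from the Kafka topic rather than kept in a local variable.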
The Kafka-based Event Streams component realizes a clean decoupling between the quantum and the classical world, enabling the asynchronous communication mechanisms needed to reduce the connection time to the online quantum system. The input component receives from the Cloud Functions the JSON containing the call execution ID and the related job ID, produced when submitting a quantum circuit for execution on the IBM Quantum system. This part is implemented by a class that interacts with the queue in two ways:

- listen as a "consumer" to retrieve incoming messages and then activate the polling activity through the Polling component;
- write as a "producer" to re-insert unprocessed job IDs into the queue. This happens when the gap between the estimated completion time of the process and the actual timestamp exceeds a pre-defined threshold. This control has been introduced to reduce the polling time on the IBM Quantum system as much as possible.

The output component of the queue is where BackEnd 2 writes the results retrieved from IBM Quantum. The component then makes the results available to BackEnd 1, which post-processes them and returns them to the front end.

Figure 3: Polling mechanism overview.

The class is designed to write as a "producer" on the output queue and to send the extended input JSON together with the retrieved results or the error messages.

4.4.2. Polling Component

This component contains all the dependencies related to the Qiskit framework. Its task is to pick up, via the Event Streams component, an IBM Quantum job launched by the Cloud Functions on input from BackEnd 1, to monitor its progress and to return the results to BackEnd 1. This component is initialized in the main function as poll workers: each worker gets an event from the manager pool queue, processing it independently of the others.
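The requeue control described above, re-inserting a job whose estimated completion time is still far away instead of blocking on it, reduces to a small predicate. The function name, signature and threshold handling are illustrative, not the paper's actual code.

```python
from datetime import datetime, timedelta

def should_requeue(estimated_completion: datetime,
                   now: datetime,
                   threshold: timedelta) -> bool:
    """Decide whether to put a job back on the input topic rather than
    poll it: requeue when its estimated completion time is still further
    away than `threshold`, so the worker is free to process other jobs.
    (Illustrative sketch; names are not from the paper's implementation.)"""
    return estimated_completion - now > threshold
```

A worker would call this with the estimated time carried in the queue message; if it returns True, the message is written back as a producer, otherwise the worker waits for the job to finish.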
It consists of three modules:

- PollingIBMQWorker: contains the worker started in the main function;
- PollingIBMQ: a class whose "retrieveAndDeliverResult" method contains all the logic to retrieve and manage each job's result from IBM Quantum;
- QuantumUtils: contains functions for the authentication to the IBM Quantum cloud.

When each worker is started, the IBM Quantum account is enabled using its API key authentication, and it creates the connection to the IBM Quantum systems. As a first step, the PollingIBMQ class checks whether the job exists on the quantum backend and then checks its final state (e.g. DONE, CANCELED or ERROR). If the job is in one of the allowed final states, PollingIBMQ sends it to the output topic, and in case the final status is DONE it adds the obtained results. If the job is not complete and the quantum backend is a real quantum device, it checks the estimated completion time: if it is less than a defined time, it waits for the job to finish using the appropriate Qiskit function; otherwise, it sends the event back to the input topic so that it can process another job. In the case of a quantum simulator backend, it starts waiting for the job immediately, because Qiskit does not provide an estimated time for simulators and the waiting time is usually much shorter. The estimated completion time is then added to the JSON, so that it can be used by the Event Streams component to filter the event. If the job ends in a final error state, or if there is an error in the process, this component always returns a result with a description of the encountered error. This strategy prioritizes jobs based on result readiness, to better manage multiple requests and, likewise, to reduce the waiting time for the front-end users. Fig. 4 shows an example of the final JSON.

5.
Conclusions

The potential of an innovative technology such as quantum computing is evident in the number of scientific articles that show its advances. The real scope of the technology and its adoption depend on the possibility and practicality of integrating it into the current context. To facilitate this adoption, in this paper we define the criteria and critical issues for a possible integration of quantum computing within a cloud computing architecture.

Provided by the queue_info() function (https://qiskit.org/documentation/_modules/qiskit/providers/ibmq/job/ibmqjob.html, IBMQJob.queue_info).

Figure 4: Example of final JSON.

After an introduction to a general architecture and its operation using various cloud computing technologies, we also describe a real implementation via a Minimum Viable Product. In this context we developed a scheduler that allows users to correctly schedule and manage several quantum jobs. This is a simple solution that can be extended, and it can be downloaded from GitHub. In this scenario we used the IBM Quantum API tool; however, its role can be extended without loss of generality to any other quantum API tool. The proposed solution can be schematically described as follows:

Why

- nowadays, quantum algorithms can be run on different web platforms, where the user needs to write an algorithm using the available tools;
- programmatic access via API can be done using Software Development Kits (SDKs);
- as the interest in quantum computing continues to grow, it is urgent to find methods to fill the gap between a low-level approach and the high-level general user experience.
What

The proposed framework provides developers, UI designers and researchers with a system that:

- enables and speeds up the creation and deployment of web applications with a hybrid classical-quantum backend;
- creates a custom user experience based on the problem to be solved;
- spreads the usage of quantum computing to customers' production environments.

How

- This method has been realized in a well-defined framework built as an MVP on IBM Cloud and Quantum technologies:
  - IBM Cloud Foundry;
  - IBM Cloudant;
  - IBM Event Streams;
  - IBM Cloud Functions;
  - IBM Quantum.
- A new asynchronous job-submission approach created specifically to support hybrid classical-quantum web applications.
- Best practices: configuration-driven architecture, open source, loosely coupled architectural patterns to schedule computation (polling/batch).

Acknowledgements

We would like to thank G. De Sio and L. Savorana for fruitful discussions and interaction. We acknowledge the use of IBM Quantum for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. The current list of IBM trademarks is available at https://www.ibm.com/legal/copytrade.

References
MGA-Toolbox Version 0.1
Tool for Aligning Multiple Graphs

Thomas Föber, Marco Mernberger
{thomas, mernberger}@mathematik.uni-marburg.de
Philipps-University Marburg
Mathematics and Computer Science Department
Knowledge Engineering & Bioinformatics Group
35032 Marburg, Germany

January 24th, 2008

Contents

1 Introduction
  1.1 Multiple Graph Alignment
  1.2 Algorithms
2 Requirements and Installation
3 Usage
  3.1 Program settings
  3.2 Parser for graphs
  3.3 Output
  3.4 Example
A Problems and Solutions
B Implementation – An overview
  B.1 Greedy heuristic
  B.2 Star alignment
  B.3 Evolutionary algorithm

1 Introduction

1.1 Multiple Graph Alignment

Graphs have been widely used for modelling structured objects that are difficult to model by other means. Especially in the field of bioinformatics, graphs have been used for modeling chemical structures, proteins and protein substructures, as well as ligands that bind to them. Usually one is interested in finding similarities between such molecules, for example to detect chemical ligands that bind to the same proteins, or proteins that share similar functions. Furthermore, graphs have been used beyond that field as a general means of modeling structured objects, e.g. XML documents, text documents, handwritten words, social networks, or as object descriptors in images. Here, too, a comparison of graphs is interesting; for example, one might be interested in comparing handwritten texts. Therefore a tool is needed for the comparison of such objects.
Although several approaches exist, based on graph isomorphism, subgraph isomorphism and pattern recognition, most of them are limited to a pairwise comparison of graphs. Our tool enables us to compare multiple graphs at once by calculating a multiple graph alignment, so we are not limited to a pairwise comparison. A graph alignment in this respect is defined as a one-to-one mapping of vertices of different graphs onto each other, in a way that maximizes the correspondence of vertex labels as well as the mapping of edges with similar distances onto each other. We therefore expect the graphs to be vertex-labeled and edge-weighted, where edge labels are interpreted as distance values. Compared to sequence alignments, graph alignments present a much greater algorithmic challenge, as the vertices in a graph do not possess a natural order, as is the case for sequences. Moreover, a graph alignment tries to map the graphs in such a way that a maximum of edge identity as well as label identity is achieved. Since these alignments do not rely on exact matching techniques, inexact assignments of vertices and edges onto each other are allowed, albeit at a cost. We measure the quality of a given alignment by means of a scoring function in which we penalize edge mismatches and vertex mismatches separately and reward edge and vertex matches with scoring constants. By adjusting these scoring parameters, one is able to increase the influence of the vertex labels and decrease the influence of the edges on the calculated alignment, and vice versa. Since the number of possible mappings increases exponentially with the number of vertices in a graph, the task of finding the best possible alignment is very complex, and the MGA problem is NP-hard.

1.2 Algorithms

We have implemented two algorithms solving the MGA problem: an evolutionary algorithm (EA) and a greedy heuristic.
As shown in [3], the EA is able to produce alignments of much higher quality, albeit at the cost of a considerable increase in runtime. Therefore we recommend using the EA only on small graphs with up to 100 vertices. As a compromise between quality and runtime it is also possible to use a combination of greedy heuristic and EA, called star-EA. This algorithm uses the EA to calculate pairwise alignments and aggregates them into a multiple one using the star-align heuristic. Obviously this method is only usable on MGA problems with more than two graphs. The implementation of our algorithms is described in the appendix. Here we want to give a brief introduction to the three mentioned algorithms:

**EA:** The representation of the EA is able to encode a multiple alignment of \( m \) graphs. Unlike the algorithms described below, the optimization procedure can consider the whole problem at once. Genetic operators specially developed for the MGA problem are able to construct an optimal alignment reliably. However, MGA is a very complex problem and the search space grows exponentially with the number of vertices and graphs. In addition, the underlying fitness function is based on a sum-of-pairs measure, and its runtime also grows with the number of vertices and graphs. With some clever tricks we could dramatically reduce the runtime for evaluating the fitness function of a multiple alignment, but for very large numbers of vertices and graphs the optimization runtime still explodes. In that case, the EA in combination with star alignment can be used.

**EA + Star alignment:** This variation also uses the EA, with the difference that only pairwise alignments are calculated. The MGA problem with \( m \) graphs is decomposed into \( \frac{1}{2}m \cdot (m + 1) \) pairwise alignments that are solved by the EA. A heuristic called star alignment merges these \( \mathcal{O}(m^2) \) pairwise alignments into a multiple one.
We have shown [3] that this procedure leads to a significant speed-up of the optimization process at the cost of a slight decline in the fitness and quality of the alignment. As mentioned above, the runtime still grows exponentially with the number of vertices, so there are cases in which we have to replace the EA by a simple greedy heuristic.

**Greedy + Star alignment:** This procedure is similar to the EA + star alignment method and simply replaces the EA by a greedy heuristic. In a first step, this heuristic quickly solves a subgraph-isomorphism problem, producing a seed solution of the pairwise graph alignment problem. In a second step, this seed solution is greedily extended to an alignment by adding the remaining vertices to the current solution. Once again the MGA problem is decomposed as described above and the star alignment heuristic is used. In comparison to all other described approaches, this method is the quickest one, at the expense of a reduced quality of the results.

2 Requirements and Installation

Our program is implemented in Java 1.6. Since Java is platform independent, the MGA-Tool runs on any operating system. You just have to install the Java interpreter and copy the MGA-Tool file mga.jar into any folder. The latest version of the MGA-Tool is available at: www.uni-marburg.de/fb12/kebi/research/software/.

3 Usage

The MGA-Tool can be started by typing java -jar MGA.jar plus additional arguments in a command window with the correct path. java -jar MGA.jar -? returns a list of all possible settings, which are described in detail in the following section.

usage: MGA [options] [method] inputdirectory {inputdirectory2} outputdirectory {parameter}

OPTIONS:
'-?': Help, displays this text.
'-p': calculate pairwise alignments. 2 input directories needed. All files in the first directory are aligned with all files in the second directory.
'-m': calculate multiple alignment. 1 input directory needed.
All files in the directory are aligned with each other.

METHOD:
'EA': Use Evolutionary Algorithm for calculation.
'EA+S': Use Evolutionary Algorithm + star alignment for calculation.
'GS+S': Use Greedy Strategy + star alignment for calculation.

PARAMETER:
'stallGen:x': stop EA after x stall generations. Default: x = infinity.
'stallTime:x': stop EA after x stall seconds. Default: x = infinity.
'time:x': stop EA after x seconds. Default: x = infinity.
'gen:x': stop EA after x generations. Default: x = infinity.
'fitness:x': stop EA at a fitness of x. Default: x = infinity.
'mu:x': population size. Default: x = 4.
'nu:x': selective pressure. Default: x = 20.
'delta:x': cutoff value for edge distances. Default: x = 11.
'nmm:x': penalty for vertex mismatches. Default: x = -5.
'nm:x': bonus for vertex matches. Default: x = 1.
'd:x': penalty for dummy matches. Default: x = -2.5.
'emm:x': penalty for edge mismatches. Default: x = -0.2.
'em:x': bonus for edge matches. Default: x = 0.1.
'eps:x': tolerance threshold for edge length differences. Default: x = 0.2.

3.1 Program settings

First, the user has to specify whether a pairwise alignment or a multiple alignment should be calculated. This is done with the parameter `-p` or `-m`, respectively. The former requires two input directories: all graphs in the first directory are (pairwise) aligned with the graphs in the second directory, and the results are stored in the specified output directory. The latter requires one input directory: all graphs in this directory are aligned with each other and the resulting multiple alignment is stored in the output directory. Before specifying the input and output directories, the calculation method must be chosen. There are three methods available in our toolbox:

- **-EA** EA for calculating the multiple graph alignment
- **-EA+S** EA for calculation of \( \mathcal{O}(m^2) \) pairwise alignments and the subsequent merging of these by the star-align procedure.
- **-GS+S** Greedy heuristic for calculation of \( \mathcal{O}(m^2) \) pairwise alignments and the subsequent merging of these by the star-align procedure.

Typically the first method needs the most time for calculation but is the most exact. The last procedure is the fastest one, but as a heuristic it cannot guarantee the optimal solution. If `-EA` or `-EA+S` is used for solving the MGA problem, EA-typical parameters can be specified. The population size \( \mu \) can be set with mu:x, and a selective pressure of \( \nu \) analogously with nu:x. Other EA-typical parameters are not settable, since parameter tuning [1] indicated that these are the only parameters that influence runtime and result. The parameter tuning also yields recommended values; if mu and nu are not set by the user, these recommended values are used during optimization. The EA further needs one or several termination criteria. **Stall generations** are generations in which no improvement occurs; they are a good indicator of the progress of the optimization. Analogously, it is possible to specify **stall time** as a termination criterion. If the optimal **fitness** is known, we can terminate once this fitness is reached. Termination criteria not based on progress are **time** and **generations**: if the maximum time or number of generations is reached, the optimization procedure is left. All these parameters can be set with the parameter:value syntax described above. If `-GS+S` is used as the solver, the described parameters are not required, because the greedy and star-alignment heuristics are parameterless. Our fitness function is also parameterizable: as described in section 1.1, we can specify constants for a match \(nm\) or mismatch \(nmm\) of assigned vertices, as well as for edges (\(em\) and \(emm\), respectively).
If the graphs differ in size (or if it is better not to assign a vertex of one graph to any vertex of another graph), some vertices are matched onto dummy vertices, which is penalized by a constant \(d\). Note that we are solving a maximization problem, so the match constants should be larger than the mismatch and dummy constants. Especially for biological applications, we obtain a complete graph if we encode the distances between vertices as labeled edges. Sometimes these graphs contain too much information, so it is advantageous to remove all edges with a label greater than \(\delta\). This is done by setting the constant \(\delta\) to a specific distance threshold. As edge labels are interpreted as distances, one can specify a tolerance threshold \(\epsilon\) that sets the maximum difference in edge lengths up to which two edges are still counted as a match. All settings described here use the parameter:value syntax.

3.2 Parser for graphs

The MGA-Toolbox can distinguish between the mol2, pseudoc and rlbcoor formats [4]. Files ending with mol2, pseudoc.txt or rlbcoor are automatically parsed by the corresponding parser. Additionally, we have implemented a general parser reading graphs in a simple format based on Matlab syntax. This format stores a graph in a simple text file ending with .graph (not with txt!) and containing exactly two lines. The first line encodes the vertex labels separated by commas; in the second line, each row of the adjacency matrix is separated by a semicolon and the entries within a row by commas.

A, B, C, A,
-,15,18,15,-;15,-,18,15,-;25,-,-,25,-

Figure 1: Example for illustrating the .graph format

Figure 1 shows a simple vertex-labeled and edge-weighted graph with the corresponding encoding in graph format ("-" indicates no edge).

3.3 Output

The calculated alignments are stored in an mga file. This file is human readable and can be opened by a text editor.
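As a companion to the .graph format described in section 3.2, the two-line layout (labels on line one; adjacency rows split by ';', entries by ',', '-' meaning "no edge") can be parsed in a few lines. This Python sketch is ours, not part of the Java toolbox, and the sample graph in the comment is made up rather than Figure 1's data.

```python
def parse_graph(text: str):
    """Parse the two-line .graph format: line 1 holds the comma-separated
    vertex labels, line 2 the adjacency-matrix rows (rows split by ';',
    entries by ',', '-' meaning "no edge", encoded here as None)."""
    label_line, matrix_line = [l.strip() for l in text.strip().splitlines()]
    # Trailing commas (as in Figure 1's label line) are tolerated.
    labels = [s.strip() for s in label_line.split(",") if s.strip()]
    matrix = [
        [None if e.strip() == "-" else float(e) for e in row.split(",")]
        for row in matrix_line.split(";") if row.strip()
    ]
    return labels, matrix

# A made-up three-vertex example in .graph syntax:
labels, matrix = parse_graph("A, B, C\n-,15,18;15,-,25;18,25,-")
```

Edge weights come back as floats and missing edges as None, which matches the manual's interpretation of edge labels as distance values.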
The name of this file is based on the names of the graphs that are aligned and is a composition of the first and the second graph separated with "_._.". In the multiple case, the result file is named after the first input graph plus the keyword multiple. The structure is as follows:

1. name of the aligned graphs
2. number of graphs
3. calculation method
4. time required for calculation
5. score of the alignment
6. required generations (if the EA was used for solving the MGA problem)
7. reason for stopping (interesting if more than one criterion was used)
8. used scoring parameters
9. alignment matrix: each column codes an assignment of single vertex indices from each graph; if a -1 exists in cell (i, j), a dummy vertex is used for graph i in alignment column j instead of a vertex from graph i
10. for each graph, based on the alignment ordering:
(a) labels: vertex labels
(b) distance: adjacency matrix

3.4 Example

We have two classes of graphs in two folders F1 and F2 and want to align each graph of each folder pairwise and store the results in the folder res. As the method we want to use an EA to achieve the best results. The stopping criterion is 500 stall generations and we want to use the standard parameters. The command is as follows:

```
java -jar MGA.jar -p EA /home/thomas/F1 /home/thomas/F2 /home/thomas/res stallGen:500
```

A Problems and Solutions

<table>
<thead>
<tr>
<th>Problem</th>
<th>Reason / Solution</th>
</tr>
</thead>
<tbody>
<tr>
<td>The program does not stop.</td>
<td>Check the parameters you have given.
If you specified stop criteria that cannot possibly be met (such as a certain fitness value), the program will not stop.</td> </tr> <tr> <td>The program does not use the right files for the alignment calculation.</td> <td>Make sure you have specified the input and output folders in the right order.</td> </tr> <tr> <td>The program breaks with a &quot;no such directory&quot; error.</td> <td>Make sure your directories exist, and be sure to specify two input directories for the pairwise case. These can of course be the same, but you have to specify the directory twice.</td> </tr> <tr> <td>My parameters are ok, but the program still does not seem to stop.</td> <td>MGA is a very complex problem that needs time to be solved. If you use the EA, change the stopping criteria or switch to the greedy heuristic to save time.</td> </tr> <tr> <td>The calculated alignment is obviously not optimal.</td> <td>If you used any approach other than the EA, the program cannot guarantee to reach a global optimum. If you used the EA and it is still not optimal, try again and allow a longer runtime by changing the stop criteria.</td> </tr> <tr> <td>The program did calculate an alignment, but it is not what we expected.</td> <td>Be aware that the scoring parameters have a strong impact on the definition of a &quot;good&quot; alignment. By varying these, one can emphasize the importance of vertex labels or edge weights, respectively. E.g. if the alignments show perfect vertex label assignments but completely neglect structural similarities, try to increase the match boni for edges or lower the match boni for vertices.</td> </tr> <tr> <td>How can the program be stopped?</td> <td>Use your operating system to kill the process (i.e.
'kill' on UNIX systems or Ctrl-C on Windows systems).</td> </tr> <tr> <td>I can not parse my graphs.</td> <td>The files storing the graphs do not end with &quot;.mol2&quot;, &quot;.pseudoc.txt&quot;, &quot;.rlbcoor&quot; or &quot;.graph&quot;.</td> </tr> <tr> <td>My graph-file has the correct ending but parsing is not possible.</td> <td>1. Your file does not use the standard syntax defined in [4]. Translate the file into the '.graph' format (cf. section 3.2) and parse it again. 2. Windows systems hide well-known file endings, so it is possible that e.g. a file test.mol2.txt is displayed as test.mol2.</td> </tr> <tr> <td>The program breaks with a <em>Null Pointer Exception</em>.</td> <td>Options, method, directories and parameters were given in the wrong order when calling the program from the command window.</td> </tr> <tr> <td>Is a graphical user interface available?</td> <td>No!</td> </tr> </tbody> </table>

If you have any problems or comments concerning the algorithm or the implementation, feel free to contact us.

B Implementation – An overview

B.1 Greedy heuristic

The greedy heuristic uses the Bron-Kerbosch algorithm for the calculation of a seed solution for a pairwise alignment. Here, vertices consist of tuples containing one vertex from graph 1 and one vertex from graph 2. Let \(G_1 = (V_1, E_1)\) be graph one and \(G_2 = (V_2, E_2)\) be graph two. For all \(v_i \in V_1\) and all \(v_j \in V_2\), the tuple \((v_i, v_j)\) is added to the product graph if both vertices have the same label. An edge is drawn between two product vertices if the corresponding vertices are connected in both graphs or not connected in both graphs. A clique algorithm then searches for the first 100 cliques in the product graph and takes the largest clique as the seed solution.
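The product-graph construction described above can be sketched as follows. This is an illustration of the idea in Python, not the toolbox's Java implementation; edges are represented as sets of index pairs, and pairs sharing a vertex are excluded so the seed respects the one-to-one mapping.

```python
from itertools import combinations

def product_graph(labels1, edges1, labels2, edges2):
    """Build the product graph that seeds the greedy heuristic: vertices are
    label-compatible pairs (v_i, v_j); two pairs are joined iff the underlying
    vertices are connected in both graphs or in neither. Graphs are given as
    label lists plus sets of undirected edges over vertex indices."""
    vertices = [(i, j)
                for i, a in enumerate(labels1)
                for j, b in enumerate(labels2)
                if a == b]

    def connected(edges, u, v):
        return (u, v) in edges or (v, u) in edges

    prod_edges = set()
    for (i1, j1), (i2, j2) in combinations(vertices, 2):
        if i1 == i2 or j1 == j2:  # each vertex may be matched only once
            continue
        if connected(edges1, i1, i2) == connected(edges2, j1, j2):
            prod_edges.add(((i1, j1), (i2, j2)))
    return vertices, prod_edges
```

A clique in this product graph corresponds to a set of mutually compatible vertex assignments, which is exactly what the Bron-Kerbosch search exploits.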
The algorithm then extends the solution greedily, adding the vertex tuple that has the largest number of neighbours inside the seed solution. This is done until all remaining vertices have been added to the solution. In the multiple case, all possible pairwise alignments are calculated and subsequently merged via the star-alignment merging algorithm.

B.2 Star alignment

The star-align merging algorithm merges a number of pairwise alignments into one multiple alignment. For a given set of graphs, all pairwise alignments are calculated, either by the greedy algorithm (greedy approach) or by the EA (star-EA). We subsequently calculate a multiple alignment by using one graph as a pivot graph and aligning all other graphs as calculated in the pairwise alignments. For each vertex in the pivot graph, the corresponding matched vertex in the particular pairwise alignment is mapped onto it. If dummy vertices occur, they are mapped onto dummies in the corresponding pairwise alignment. This is done for each pairwise alignment in which the pivot graph is aligned with another graph, starting with the biggest with respect to the number of vertex entries. If an alignment with a shorter length is eventually added, the remaining positions are filled with dummy vertices. As it is unclear which graph is best suited as a pivot graph, we try each graph as the pivot and pick the multiple alignment yielding the highest score.

B.3 Evolutionary algorithm

The EA encodes the MGA problem as an \(m \times n\) matrix, where \(m\) is the number of graphs and \(n\) is variable. We use ES-typical operators [2] for mating selection and selection, where only plus-selection is used for selection. As termination criteria we have implemented stall generations, stall time, time, generations and fitness, which are realized as simple if-conditions.
The initialization is realized as follows: we determine the number of vertices in the largest graph, $maxL$, and set the genome length $n$ to $maxL + 1$. Now an $m \times n$ matrix is created, and for each row a random permutation of 0 to $maxL$ is generated. If any of the graphs does not contain enough vertices, the remaining cells $(i, j)$ are set to "-1", which indicates a dummy vertex. For mutation we use a very simple mechanism that selects two cells in the same row and swaps them. We have also implemented much more complex operators (e.g., using a self-adaptation mechanism), but experimental results have shown that the simple mutation performs much better. The recombination is realized for $\rho$ individuals, so it is possible to switch off recombination by setting $\rho$ to one. Once again, experimental results have shown that recombination should be turned on and $\rho$ set to 2. The recombination operator takes two uniformly randomly chosen parental individuals, cuts them at a uniformly randomly chosen row, and exchanges the blocks created in this way. Note that alignment indices are not ordered in the alignment, so a simple merge does not show the desired effect of improving fitness. Therefore we use the row at which the split occurs as a pivot row to ensure the right order. A genome-length adaptation mechanism ensures that we always work with a near-optimal genome length. It is clear that a too-short genome length can result in a suboptimal alignment; on the other hand, a too-large genome length enlarges the search space enormously and slows down the optimization process. Another advantage is that the genome length need not be specified by the user. We ensure (cf. initialization) that individuals always carry one dummy column. Such a dummy column will be integrated into the alignment if dummies are required. Because this integration takes time, we check the number of dummy columns only with a small probability. Three cases can occur: 1.
We have exactly one dummy column: nothing has to be done. 2. We have $k > 1$ dummy columns: we can delete $k - 1$ of them, since this indicates that we are dealing with too many dummies. 3. We have no dummy columns: all dummies are in use, so we have to add a new dummy column.

Finally, we want to mention that we use the sum-of-pairs scoring scheme to evaluate individuals. This scoring scheme has the disadvantage that its calculation time increases with the number of vertices as well as the number of graphs. Therefore we use two techniques that allow us to decrease the computational effort. First, we calculate for each column a histogram of vertex labels in linear time. With the help of a simple arithmetical expression we can then calculate the score of the column in constant time. Overall this results in a reduction from $O(m^2)$ to $O(m)$. A similar trick is used to decrease the runtime of the edge-score calculation. Edges are treated as a match if their distance is less than $\epsilon$. Checking an assignment of edges once again needs $O(m^2)$ time. By sorting the edges according to their length we can decrease this to $O(m \cdot \log(m))$.

References
Message-Passing Computing

Basics of Message-Passing Programming using user-level message passing libraries

Two primary mechanisms needed:

1. A method of creating separate processes for execution on different computers
2. A method of sending and receiving messages

Single Program Multiple Data (SPMD) model

Different processes merged into one program. Within program, control statements select different parts for each processor to execute. All executables started together - static process creation. Basic MPI way.

Multiple Program Multiple Data (MPMD) Model

Separate programs for each processor. Master-slave approach usually taken. One processor executes master process. Other processes started from within master process with `spawn()` - **dynamic process creation**. PVM way.

Basic "point-to-point" Send and Receive Routines

Passing a message between processes using `send()` and `recv()` library calls (generic syntax; actual formats later):

```
Process 1                               Process 2
   x                                       y
   send(&x, 2);  ---movement of data--->   recv(&y, 1);
```

Synchronous Message Passing

Routines that actually return when message transfer completed.

**Synchronous send routine** - waits until complete message can be accepted by the receiving process before sending the message.

**Synchronous receive routine** - waits until the message it is expecting arrives.

Synchronous routines intrinsically perform two actions: they transfer data and they synchronize processes. Synchronous `send()` and `recv()` library calls use a 3-way protocol, whether (a) `send()` occurs before `recv()` or (b) `recv()` occurs before `send()`.

Asynchronous Message Passing

Routines that do not wait for actions to complete before returning. Usually require local storage for messages. More than one version depending upon the actual semantics for returning. In general, they do not synchronize processes but allow processes to move forward sooner.
**Must be used with care.**

**MPI Definitions of Blocking and Non-Blocking**

**Blocking** - return after their local actions complete, though the message transfer may not have been completed.

**Non-blocking** - return immediately. Assumes that data storage to be used for transfer is not modified by subsequent statements prior to being used for transfer; it is left to the programmer to ensure this.

(Note: these terms may have different interpretations in other systems.)

How message-passing routines can return before message transfer completed: a *message buffer* is needed between source and destination to hold the message.

Asynchronous (blocking) routines changing to synchronous routines

Once local actions completed and message is safely on its way, sending process can continue with subsequent work. Buffers are only of finite length, and a point could be reached when the send routine is held up because all available buffer space is exhausted. Then the send routine will wait until storage becomes available again - i.e., the routine then behaves as a synchronous routine.

Message Tag

Used to differentiate between different types of messages being sent. Message tag is carried within message. If special type matching is not required, a wild card message tag is used, so that the `recv()` will match with any `send()`.

Message Tag Example

To send a message, $x$, with message tag 5 from a source process, 1, to a destination process, 2, and assign to $y$:

```
Process 1                     Process 2
send(&x, 2, 5);               recv(&y, 1, 5);
```

The `recv()` waits for a message from process 1 with a tag of 5.

"Group" message passing routines

Apart from point-to-point message passing routines, have routines that send message(s) to a group of processes or receive message(s) from a group of processes - higher efficiency than separate point-to-point routines although not absolutely necessary.

Broadcast

Sending same message to all processes concerned with problem. *Multicast* - sending same message to defined group of processes.
```
Process 0          Process 1          Process n - 1
data (buf)         data               data
bcast();           bcast();           bcast();
```

(MPI form: `MPI_Bcast()`)

Scatter

Sending each element of an array in root process to a separate process. Contents of $i$th location of array sent to $i$th process. (MPI form: `MPI_Scatter()`)

Gather

Having one process collect individual values from set of processes. (MPI form: `MPI_Gather()`)

Reduce

Gather operation combined with specified arithmetic/logical operation. Example: values from processes 0 to n - 1 could be gathered and then added together by the root. (MPI form: `MPI_Reduce()`)

PVM (Parallel Virtual Machine)

Perhaps first widely adopted attempt at using a workstation cluster as a multicomputer platform, developed by Oak Ridge National Laboratory. Available at no charge. Programmer decomposes problem into separate programs (usually a master program and a group of identical slave programs). Each program compiled to execute on specific types of computers. Set of computers used on a problem first must be defined prior to executing the programs (in a hostfile). Message routing between computers done by PVM daemon processes installed by PVM on computers that form the *virtual machine*. Can have more than one process running on each computer. The MPI implementation we use is similar.

PVM Message-Passing Routines

All PVM send routines are nonblocking (or asynchronous in PVM terminology). PVM receive routines can be either blocking (synchronous) or nonblocking. Both message tag and source wild cards available.

Basic PVM Message-Passing Routines

**pvm_psend()** and **pvm_precv()** system calls. Can be used if data being sent is a list of items of the same data type.
**Diagram:** Process 1 packs an array of data into its send buffer and continues after calling `pvm_psend();` Process 2 waits in `pvm_precv();` until the message arrives in its receive array.

Full list of parameters for `pvm_psend()` and `pvm_precv()`:

```
pvm_psend(int dest_tid,   int msgtag, char *buf, int len, int datatype)
pvm_precv(int source_tid, int msgtag, char *buf, int len, int datatype)
```

Sending Data Composed of Various Types

Data packed into send buffer prior to sending data. Receiving process must unpack its receive buffer according to format in which it was packed. Specific packing/unpacking routines for each datatype.

Example

```
Process_1                          Process_2
pvm_initsend();                    pvm_recv(process_1 ...);
pvm_pkint( ... &x ...);            pvm_upkint( ... &x ...);
pvm_pkstr( ... &s ...);            pvm_upkstr( ... &s ...);
pvm_pkfloat( ... &y ...);          pvm_upkfloat( ... &y ...);
pvm_send(process_2 ...);
```

The send buffer holds x, s, and y in packed order; the receive buffer must be unpacked in the same order.

Broadcast, Multicast, Scatter, Gather, and Reduce

`pvm_bcast()`, `pvm_scatter()`, `pvm_gather()`, and `pvm_reduce()` operate with a defined group of processes. A process joins a named group by calling `pvm_joingroup()`. The multicast operation, `pvm_mcast()`, is not a group operation.

Sample PVM program.
```c
/* Master program */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pvm3.h>
#define SLAVE "spsum"
#define PROC 10
#define NELEM 1000

main() {
    int mytid, tids[PROC];
    int n = NELEM, nproc = PROC;
    int no, i, who, msgtype;
    int data[NELEM], result[PROC], tot = 0;
    char fn[255];
    FILE *fp;

    mytid = pvm_mytid();                /* Enroll in PVM */

    /* Start slave tasks */
    no = pvm_spawn(SLAVE, (char**)0, 0, "", nproc, tids);
    if (no < nproc) {
        printf("Trouble spawning slaves\n");
        for (i = 0; i < no; i++) pvm_kill(tids[i]);
        pvm_exit(); exit(1);
    }

    /* Open input file and initialize data */
    strcpy(fn, getenv("HOME"));
    strcat(fn, "/pvm3/src/rand_data.txt");
    if ((fp = fopen(fn, "r")) == NULL) {
        printf("Can't open input file %s\n", fn);
        exit(1);
    }
    for (i = 0; i < n; i++) fscanf(fp, "%d", &data[i]);

    /* Broadcast data to slaves */
    pvm_initsend(PvmDataDefault);
    msgtype = 0;
    pvm_pkint(&nproc, 1, 1);
    pvm_pkint(tids, nproc, 1);
    pvm_pkint(&n, 1, 1);
    pvm_pkint(data, n, 1);
    pvm_mcast(tids, nproc, msgtype);

    /* Get results from slaves */
    msgtype = 5;
    for (i = 0; i < nproc; i++) {
        pvm_recv(-1, msgtype);
        pvm_upkint(&who, 1, 1);
        pvm_upkint(&result[who], 1, 1);
        printf("%d from %d\n", result[who], who);
    }

    /* Compute global sum */
    for (i = 0; i < nproc; i++) tot += result[i];
    printf("The total is %d.\n", tot);

    pvm_exit();                         /* Program finished. Exit PVM */
    return(0);
}
```

```c
/* Slave program (spsum) */
#include <stdio.h>
#include <pvm3.h>
#define PROC 10
#define NELEM 1000

main() {
    int mytid, tids[PROC], me, master;
    int n, nproc, i, x, low, high, msgtype;
    int data[NELEM], sum = 0;

    mytid = pvm_mytid();

    /* Receive data from master */
    msgtype = 0;
    pvm_recv(-1, msgtype);
    pvm_upkint(&nproc, 1, 1);
    pvm_upkint(tids, nproc, 1);
    pvm_upkint(&n, 1, 1);
    pvm_upkint(data, n, 1);

    /* Determine my tid */
    for (i = 0; i < nproc; i++)
        if (mytid == tids[i]) { me = i; break; }

    /* Add my portion of data */
    x = n / nproc;
    low = me * x;
    high = low + x;
    for (i = low; i < high; i++) sum += data[i];

    /* Send result to master */
    pvm_initsend(PvmDataDefault);
    pvm_pkint(&me, 1, 1);
    pvm_pkint(&sum, 1, 1);
    msgtype = 5;
    master = pvm_parent();
    pvm_send(master, msgtype);

    pvm_exit();                         /* Exit PVM */
    return(0);
}
```

MPI (Message Passing Interface)

Standard developed by group of academics and industrial partners to foster more widespread use and portability. Defines routines, not implementation. Several free implementations exist.

MPI Process Creation and Execution

Purposely not defined and will depend upon the implementation. Only static process creation is supported in MPI version 1. All processes must be defined prior to execution and started together. *Originally SPMD model of computation.* MPMD also possible with static creation - each program to be started together specified.

Communicators

Defines *scope* of a communication operation. Processes have ranks associated with communicator. Initially, all processes enrolled in a "universe" called `MPI_COMM_WORLD` and each process is given a unique rank, a number from 0 to $n - 1$, where there are $n$ processes. Other communicators can be established for groups of processes.

Using the SPMD Computational Model

```c
main (int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    .
    .
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank); /* find process rank */
    if (myrank == 0)
        master();
    else
        slave();
    .
    .
    MPI_Finalize();
}
```

where `master()` and `slave()` are procedures to be executed by master process and slave process, respectively.

"Unsafe" Message Passing

MPI specifically addresses unsafe message passing.
Unsafe message passing with libraries: (a) intended behavior vs. (b) possible behavior (figure).

MPI Solution: "Communicators"

A communication domain that defines a set of processes that are allowed to communicate between themselves. The communication domain of the library can be separated from that of a user program. Used in all point-to-point and collective MPI message-passing communications.

Default Communicator

`MPI_COMM_WORLD` exists as the first communicator for all the processes existing in the application. A set of MPI routines exists for forming communicators. Processes have a "rank" in a communicator.

Point-to-Point Communication

PVM-style packing and unpacking of data is generally avoided by defining an MPI datatype.

Blocking Routines

Return when they are *locally complete* - when the location used to hold the message can be used again or altered without affecting the message being sent. A blocking send will send the message and return. This does not mean that the message has been received, just that the process is free to move on without adversely affecting the message.
Parameters of the blocking send:

```
MPI_Send(buf, count, datatype, dest, tag, comm)
```

- `buf` - address of send buffer
- `count` - number of items to send
- `datatype` - datatype of each item
- `dest` - rank of destination process
- `tag` - message tag
- `comm` - communicator

Parameters of the blocking receive:

```
MPI_Recv(buf, count, datatype, src, tag, comm, status)
```

- `buf` - address of receive buffer
- `count` - maximum number of items to receive
- `datatype` - datatype of each item
- `src` - rank of source process
- `tag` - message tag
- `comm` - communicator
- `status` - status after operation

Example

To send an integer $x$ from process 0 to process 1:

```c
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
    int x;
    MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
    int x;
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
```

Nonblocking Routines

Nonblocking send - `MPI_Isend()`, will return "immediately" even before source location is safe to be altered.

Nonblocking receive - `MPI_Irecv()`, will return even if there is no message to accept.

Nonblocking Routine Formats

```
MPI_Isend(buf, count, datatype, dest, tag, comm, request)
MPI_Irecv(buf, count, datatype, source, tag, comm, request)
```

Completion detected by `MPI_Wait()` and `MPI_Test()`. `MPI_Wait()` waits until operation completed and returns then. `MPI_Test()` returns with flag set indicating whether operation completed at that time. Need to know whether particular operation completed. Determined by accessing the `request` parameter.
Example

To send an integer $x$ from process 0 to process 1 and allow process 0 to continue:

```c
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
    int x;
    MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req1);
    compute();
    MPI_Wait(&req1, &status);
} else if (myrank == 1) {
    int x;
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
```

Four Send Communication Modes

Standard Mode Send - not assumed that corresponding receive routine has started. Amount of buffering not defined by MPI. If buffering provided, send could complete before receive reached.

Buffered Mode - send may start and return before a matching receive. Necessary to specify buffer space via routine `MPI_Buffer_attach()`.

Synchronous Mode - send and receive can start before each other but can only complete together.

Ready Mode - send can only start if matching receive already reached, otherwise error. *Use with care.*

Each of the four modes can be applied to both blocking and nonblocking send routines. Only the standard mode is available for the blocking and nonblocking receive routines. Any type of send routine can be used with any type of receive routine.

Collective Communication

Involves set of processes, defined by an intra-communicator. Message tags not present.
Broadcast and Scatter Routines

The principal collective operations operating upon data are:

- **MPI_Bcast()** - Broadcast from root to all other processes
- **MPI_Gather()** - Gather values for group of processes
- **MPI_Scatter()** - Scatters buffer in parts to group of processes
- **MPI_Alltoall()** - Sends data from all processes to all processes
- **MPI_Reduce()** - Combine values on all processes to single value
- **MPI_Reduce_scatter()** - Combine values and scatter results
- **MPI_Scan()** - Compute prefix reductions of data on processes

Example

To gather 10 items from each process of the group into process 0, using dynamically allocated memory in the root process, we might use

```c
int data[10];   /* data to be gathered from processes */
int *buf;

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);            /* find rank */
if (myrank == 0) {
    MPI_Comm_size(MPI_COMM_WORLD, &grp_size);      /* find group size */
    buf = (int *)malloc(grp_size*10*sizeof(int));  /* allocate memory */
}
MPI_Gather(data, 10, MPI_INT, buf, 10, MPI_INT, 0, MPI_COMM_WORLD);
```

Note that `MPI_Gather()` gathers from all processes, including the root, and that the receive count (10 here) is the number of items received from each process, not the total.

Barrier

As in all message-passing systems, MPI provides a means of synchronizing processes by stopping each one until they all have reached a specific "barrier" call.

Sample MPI program.
```c
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#define MAXSIZE 1000

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int data[MAXSIZE], i, x, low, high, myresult = 0, result;
    char fn[255];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {   /* Open input file and initialize data */
        strcpy(fn, getenv("HOME"));
        strcat(fn, "/MPI/rand_data.txt");
        if ((fp = fopen(fn, "r")) == NULL) {
            printf("Can't open the input file: %s\n", fn);
            exit(1);
        }
        for (i = 0; i < MAXSIZE; i++) fscanf(fp, "%d", &data[i]);
    }

    /* Broadcast data */
    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);

    /* Add my portion of data */
    x = MAXSIZE / numprocs;
    low = myid * x;
    high = low + x;
    for (i = low; i < high; i++) myresult += data[i];
    printf("I got %d from %d\n", myresult, myid);

    /* Compute global sum */
    MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) printf("The sum is %d.\n", result);

    MPI_Finalize();
    return 0;
}
```

Debugging and Evaluating Parallel Programs

Visualization Tools

Programs can be watched as they are executed in a space-time diagram (or process-time diagram): one row per process plotted against time, with a legend distinguishing computing, waiting, message-passing system routines, and messages. PVM has a visualization tool called XPVM. Implementations of visualization tools are available for MPI. An example is the Upshot program visualization system.

Evaluating Programs Empirically

Measuring Execution Time

To measure the execution time between point L1 and point L2 in the code, we might have a construction such as

```c
L1: time(&t1);   /* start timer */
      .
      .
L2: time(&t2);   /* stop timer */
    elapsed_time = difftime(t2, t1);  /* elapsed_time = t2 - t1 */
    printf("Elapsed time = %5.2f seconds", elapsed_time);
```

MPI provides the routine `MPI_Wtime()` for returning time (in seconds).
Home Page

http://www.cs.unc.edu/par_prog

Basic Instructions for Compiling/Executing PVM Programs

Preliminaries

- Set up paths
- Create required directory structure
- Modify makefile to match your source file
- Create a file (hostfile) listing machines to be used (optional)

Details described on home page.

Compiling/executing PVM programs

Convenient to have two command line windows.

To start PVM, at one command line: `pvm`, returning a pvm prompt (>).

To compile PVM programs, at another command line in pvm3/src/: `aimk file`

To execute a PVM program, at the same command line in pvm3/bin/?/ (where ? is the name of the OS): `file`

To terminate pvm, at the 1st command line (>): `quit`

Basic Instructions for Compiling/Executing MPI Programs

Preliminaries

- Set up paths
- Create required directory structure
- Create a file (hostfile) listing machines to be used (required)

Details described on home page.

Hostfile

Before starting MPI for the first time, need to create a hostfile. Sample hostfile:

```
ws404
#is-sm1        //Currently not executing, commented out
pvm1           //Active processors, UNCC sun cluster called pvm1 - pvm8
pvm2
pvm3
pvm4
pvm5
pvm6
pvm7
pvm8
```

Compiling/executing (SPMD) MPI program

For LAM MPI version 6.5.2, at a command line:

To start MPI: first time `lamboot -v hostfile`, subsequently `lamboot`

To compile MPI programs: `mpicc -o file file.c` or `mpiCC -o file file.cpp`

To execute MPI program: `mpirun -v -np no_processors file`

To remove processes for reboot: `lamclean -v`

Terminate LAM: `lamhalt`; if that fails: `wipe -v lamhost`

Compiling/Executing Multiple MPI Programs

Create a file specifying the programs. Example: 1 master and 2 slaves; "appfile" contains

```
n0 master
n0-1 slave
```

To execute: `mpirun -v appfile`

Sample output:

```
3292 master running on n0 (o)
3296 slave running on n0 (o)
412 slave running on n1
```
New and Improved Methods for Administering Your Database
December 16, 2004
Howard Horowitz

**Objective**
Expose you to some of the database features available in 10g and compare them to the lengthy workarounds that were used in previous Oracle versions. 10g provides better ways to administer your database: "It turns mountains into mole hills."

**Mountains & Mole Hills**
9i Mountains = tasks that require more effort and resources in 9i than they do in 10g.
10g Mole Hills = the same tasks, accomplished with less effort and fewer resources than in 9i.

**Popular Features**
- Automatic Shared Memory Management (ASMM)
- Data Pump
- SQL Tuning Advisor
- Flashback Database
- RMAN Backupset Compression

**Automatic Shared Memory Management (ASMM)**
8i method for automating SGA management: there is no method.
Workaround: you have to shut down the database and manually change the values. This could be done programmatically with multiple init<SID>.ora files, each containing different values for the SGA parameters, automated via shell scripts and Cron/Autosys.

**Automatic Shared Memory Management (ASMM)**
9i method for automating SGA management: still not doable; however, you can dynamically change many of the values without shutting down the database.
Workaround: you have to use the alter system/session commands and rely on the v$shared_pool_advice and v$db_cache_advice views for proper settings. Manual/programmatic effort is required if the behavior of your database changes and SGA changes are needed, with Cron and Autosys used to automate the process.

**Automatic Shared Memory Management (ASMM)**
10g method for automating SGA management:
`alter system set sga_target='x';`

**Automatic Shared Memory Management (ASMM)**
**sga_target** -- This parameter is new in Oracle Database 10g and reflects the total size of memory an SGA can consume. It covers:
- Shared pool
- Buffer cache
- Java pool
- Large pool

**Automatic Shared Memory Management (ASMM)**
- Automatically adapts to workload changes
- Maximizes memory utilization
- Single parameter makes it easier to use
- Helps eliminate out-of-memory errors
- Can help improve performance

[Diagram: the SGA and PGA pools (buffer cache, large pool, SQL cache, Java pool) adapting between online users and large batch jobs.]

**Automatic Shared Memory Management (ASMM)**
- Requires an SPFILE and SGA_TARGET > 0; cannot exceed sga_max_size.
- Does not apply to the following parameters:
  - Log buffer
  - Other buffer caches (KEEP/RECYCLE, other block sizes)
  - Streams pool (new in Oracle Database 10g)
  - Fixed SGA and other internal allocations
- Can be adjusted via EM or the command line.
- A new background process named Memory Manager (MMAN) manages the automatic shared memory.

Automatic Shared Memory Management DEMO

**Popular Features**
- Automatic Shared Memory Management (ASMM)
- Data Pump
- SQL Tuning Advisor
- Flashback Database
- RMAN Backupset Compression

**Data Pump**
- 8i / 9i method for suspending exports and imports: **N/A**
- 8i / 9i method for restarting failed exports and imports at the point of failure: **N/A**
- 8i / 9i method for controlling the number of threads/processes: **N/A**
- 8i / 9i method for direct mode imports: **N/A**
- 8i / 9i method for monitoring exports and imports: **N/A**
- 8i / 9i method for importing and exporting data via PL/SQL: **N/A**
- 8i / 9i method for exporting/importing pre-defined objects via include or exclude keywords (grants, procedures, functions, tables, etc.); supports like and not like clauses: **N/A**
- 8i / 9i method for remapping tablespaces and datafiles:
**N/A**

**Data Pump**
High performance import and export:
- 60% faster than 9i export (single thread)
- 15x-45x faster than 9i import (single thread)

The reason it is so much faster is that conventional import uses only conventional-mode inserts, whereas Data Pump import uses the direct path method of loading. As with export, the job can be dynamically parallelized for even more improvement; a separate dump file is created for each degree of parallelism.

**Data Pump**
Time is money. Data Pump has cut data movement/processing times down significantly.

**Popular Features**
- Automatic Shared Memory Management (ASMM)
- Data Pump
- SQL Tuning Advisor
- Flashback Database
- RMAN Backupset Compression

**SQL Tuning Advisor**
Most database and application related problems are the result of poorly written SQL and missing or misused indexes. Oracle provides an interface, through OEM or via the PL/SQL packages `dbms_advisor` and `dbms_sqltune`, to analyze existing SQL statements, provide tuning recommendations, and implement those recommendations.
- Navigate to Performance
- Then to Advisor Central
- SQL Tuning Advisor
- Top SQL

**Additional advisors:** redo log size advisor, SQL access advisor, undo advisor, and segment advisor.
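The OEM path above has a command-line equivalent through `dbms_sqltune`. The following is an illustrative sketch only: the task name, SQL text, and time limit are made-up values, not from the presentation.

```
-- Hypothetical example: create, run, and report on a tuning task.
DECLARE
  l_task VARCHAR2(30);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'SELECT * FROM emp WHERE deptno = 10',
              task_name  => 'emp_tune_task',
              time_limit => 60);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'emp_tune_task');
END;
/

-- Review the advisor's recommendations:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('emp_tune_task') FROM dual;
```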
**Popular Features**
- Automatic Shared Memory Management (ASMM)
- Data Pump
- SQL Tuning Advisor
- Flashback Database
- RMAN Backupset Compression

**Flashback Database**
8i / 9i method for point-in-time recovery:
- Shut down the database
- Restore all of the datafiles from the last backup
- Start up the database in mount state
- Recover the database until (SCN or time)
- Apply the necessary redo/archive logs
- Open the database (open resetlogs)

**Flashback Database**
10g method for point-in-time recovery:
- Shut down the database
- Start up the database in mount state
- SQL> `flashback database to timestamp to_timestamp('2004-12-16 16:10:00', 'YYYY-MM-DD HH24:MI:SS');`
- Open the database (open resetlogs)

**Flashback Database**
- New strategy for point-in-time recovery.
- The Flashback Log captures old versions of changed blocks; think of it as a continuous backup.
- Replay the log to restore the database to a point in time; only changed blocks are restored.
- It's fast: it recovers in minutes, not hours. Moreover, this feature removes the need for incomplete recoveries that require physically moving and restoring datafiles.
- It's easy: a single-command restore.
  SQL> flashback database to scn 1329643;

Like a "Rewind" button for the database.

**Flashback Database Restrictions**
- Not used for media failure errors; used for logical/user errors.
- Cannot be used when:
  - The database control file has been restored or re-created.
  - The tablespace in question has been dropped.
  - The datafile that contains the object to be queried has been shrunk.
  - A recovery through the resetlogs command has occurred.
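Flashback Database must be switched on before any of this works; that setup is not shown on the slides, so the following is a hedged sketch in which the recovery-area path, size, and retention value are assumptions to adapt to your environment.

```
-- Configure a flash recovery area and a flashback retention window
-- (value in minutes; 1440 = 24 hours). Path and size are examples.
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/fra' SCOPE=BOTH;
ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;

-- Flashback logging is enabled while the database is mounted:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;   -- ARCHIVELOG mode is a prerequisite
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
```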
**Views for Monitoring**
- V$DATABASE
- V$FLASHBACK_DATABASE_LOG
- V$FLASHBACK_DATABASE_STAT

**Popular Features**
- Automatic Shared Memory Management (ASMM)
- Data Pump
- SQL Tuning Advisor
- Flashback Database
- RMAN Backupset Compression

**RMAN Backupset Compression**
8i / 9i method for compressing backups (an external compression utility):
`gzip *.bak *.arc *.ctl` (etc.)

**RMAN Backupset Compression**
10g method for compressing backups:
- RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
- RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

Prior to Oracle 10g, RMAN reduced the size of backup images by backing up only used blocks. This was great for databases that were over-sized; however, it didn't help large databases with little free space. The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backupsets. The resulting backupsets do not need to be uncompressed during recovery.

**RMAN Backupset Compression**
Pros:
- Backupsets were compressed by 78% when compared to a regular backupset.
Cons:
- Creating compressed backupsets imposes some extra CPU overhead during backup and restore, which can slow the backup process. If you have more than one CPU, you can use increased parallelism to run jobs on multiple CPUs and thus improve performance.

**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- Flashback Drop
- Flashback Table
- Flashback Transaction Query
- Diamonds in the rough

**Rename Tablespace**
8i / 9i method for renaming tablespaces:
- Create a new tablespace with the same size as the original one. (You have to make sure you have enough room on disk to store a duplicate copy.) Depending on available space, this might require additional analysis of the original tablespace to determine whether the new tablespace can be resized/reorged.
- Move objects from the original tablespace to the new one.
(This could take a while, depending on the size of the tablespace.)
- Drop the original tablespace and datafile(s) after the objects are moved to the newly named tablespace.

**Rename Tablespace**
10g method for renaming tablespaces:
SQL> alter tablespace users rename to users3;

**Rename Tablespace**
Oracle allows the renaming of tablespaces in 10g. A simple alter tablespace command is all you need.

```
SQL> alter tablespace users rename to users3;

Tablespace altered.
Elapsed: 00:00:00.05

SQL> alter tablespace users3 rename to users;

Tablespace altered.
Elapsed: 00:00:00.02
```

**Rename Tablespace**
- The rename tablespace feature has lessened the workload for transportable tablespace (TTS) operations: there's no need to delete tablespaces on the target prior to an impdp of the metadata.
- Doesn't support the SYSTEM or SYSAUX tablespaces.
- Supports default, temporary, and undo tablespaces (dynamically changes the spfile).

**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- Flashback Drop
- Flashback Table
- Flashback Transaction Query
- Diamonds in the rough

**Dictionary View Improvements**
8i / 9i method for monitoring blocking locks:
UTLLOCKT.sql (requires catblock.sql to have been run), or a manual join of v$lock and v$session:

```
--Blocking Locks Info
SELECT bs.username "Blocking User", bs.username "DB User",
       ws.username "Waiting User", bs.sid "SID", ws.sid "WSID",
       bs.sql_address "address", bs.sql_hash_value "Sql hash",
       bs.program "Blocking App", ws.program "Waiting App",
       bs.machine "Blocking Machine", ws.machine "Waiting Machine",
       bs.osuser "Blocking OS User", ws.osuser "Waiting OS User",
       bs.serial# "Serial#",
       DECODE(wk.TYPE,
              'MR', 'Media Recovery', 'RT', 'Redo Thread',
              'UN', 'USER Name', 'TX', 'Transaction',
              'TM', 'DML', 'UL', 'PL/SQL USER LOCK',
              'DX', 'Distributed Xaction', 'CF', 'Control FILE',
              'IS', 'Instance State', 'FS', 'FILE SET',
              'IR', 'Instance Recovery', 'ST', 'Disk SPACE Transaction',
              )
FROM v$lock hk, v$session bs,
```

New columns in v$session allow you to easily identify sessions that are
blocking and waiting for other sessions. V$session also contains information from the v$session_wait view, so there is no need to join the two views.

--Display blocked sessions and their blocking session details.
```
SELECT sid, serial#, blocking_session_status, blocking_session
FROM v$session
WHERE blocking_session IS NOT NULL;
```
or
```
SELECT blocking_session_status, blocking_session
FROM v$session
WHERE sid = 444; /* Blocked Session */
```

**Dictionary View Improvements**
8i/9i method for monitoring rollback activity:
Prior to 10g, you had to do the following:
```
SELECT b.sid, a.used_ublk
FROM v$transaction a, v$session b
WHERE a.addr = b.taddr AND b.username = 'user-name';
```
V$transaction.used_ublk (used undo blocks) gives you a count of the number of blocks remaining to be rolled back. Take counts every five minutes to figure out how long it will take to roll back a transaction. This is a manual process, leaving room for error.

**Dictionary View Improvements**
10g method for monitoring rollback activity:
In addition to monitoring how long a SQL operation will take, you can now monitor rollback activity from one dictionary view.
```
SELECT time_remaining
FROM v$session_longops
WHERE sid = 444; /* rollback session */
```
or
```
SELECT l.time_remaining, l.sofar, l.elapsed_seconds
FROM v$session_longops l, v$session s
WHERE l.sid = s.sid
  AND l.serial# = s.serial#
  AND s.module = 'long_proc';
```
SIDE NOTE: You can also use dbms_application_info.set_module and set_client_info to track the progress of a procedure and query the results from v$session_longops. I used to use these packages to monitor procedures via v$session.
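The side note above can be made concrete. The sketch below is illustrative only: the procedure name, module/action strings, and loop bounds are invented, and dbms_application_info.set_session_longops is used to populate v$session_longops.

```
-- Hypothetical instrumentation of a long-running procedure.
CREATE OR REPLACE PROCEDURE long_proc IS
  l_rindex BINARY_INTEGER := DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS_NOHINT;
  l_slno   BINARY_INTEGER;
BEGIN
  -- Visible in v$session.module / v$session.action:
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'long_proc',
                                   action_name => 'processing');
  FOR i IN 1 .. 100 LOOP
    -- ... one unit of work here ...
    -- sofar/totalwork drive time_remaining in v$session_longops:
    DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'long_proc batch',
      sofar     => i,
      totalwork => 100,
      units     => 'batches');
  END LOOP;
  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);
END;
/
```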
**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- **Flush Buffer Cache**
- Automated Statistics Collection
- Flashback Drop
- Flashback Table
- Flashback Transaction Query
- Diamonds in the rough

**Flush Buffer Cache**
8i/9i method for flushing the buffer cache:
Prior to 10g, this wasn't possible without shutting down and restarting the database or using the following undocumented commands:
- SQL> alter session set events = 'immediate trace name flush_cache';
- alter tablespace offline/online, to flush the buffer cache of blocks belonging to that tablespace (as per Tom Kyte's article).

Side note: you were able to flush the shared pool:
SQL> ALTER SYSTEM FLUSH SHARED_POOL;

**Flush Buffer Cache**
10g method for flushing the buffer cache:
10g provides the ability to flush the buffer cache. This isn't suggested for a production environment, but it might be useful for QA/testing. The bigger the cache, the larger the LRU and dirty lists become, which results in longer search times. If the buffer cache is undersized, however, then running the following command can improve performance, take the burden off DBWR, and decrease free buffer waits.

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

Flush Buffer Cache DEMO

**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- **Automated Statistics Collection**
- Flashback Drop
- Flashback Table
- Flashback Transaction Query
- Diamonds in the rough

**Automatic Statistics Collection**
8i/9i method for gathering statistics:
Required a scheduled process that called the DBMS_STATS or DBMS_UTILITY packages. For finer granularity, master-slave scripts may have been created that called dbms_stats/analyze commands based on the percentage of table/index changes.
```
for sid in test1 test2 test3
do
  echo "connecting to $sid"
  sqlplus /nolog << EOF
  SET ECHO ON
  SET SERVEROUT ON SIZE 1000000
  CONNECT system/manager@$sid
  BEGIN
    FOR uname IN (SELECT username FROM dba_users
                  WHERE username NOT IN ('SYS', 'SYSTEM', 'OUTLN', 'DBSNMP'))
    LOOP
      DBMS_OUTPUT.PUT_LINE('Analyzing USER :'||uname.username);
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => uname.username,
        estimate_percent => 25,
        block_sample     => TRUE,
        method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 1',
        degree           => 2,
        granularity      => 'ALL',
        cascade          => TRUE,
        options          => 'GATHER');
    END LOOP;
  END;
  /
  exit
EOF
done
```

**Automatic Statistics Collection**
10g method for gathering statistics:
Database statistics are automatically collected using the `dbms_stats.gather_database_stats_job_proc` procedure.
- By default this job runs within a maintenance window between 10 P.M. and 6 A.M. on weeknights and all day on weekends.
- `DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC` is an internal procedure which gathers statistics for tables with either empty or stale statistics, similar to the `DBMS_STATS.GATHER_DATABASE_STATS` procedure using the GATHER AUTO option. The main difference is that the internal job prioritizes the work so that tables most urgently requiring statistics updates are processed first.

You can also prevent statistics from being overwritten via `dbms_stats.lock_schema_stats` and `unlock_schema_stats`. In 10g you can also restore statistics to any point in time, in case newly collected statistics cause a suboptimal plan to be generated. You can restore statistics for a table, schema, fixed database objects, or the entire database.
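The lock and restore calls mentioned above look roughly like the following. The schema/table names and the timestamp are illustrative assumptions, not values from the presentation.

```
-- Stop the nightly job from overwriting SCOTT's statistics:
EXEC DBMS_STATS.LOCK_SCHEMA_STATS('SCOTT');
-- ...and allow updates again later:
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('SCOTT');

-- Roll one table's statistics back to how they looked a day ago:
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SCOTT',
    tabname         => 'EMP',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/
```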
**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- **Flashback Drop**
- Flashback Table
- Flashback Transaction Query
- Diamonds in the rough

**Flashback Drop**
8i/9i method for undoing a dropped table:
- Table-level import from a logical backup.
- Restore the backup to another location, export the table from the backup database, and import the dropped table from the backup database into the original database.
- Point-in-time recovery to just before the table was dropped.

The first method is the fastest but still requires a fair amount of time, depending on the table size. The second method is an extension of the first, used when a logical backup wasn't taken. The third method requires a database shutdown.

**Flashback Drop**
10g method for undoing a dropped table:
SQL> flashback table emp to before drop;

**Flashback Drop**
- Recycle Bin (sounds familiar): a logical representation of the dropped object. The dropped/renamed table is still occupying space in its original tablespace, and you can still query it after it's dropped.
- You can empty out the recycle bin by purging the objects:
  select object_name, original_name, type from user_recyclebin;
  or show recyclebin;
  purge table mm$$55777$table$1;
- You can permanently drop a table without writing to the recycle bin:
  drop table emp purge;

**Flashback Drop**
Has a few quirks:
- Doesn't restore foreign key constraints or materialized views.
- Restores indexes, constraints, and triggers with their Klingon-like names, e.g. `mm$$55777$table$1` (requires a rename of triggers and constraints).

Flashback Drop DEMO

**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- Flashback Drop
- **Flashback Table**
- Flashback Transaction Query
- Diamonds in the rough

**Flashback Table**
8i method for restoring data:
- Point-in-time recovery from a backupset
- Logical import from an export
- Restore the database to a new location and import, or direct-path insert into the source table via a dblink.
**Flashback Table**
9i method for restoring data:
```
INSERT INTO emp
(SELECT * FROM emp AS OF TIMESTAMP
   TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS')
 MINUS
 SELECT * FROM emp);
```
Side note: this might not work if DDL operations are performed or constraints are altered.

**Flashback Table**
10g method for restoring data:
```
FLASHBACK TABLE emp TO TIMESTAMP
  TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS');
```
Make sure you enable row movement prior to the restore:
SQL> alter table emp enable row movement;
Side note: this might not work if DDL operations are performed or constraints are altered.

**Flashback Table**
The before-image data is stored in your undo tablespace, so make sure you allocate enough space. Also make sure you set a large undo_retention if you decide not to let Oracle automatically manage it; setting undo_retention=0 tells Oracle to manage this parameter automatically. I'm off on a tangent again, but it's a good one: Oracle will prevent you from getting those annoying ORA-01555 "snapshot too old" errors if you include "retention guarantee" in your create or alter tablespace statements. This does come at a cost if you're running a transaction that needs undo space. You can disable it with "alter tablespace <undo_ts> retention noguarantee;".

Flashback Table DEMO

**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- Flashback Drop
- Flashback Table
- **Flashback Transaction Query**
- Diamonds in the rough

**Flashback Transaction Query**
8i/9i method for generating SQL undo statements:
Log Miner (good luck parsing through those logs).

**Flashback Transaction Query**
10g method for generating SQL undo statements:
```
SELECT undo_sql
FROM flashback_transaction_query
WHERE table_owner = 'SCOTT'
  AND table_name = 'EMP'
  AND start_scn BETWEEN 21553 AND 44933;
```
(You can also use a timestamp.)
Flashback Transaction Query provides the ability to generate the SQL statements for undoing DML.
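One hedged caveat, stated as an assumption to verify against your release's documentation rather than something on the slides: fully populating undo_sql generally requires supplemental logging to be enabled.

```
-- Assumed prerequisite for complete undo_sql generation:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```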
**Hidden Gems**
- Renaming Tablespaces
- Dictionary View Improvements
- Flush Buffer Cache
- Automated Statistics Collection
- Flashback Drop
- Flashback Table
- Flashback Transaction Query
- **Diamonds in the rough**

**Diamonds in the rough**
Additional 10g features worth mentioning:
- Drop database command (removes datafiles, control files, archive logs, backups, and spfile):
  RMAN> drop database including backups;
- Freeing up unused space in tables/indexes and adjusting the high-water mark. Also requires "alter table emp enable row movement":
  SQL> alter table emp shrink space compact cascade;
- utl_mail (no need to work with the utl_smtp protocol directly; it's built in)
- utl_compress (compression of binary data: BLOBs and RAW data; similar to gzip)
- Support for regular expressions (Unix-style pattern matching in SQL and PL/SQL)
- Default temporary and default user (permanent) tablespaces

**Putting it all together**
10g has made major strides in preventing DBAs and developers from making *mountains* out of *mole hills*. These gains have led to more efficient code and better database administration. The underlying result is savings in cost and time for your organization.

**References**
Web sites:
- http://www.oracle.com/technology/pub/articles/10gdba/index.html (Oracle Database 10g: Top 20 Features for DBAs)
- http://www.adpgmbh.ch/ora/misc/dynamic_performance_views.html
Books:
- Oracle Database 10g New Features
Presentations:
- Tuning Oracle 10g, Rich Niemiec

**Questions & Answers**
Howard.Horowitz@adeccona.com
hhorow6801@aol.com
Designing a proactive multi-touch display to support professional networking and planning at an interdisciplinary conference

Paul Gestwicki, Computer Science Department, Ball State University, pvgestwicki@bsu.edu
Carrie Arnold, Institute for Digital Intermedia Arts, Ball State University, cmarnold2@bsu.edu
Joshua Gevirtz, Computer Science Department, Ball State University, jgevirtz2@bsu.edu

April 15, 2010
Ball State University Computer Science Department Technical Report 2010–02

1 Introduction

We present herein the design of Confluence, a system that enables conference attendees to explore the conceptual space of a multidisciplinary conference, to make deliberate decisions about the relative values of sessions with respect to personal interests, and to promote impromptu mutual revelation in a fluid social context. It is a novel proactive interface whose main components are a multi-touch tabletop interface, radio frequency identification, and custom information visualization software. The system was designed specifically for the 2009 International Digital Media & Arts Association Conference, which is notable for the breadth of interest of its attendees, spanning art, humanities, technology, social sciences, usability, education, and business.

The mission of the International Digital Media & Arts Association (iDMAa) is to promote the development, application, and understanding of digital media and arts. The organization represents educators, practitioners, scholars, and organizations with interest in digital media and arts. The organization's seventh conference was held in 2009, and it served as a test for Confluence.

Complications of Multidisciplinary Conferences

Conferences serve several important purposes for both academics and practitioners. Conference organizers promote the theme of a conference by gathering speakers, papers, tutorials, panels, and so on. Thus, the organizers seek to advance understanding and the state of the art within their areas of interest.
Individual attendees have the opportunity to disseminate their work, learn new skills, and develop social and professional networks. McCarthy et al. describe this as mutual revelation, "allowing attendees to learn more about others and their work, as well as being open to opportunities to tell others about themselves and their own work."

Effective mutual revelation depends upon attendees' shared interests and similar expertise. An attendee who knows little about the conference domain will have difficulty integrating information shared by other attendees. Normally, this problem is avoided implicitly since, generally, only individuals who are interested in the conference domain would attend a disciplinary conference. However, multidisciplinary and interdisciplinary conferences pose a particular problem: their attendees, by definition, come from a wide variety of backgrounds and represent a broad range of expertise. That is, it is no longer the outliers who are potentially perplexed by conference activities but the majority.

Proactive Interfaces

A proactive interface is one that shows content based on a combination of program logic and the proximity of individuals. This is in contrast to conventional interfaces, which are reactive: they respond to discrete user interactions such as mouse, keyboard, or voice commands. In this project, we use radio-frequency identification (RFID) to detect the approach and departure of conference attendees. The interface changes proactively based on who is nearby, rather than strictly reactively.

We are not the first to see the potential for proactive interfaces to enhance the conference experience. Three prominent examples are AutoSpeakerID, Ticket2Talk, and the SpotMe Conference Navigator.
AutoSpeakerID displays the information of speakers and questioners during sessions, providing people with a quick visual synopsis of who is asking a question, their interests, and their affiliation.3 Ticket2Talk serves as a conversation piece during breaks: it shows pictures of people near the display, along with a profile and interests.4 Both applications encourage social networking, though in different ways. AutoSpeakerID is a direct integration with presentations, while Ticket2Talk provides a more ambient experience. These two have been positively evaluated for their installation contexts, although the developers admit concerns about distraction and intrusiveness.5 The SpotMe Conference Navigator is notable for its applicability to conferences: it runs on a PDA and notifies the user when another attendee is nearby with a similar profile.6

Tagging Interests

Confluence helps attendees decide which sessions to attend by highlighting those that may be of interest, and by this, promotes enhanced mutual revelation. The team was inspired by the use of tags on Web-based social media as a means to this end. A finite set of tags was chosen rather than user-specified tags, since the latter introduces problems that are beyond the scope of this work.7 The set of tags, shown in Figure 1, was developed by analyzing past conference schedules and consulting with members of the community. These interest tags were incorporated into both the conference registration site and the paper submission site. Thus, every paper is tagged with interests that are represented in the work, and every attendee has the opportunity during registration to identify his or her interests.

Multi-touch Interaction on the Microsoft Surface

While the RFID provides a capacity for proactive interfaces, the Microsoft Surface provides multi-touch interaction in a tabletop computing environment.
The tabletop orientation facilitates multiple users, and the recognition of multiple concurrent touches promotes collaborative use. Primary among the design principles for multi-touch tabletop computing is direct manipulation: users interact directly with the interface content rather than through indirect widgets such as menus and text commands. Designing direct manipulation multi-touch interfaces is a difficult design challenge as this technology is still in its infancy. All users in the intended audience are intimately familiar with the WIMP (Windows, Icons, Menus, and Pointers) paradigm, where learned affordances can be exploited to expose novel functionality. However, in multi-touch collaborative environments, practically all users (aside from the occasional researcher who focuses on this technology) are novices or advanced beginners. Therefore, in order to empower novice users while maintaining continuous collaboration in a fluid social environment, the interaction model must leverage clear affordances while supporting scaffolding toward more advanced use cases.

4 McDonald et al., 8–10.
5 McDonald et al., 18–20.
7 Cameron Marlow et al. “HT06, tagging paper, taxonomy, Flickr, academic article, to read.” In HYPERTEXT ’06: Proceedings of the seventeenth conference on Hypertext and hypermedia. (ACM, 2006), 38.

2 Design

The Surface-based interface is divided into three sections, as shown in Figure 2. These three sections are the Navigation Bar, the Workspace, and the Gutter. Figure 3 provides a screenshot of a pre-release build of Confluence, showing a sample interaction with the system.

Navigation Bar

The Navigation Bar displays the conference schedule. The 2009 iDMAa conference is a three-day event, and the three buttons at the top of the navigation bar represent the three days. Pressing one of these buttons selects that day, and the rest of the Navigation Bar is populated with the events from that day.
For example, in Figure 3, the Thursday button is highlighted and so Thursday’s events are shown below it. Animated transitions are used to improve the aesthetics and the utility of the system. Specifically, animated transitions leverage cultural conventions to assist in developing the user’s mental model of time: the days’ schedules fly in from the left or right depending on whether the newly selected day is in the past or the future, in accordance with this culture’s left-to-right reading order.\textsuperscript{11} At any time, the Navigation Bar clearly shows, through color highlighting, which day and which conference event are selected. The name of the currently selected event in the Navigation Bar is also embedded into the Workspace background, so users who are not aligned for convenient reading of the Navigation Bar can also ascertain the current context by inspecting the Workspace background. Thus, the Navigation Bar and the associated Workspace background labels provide the context for the rest of the interactions, stable locations from which new users may acclimate themselves to the more complex Workspace and Gutter interactions.

\textsuperscript{10} Donald A. Norman. “Affordance, conventions, and design.” In interactions. (ACM, 1999), 41.

Figure 2: The three principal regions of the Confluence interface
Figure 3: Screenshot of a pre-release build of Confluence

**Workspace and Cards**

The workspace is the largest region of the interface, and it is where we expect the majority of attention to be focused. The workspace is a “scatter view,” an area in which items are placed that can be moved, scaled, and arranged using multi-touch interactions.
Scatter views have featured prominently in the Microsoft Surface promotional materials and sample applications, and the interaction model is reminiscent of the popular multi-touch iPhone and iPod Touch devices from Apple: one finger will move an item, two fingers will scale an item, and interacting with any item will bring it to the “top.” We have also integrated a common idiom wherein the cards behave as if they have two sides. A quick tap of a card will cause it to “flip over,” revealing complementary information on the other side.

The Keynote, Plenary, and Presentation Cards are similar in design and intent. As representations of conference presentations, one side shows the title, speaker, and abstract. The reverse side shows the title, speaker, and a photograph of the speaker. Also, this side contains a QR-code that links to the speaker’s profile page on the interactive conference Web site. The Session Cards are different: these are designed to assist users in seeing the contents of concurrent sessions at a glance. The Session Cards list the name of the session along with the names and authors of each of the presentations within that session. The reverse side shows a montage of speaker photographs along with the QR-code that encodes the event URL.

As in the Navigation Bar, animated transitions are applied for aesthetic and utilitarian purposes. When a new conference event is selected, those cards in the workspace are animated away from the navigation bar, which is also the dominant reading direction of the workspace background image. The new selections animate to random positions in the workspace from the navigation bar side of the interface, visually indicating that it was the navigation bar that caused the transition.

**Gutter, Attendee Icons, and Interest Lines**

The Gutter runs along the side of the workspace, and it is in this area that Attendee Icons are shown.
These icons represent individuals who are detected to be nearby via the RFID system, and they show the attendee’s name as well as photograph, when available. Note that this means that the Gutter should never be empty when the interface is approached by a properly tagged attendee: rather, the attendee will immediately see a recognizable representation of him- or herself as part of the interface.

Strongly related to the Attendee Icons in the Gutter are the Interest Lines. These lines connect Attendee Icons to those Cards in the Workspace that have corresponding interests. As mentioned above, these interest tags are selected by authors during paper submission and by attendees during registration. The intensity of the line indicates the degree of similarity. This is designed not only to help an individual attendee to easily see the conference events of potential interest, but also as a means for enhancing mutual revelation among concurrent Confluence users. The presence of the interest lines is intended to spark conversation among concurrent users, since each will be able to see how all are connected to the currently selected conference event.

\textsuperscript{11} Norman, 41

Integration with the iDMAa Web Site

Confluence was developed in parallel with an experimental conference Web site that integrates social media with the iDMAa conference, as has been explored in related projects such as Pathable.\textsuperscript{12} The site was developed by the Institute for Intermedia Arts and Animation, and it features a unique page for each conference event.\textsuperscript{13} These pages link to professional and contact information about the speakers, including links to social networking services such as Facebook and Twitter. As mentioned above, each card in the Workspace contains a QR-code that encodes a URL, as shown in Figure 2.
Attendees with appropriate mobile devices, such as the Apple iPhone or Android-enabled phones, will be able to scan the QR code and be taken directly to the corresponding page on the interactive iDMAa conference site.

3 System Architecture

The system was implemented as a WPF-based Surface project in C# using Microsoft Visual Studio Professional, the .NET application framework with Windows Presentation Foundation, and Microsoft Surface SDK 1.0 SP1. The software integrates log4net, a logging library for Microsoft’s .NET application framework.\textsuperscript{14} Log4net is part of the Apache project and is licensed under the Apache License, Version 2.0.\textsuperscript{15} This library is used both for debugging and for recording user interactions.

The system architecture uses a Domain Model to represent conference data, including the conference schedule, attendees, and interests.\textsuperscript{16} This data is processed during system initialization into an object-oriented, in-memory model. This serves as the Model in a reification of the Model-View-Controller architectural pattern.\textsuperscript{17} The Model is closely aligned with the domain, while the View and Controller classes are more closely aligned to the Microsoft Surface and Windows Presentation Foundation libraries and the specific needs of our interface. The non-interactive portions of the system, including the domain model and XML processing subsystems, were developed following principles of test-driven development.\textsuperscript{18} Unit testing was conducted using Moq.\textsuperscript{19}

\textsuperscript{12} Shelly D. Farnham et al. “Leveraging social software for social networking and community development at events.” In Proceedings of the fourth international conference on Communities and technologies. (ACM, 2009), 237–239.
\textsuperscript{17} Fowler, 330–332
\textsuperscript{18} Kent Beck. Test Driven Development: By Example (Addison-Wesley Professional, 2002).
\textsuperscript{19} Moq: The simplest mocking library for .NET 3.5 and Silverlight with deep C# 3.0 integration. http://code.google.com/

Test-driven development and unit testing helped ensure the functionality of these core subsystems as the internal and external data representations were iteratively developed.

**Presence Detection through RFID**

RFID technology is employed in our project in a fashion similar to that of Ticket2Talk and Neighborhood Window.\(^{20}\) We used an ALR-9650 RFID reader and associated tags by Alien Technology. The antenna is placed above the Microsoft Surface unit on which the Confluence software is running. Passive RFID tags are attached to each attendee’s name tag. The RFID reader was connected, through a standard ethernet connection, to the university’s network. Upon initialization, Confluence listens for broadcasts sent by the reader. Once a broadcast is received, the software establishes a connection with the RFID reader and creates a thread that periodically polls it for tag presence information. When this thread detects that an attendee has left or arrived, the information is propagated to the necessary portions of the program. The presence detection subsystem is based on the ProD framework.\(^{21}\) The approach and departure of attendees is detected by the RFID reader. Since there is a finite amount of space for Attendee Icons, a queuing system ensures that attendees are processed in the order they arrive.

**Data Representation**

Confluence loads conference information from a set of configuration files. The two most important are encoded in ConferenceML and PeopleML, XML-based file formats that were designed specifically for this project. ConferenceML is used to represent all of the conference data, including the schedule of events, the interests corresponding to those events, and the speakers at each event. PeopleML represents the conference attendees, including their names, affiliations, and interests.
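The report does not specify the element vocabularies of ConferenceML and PeopleML, so the following is only an illustrative sketch (all element and attribute names are invented, and Python stands in for the project's C#): a conference file whose events carry interest tags, a separate people file whose attendees carry interests, and a loader that turns each into a simple in-memory model.

```python
import xml.etree.ElementTree as ET

# Hypothetical ConferenceML fragment: events carry interest tags.
CONFERENCE_XML = """
<conference name="iDMAa 2009">
  <event id="s1" day="Thursday" title="Opening Keynote">
    <speaker name="A. Speaker"/>
    <interest tag="education"/>
    <interest tag="art"/>
  </event>
</conference>
"""

# Hypothetical PeopleML fragment: attendees carry interests too.
PEOPLE_XML = """
<people>
  <attendee name="B. Attendee">
    <interest tag="education"/>
    <interest tag="business"/>
  </attendee>
</people>
"""

def load_events(xml_text):
    """Parse a ConferenceML-like document into {event_id: info}."""
    root = ET.fromstring(xml_text)
    return {
        e.get("id"): {
            "title": e.get("title"),
            "interests": {i.get("tag") for i in e.findall("interest")},
        }
        for e in root.findall("event")
    }

def load_attendees(xml_text):
    """Parse a PeopleML-like document into {name: interest set}."""
    root = ET.fromstring(xml_text)
    return {
        a.get("name"): {i.get("tag") for i in a.findall("interest")}
        for a in root.findall("attendee")
    }

events = load_events(CONFERENCE_XML)
attendees = load_attendees(PEOPLE_XML)
```

Because both sides use the same interest vocabulary, intersecting an attendee's tag set with an event's tag set is enough to decide whether an Interest Line should connect them.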
These two types of information are represented separately due to the volatile nature of attendee lists. Conference events are generally fixed in advance of the conference, but the list of attendees almost always changes due to on-site registrations and unforeseen impediments to attendance. Representing attendees separately from the conference allows us to easily upload new attendee information to the installation.

The Navigation Bar and all Workspace cards are automatically generated from ConferenceML. This facilitated the rapid prototyping of design ideas, since the initial design began well in advance of the iDMAa 2009 call for papers and therefore well before schedule finalization. Certainly, a graphic artist could likely create a visual representation of a specific schedule that is more visually appealing than a generic, computationally-generated one. However, given the time and human resources constraints of this project, the benefits of automatic generation and the rapid prototyping it facilitated were determined to outweigh the theoretical aesthetic costs.

4 Discussion

Confluence is a hybrid of proactive and reactive interaction, the former through RFID and the latter through the Microsoft Surface. The team followed an iterative design process for both aspects of the system. For the multi-touch interface, physical prototypes were first used to simulate on-screen interactions (Figure 5). This led to the development of electronic prototypes (Figure 6) for both the proactive and reactive subsystems, and these prototypes were used to generate feedback into the iterative design process. For example, the RFID system was simulated in Surface control so that the multi-touch behavior could be tested as if the presence detection system were completely operational. Problems with the reactive system could be fixed while the proactive system was still being developed, and vice versa.
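One way to structure that kind of decoupling (a sketch only, with invented names, not the authors' C# implementation) is to put arrival and departure notification behind a common listener interface, back it with either the real RFID polling thread or a scripted simulator, and let a FIFO queue enforce the finite number of Attendee Icon slots in the Gutter:

```python
from collections import deque

class PresenceListener:
    """Receives arrival/departure events from any presence provider,
    real or simulated, and manages the limited Gutter slots."""
    def __init__(self, gutter_slots=4):
        self.slots = gutter_slots
        self.visible = deque()   # attendees currently shown in the Gutter
        self.waiting = deque()   # overflow queue, FIFO order of arrival

    def on_arrive(self, tag_id):
        if len(self.visible) < self.slots:
            self.visible.append(tag_id)
        else:
            self.waiting.append(tag_id)

    def on_depart(self, tag_id):
        if tag_id in self.visible:
            self.visible.remove(tag_id)
            if self.waiting:  # promote the longest-waiting attendee
                self.visible.append(self.waiting.popleft())
        elif tag_id in self.waiting:
            self.waiting.remove(tag_id)

class SimulatedPresenceProvider:
    """Replays a scripted sequence of (event, tag) pairs, standing in
    for the thread that polls the real RFID reader."""
    def __init__(self, script):
        self.script = script

    def run(self, listener):
        for event, tag in self.script:
            if event == "arrive":
                listener.on_arrive(tag)
            else:
                listener.on_depart(tag)

listener = PresenceListener(gutter_slots=2)
SimulatedPresenceProvider([
    ("arrive", "t1"), ("arrive", "t2"), ("arrive", "t3"),
    ("depart", "t1"),
]).run(listener)
# t3 waited until t1 departed, then took the freed slot.
```

Because the listener never knows which provider is driving it, the multi-touch interface can be exercised end-to-end before the RFID hardware is available, which is exactly the property the prototyping process exploited.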
We found that a direct manipulation, multi-touch interaction paradigm lends itself naturally to physical prototyping. Physical prototyping not only helped us simulate possible interactions, but it allowed us to see how quickly our objects would clutter up the workspace.

\(^{20}\) McDonald et al., 8–13
\(^{21}\) Congleton et al., 223–226

Figure 5: Paper prototyping on the Microsoft Surface
Figure 6: Electronic prototyping on the Microsoft Surface

**Early Designs**

The design and development of this project began in Summer 2009 following discussions with the Institute for Digital Intermedia Arts at Ball State University and representatives of the International Digital Media & Arts Association. The team spent most of the summer acclimating themselves to the technology and the philosophy of hybrid proactive/reactive interface development. This led to the design of a system that encourages social networking through an interactive “interest graph,” similar in some ways to Neighborhood Window.\(^\text{22}\) This system used the GraphViz library for a force-directed, spring model for automatic graph drawing.\(^\text{23}\) However, there were two major complications with this design. First, the automatic layout of items conflicted with the direct manipulation that is both natural on a tabletop computing device and recommended by the Surface design guidelines.\(^\text{24}\) Second, we found that the working area of the Microsoft Surface was too limited to display this information adequately, according to our desired visual designs. In retrospect, these conclusions would have arisen more quickly had the team incorporated physical prototyping more rigorously; while the specific software developed in this first iteration was abandoned, the lessons allowed us to make much faster progress on the design of what became Confluence.
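The spring-model layout that the abandoned first prototype relied on can be sketched without reproducing GraphViz itself. In the classic Fruchterman–Reingold-style scheme (shown below in illustrative Python, not the project's actual GraphViz/C# code), every pair of nodes repels while edges attract, so connected nodes settle near one another; this constant self-repositioning is precisely what conflicts with users directly dragging items on a tabletop.

```python
import math, random

def spring_layout(nodes, edges, iters=200, k=1.0, step=0.05):
    """Force-directed layout: pairwise repulsion k^2/d, edge
    attraction d^2/k, positions nudged by `step` each iteration."""
    random.seed(0)
    pos = {n: (random.random(), random.random()) for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                       # all pairs repel
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                force[a][0] += f * dx / d
                force[a][1] += f * dy / d
        for a, b in edges:                    # edges attract
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            force[a][0] -= f * dx / d
            force[a][1] -= f * dy / d
            force[b][0] += f * dx / d
            force[b][1] += f * dy / d
        for n in nodes:
            pos[n] = (pos[n][0] + step * force[n][0],
                      pos[n][1] + step * force[n][1])
    return pos

# "a" and "b" are linked, "c" is not: the layout pulls the linked
# pair together while pushing the isolated node away.
pos = spring_layout(["a", "b", "c"], [("a", "b")])
```

The moment a user drags a card, such a layout either fights the drag or must be suspended, which is why the team moved to the freely arrangeable scatter view instead.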
**Complications of Website Integration**

As mentioned above, Confluence integrates with an interactive conference Web site through embedded QR codes. The site allows attendees to share professional contact information and also to participate in Web-based discussion about the conference and associated projects. Early designs of Confluence included similar functionality, such as the ability to drag representations of individuals together to share contact information. However, this design introduced two significant problems. First, and more importantly, it is not possible within our design to authenticate that a user is a specific conference attendee, and so it would be possible for third parties to create social media connections between two other people. The presence detection system would have ensured that at least the attendees’ badges would be nearby to initiate such a connection, but this design clearly introduces identification uncertainty. The second problem is that the semantics of combining individuals were determined not to be immediately clear. Hence, the interaction would require a notification and verification. Traditional dialog box confirmations would be out of place in a collaborative multitouch interaction space, and so the visual design would need to be augmented to include specific references to social network interactions. Such a design was outside of the scope of the project, although it could be considered for future work.

5 Evaluation Plan

User interface evaluation is a necessary component of human-computer interaction research as well as the practical design process. For this project, we have incorporated three forms of evaluation: pre-conference design evaluation, post-conference quantitative usage evaluation, and post-conference subjective evaluation. The pre-conference design evaluation involved input from several user interface experts as well as prospective conference attendees.
Interface experts’ suggestions were incorporated into the visual design and interaction designs where possible, although several could not be incorporated before the conference. Because the system is designed specifically for use at a conference, it was impossible to test ahead of time in a realistic venue. Instead, the team solicited input from potential conference attendees — those with an interest in digital media and arts — to determine if the interface would be usable by the intended audience. Several modifications were made to the original design based on feedback from experts and users, as well as by narrowing of the project focus and goals.

\(^{22}\) McDonald et al., 10–13

During the iDMAa conference, all interactions with the system will be logged. It is not possible to ascertain the originator of any interactions, and so interactions will be anonymous. The logs will indicate which RFID tags are nearby and from this we can determine who was in the vicinity during interactions, but it is not possible to know who is actively using the system or their intentions. The logs themselves are text files that are designed to present as much information as possible while still being easily parsed for post-conference analysis. Logs are also used for system debugging. The salient events that are logged for usage evaluation include: approach of an attendee; departure of an attendee; display of an Attendee Icon; removal of an Attendee Icon; selection of a day from the navigation bar; selection of a conference event (keynote, session, concurrent session, or plenary); and direct manipulation of an event’s virtual card.

The subjective evaluation will be conducted via an on-line survey. A link to this survey will be emailed to conference attendees shortly after the conference concludes. Attendees who used Confluence will be asked a series of questions regarding their experience with the visualization system.
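The report describes the logs only as easily parsed text files; the line layout below is therefore invented for illustration (the real log4net layout is not specified), but it shows how the salient events listed above could be tallied for the quantitative usage evaluation:

```python
from collections import Counter

# Hypothetical log lines: timestamp, event type, key=value detail.
# The event names and format are invented for this sketch.
SAMPLE_LOG = """\
2009-11-05 10:02:13 ATTENDEE_APPROACH tag=041A
2009-11-05 10:02:14 ICON_SHOWN tag=041A
2009-11-05 10:03:02 DAY_SELECTED day=Thursday
2009-11-05 10:03:40 EVENT_SELECTED event=s1
2009-11-05 10:04:01 CARD_MANIPULATED event=s1
2009-11-05 10:09:55 ATTENDEE_DEPART tag=041A
"""

def count_events(log_text):
    """Tally each salient event type for post-conference analysis."""
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) >= 3:
            counts[parts[2]] += 1
    return counts

counts = count_events(SAMPLE_LOG)
```

A fixed, whitespace-delimited format like this keeps the log human-readable for debugging while remaining trivially machine-parseable, which matches the dual purpose the evaluation plan assigns to the logs.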
Responses will be gathered on a five-point Likert scale for ease of encoding. Any additional comments will be analyzed using qualitative coding techniques.

6 Conclusions and Future Work

Confluence integrates proactive and reactive interfaces by combining RFID presence processing with a multi-touch tabletop interface. This design is intended to improve the mutual revelation and professional networking that takes place at conferences, which can be challenging at multidisciplinary conferences such as iDMAa. Following the best practices of proactive interfaces facilitated the design and development of this system. The ProD framework also provided a useful reference model for data flow in a presence detection and processing system. The evaluation plan is designed to provide insight into the utility of the current system as well as direction for future development.

Several features have been explored and prototyped during the iterative design process, and many of these have not been incorporated into the iDMAa experience. Particular features that are expected to further enhance the goals of the system include integration with social networking sites such as Facebook and LinkedIn as well as real-time editing of interests and on-line profiles, akin to the approach taken by Pathable. Authentication and security are two of the primary challenges of including these into a system such as ours. We have also explored the potential for integration of machine learning and data mining algorithms. These could be used, for example, to incorporate explicit event suggestions to users based on their interests, an enhancement of the current ambient recommendation system that shows connectivity between Attendee Icons and Event Cards.

Acknowledgements

This work was supported by the Institute for Digital Intermedia Art, Emerging Media Initiative, and Department of Computer Science at Ball State University as well as the International Digital Media & Arts Association.
We are especially grateful for the assistance and support of John Fillwalk, Jesse Allison, Ina-Marie Henning, Jessica Gestwicki, Erin Moore, and Dave Ferguson.

25 McDonald et al., 27–28
26 Congleton et al., 223–226
27 Farnham et al., 237–239

References

Norman, Donald A. “Affordance, conventions, and design.” interactions 6, 3: (1999) 38–43.
NoXperanto: Crowdsourced Polyglot Persistence

Antonio Maccioni, Orlando Cassano, Yongming Luo, Juan Castrejón, and Genoveva Vargas-Solar

HAL Id: hal-01584798 https://hal.archives-ouvertes.fr/hal-01584798 Submitted on 11 Sep 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract—This paper proposes NoXperanto, a novel crowdsourcing approach to address querying over data collections managed by polyglot persistence settings. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e. a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. NoXperanto exploits the results of “meaningful queries” in order to facilitate the forthcoming query answering processes. In particular, query results are used to: (i) help non-expert users in using the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources, but also relies heavily on network bandwidth. NoXperanto employs a layer to keep track of the information produced by the crowd, modeled as a Property Graph and managed in a Graph Database Management System (GDBMS).
Index Terms—Polyglot persistence, crowdsourcing, multi-databases, big data, property graph, graph databases.

I. INTRODUCTION

Big datasets are not uniform collections of homogeneous data and, for this reason, they are not logically stored in one single database. In addition, the size of big datasets is such that it is not possible to adopt data cleaning, dataset preparation and integration using classic methods. Otherwise, data would never be ready to be analyzed and exploited in a reasonable time. Often, these datasets are sharded across distributed persistence supports that adopt different access and management policies and different degrees of availability, fault tolerance and consistency. Polyglot Persistence [1] is a recent term that indicates the combination of approaches that cope with a variety of persistence systems and that are used for “integrating” heterogeneous datasets. As part of the emerging polyglot persistence movement [1], the simultaneous use of multiple SQL, NoSQL and NewSQL data stores is gradually becoming a common practice in modern application development [2], [3]. These systems enable the integration of heterogeneous data stores for managing big datasets in a scalable and loosely coupled way. This approach seems to be a pertinent strategy to deal with big dataset integration. Nonetheless, the combination of these heterogeneous data stores, flexible schemas and non-standard APIs represents an added complexity for application developers. For instance, considering that data are spread across multiple data stores, each of which possibly relies on distinct data models (graph, document, etc.), developers must be familiar with a high number of implementation details in order to effectively work with and maintain the overall data entities. Providing an integrated view of the underlying data collections to enable querying is still an open issue, often solved at the application code level.
Given that there is no uniformization of schemas and there is a lack of meta-information available about the data (i.e. mappings, meta-data, semantic equivalences, data similarities), query results are not integrated collections of data; often, they are bags of non-related data collections. It follows that the quality of querying in such polyglot systems is poor.

Contribution. This paper proposes NoXperanto, a novel crowdsourcing approach to address querying over data collections managed by polyglot persistence settings. We avoid the pursuit of uniformity; rather, we preserve the variety of the original systems and languages. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e. a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. Crowds have been shown to be effective in solving queries over a single database [4], but they have never been used to query a multi-database system, where the required expertise (about schemas, models, data formats and instances of the database) cannot be superficial. In fact, from the structure of the queries, we can infer who is an expert. NoXperanto exploits the results of such “meaningful queries” in order to facilitate the forthcoming query answering processes. In particular, the results of queries are used to: (i) help non-expert users in using the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources, but also relies heavily on network bandwidth.

1 The modality to fulfil a target by relying on the contributions coming from a large group of people.

NoXperanto employs a layer to keep track of the information produced by the crowd. This is modeled as a Property Graph and thus is persisted by a Graph Database Management System (GDBMS).

Outline.
The remainder of the paper is organized as follows. Section II discusses related work, while Section III introduces the concepts that we need throughout the paper. Section IV explains our approach in detail. Section V describes implementation issues. Section VI concludes the paper with future perspectives of research.

II. RELATED WORK

Most information systems are supported by distributed and heterogeneous data sources: multi-databases, data warehouses and Web portals. Such systems are, in fact, mediation infrastructures that emerged more than 20 years ago to enable transparent access to multiple data sources through querying, analysis, navigation and management facilities. With the advent of NoSQL stores and polyglot persistence approaches, data integration problems re-emerge with some specifics: the high heterogeneity of data models (e.g., the data models underlying NoSQL stores are not standardized), the absence of semantic descriptions and the volume of data to integrate, just to name a few. Transformation and other well-known techniques to bridge different databases seem hard to employ at large scale [5]; they would introduce data duplication, which is unfeasible when the data size is huge. In order to deal with heterogeneity, the majority of information systems define an integrated schema. However, the integration process, generally controlled by the administrator of the system, is hardly suited to managing data in highly distributed and evolving environments such as the Web. Tightly coupling multiple sources that store huge datasets by means of semantic information (e.g., global schemas, mappings, rewriting rules) can be restrictive and expensive for applications that require simplicity, performance and large-scale dynamicity. Advances in polyglot persistence querying address the integration of heterogeneous data by providing pivot querying languages like UnQL, which offer operators for integrating graph, document, key-value and relational data.
There exist equivalent works that consider a unified language, proposing SQL extensions [6], [7], [8], [9], [10], or that adopt a metamodel approach such as Apache's MetaModel project. To overcome the existing problems of data integration, we propose a solution that exploits a crowd of experts in a transparent way. Experts are all the users who are able to submit meaningful queries. We do not assume a unified query language; rather, we follow the idea of the polyglot persistence movement and support multiple languages. The idea of using crowdsourcing for answering queries over a database has been covered in previous works [7], [8], [11], [9], [4], [10], [12]. In particular, CrowdDB [4] integrates human inputs within a query plan that contains operations that usual database systems cannot always answer correctly (e.g., matching, ranking, etc.). RandomDB [12] answers non-deterministic queries using social network crowds with the aim of replacing existing random number generators.

III. QUERYING A POLYGLOT MULTI-STORE

This section introduces the basic concepts that are needed to present our approach in the next section.

Property Graph. Graph data models are able to generalize other data models:
- **relational data**: both the schema and the instance of relational data can be modeled with graphs [13];
- **XML data**: an XML document is modeled as a tree, and a tree is an acyclic, undirected graph;
- **document stores**: each document consists of nested key-value pairs (therefore a tree), so a document store can be modeled as a set of trees;
- **key-value stores**: they correspond to a set of nodes of the property graph.

NoXperanto neither generalizes nor integrates other models; rather, it adopts a graph data model in order to more easily link the parts of entities that are spread across different databases. In particular, we use the property graph model [14].
Briefly, a property graph is a multi-graph (a graph where two nodes can be connected by more than one edge) where every node and every edge has an associated set of properties. A property is a key-value pair, denoted in the format \(<key, value>\). An instance of a property graph is reported in Figure 1. It represents people (the nodes) and their relationships. For example, the node representing a person whose name is Genoveva supervises the node representing the person whose name is Juan. In this case, name : Genoveva is a property where name is the key and Genoveva is the value. Though the property graph is directed, we simplify it by using the undirected version. The property graph is the de-facto standard model adopted by GDBMSs (e.g., Neo4J, OrientDB, etc.). Section V gives more details about it, since we make use of a GDBMS.

Running Example. To facilitate the understanding of the approach, let us consider the following running example. We have an infrastructure of databases, called movieBlog, behind a big blog website about movies. Information about entities such as movies is split across different databases. The infrastructure consists of a multi-database comprising a document store containing blog entries, a relational database storing information about actors, and a graph database keeping track of the movies' casts. Figure 2 depicts this scenario; the definitions of the databases of our running example are in Figure 3.

**Fig. 2.** The movie blog environment.

The example defines three databases of different kinds: a relational database (the table actors), a graph database (the graph movies) and a document store (the document collection blogEntries). The keys of these databases are the attributes Actor, id and _id, respectively.
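A property graph like the one used for the crowd layer can be mirrored in a few lines of code. The following is a minimal sketch, not NoXperanto's actual data layer; the class and field names are our own illustrative choices.

```python
# Minimal sketch of a property graph: a multi-graph whose nodes and edges
# each carry a set of <key, value> properties.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node id -> {key: value}
        self.edges = []   # (src, dst, {key: value}); duplicates allowed (multi-graph)

    def add_node(self, nid, **props):
        self.nodes[nid] = dict(props)

    def add_edge(self, src, dst, **props):
        self.edges.append((src, dst, dict(props)))

# The Figure 1 example: Genoveva supervises Juan.
g = PropertyGraph()
g.add_node(1, name="Genoveva")
g.add_node(2, name="Juan")
g.add_edge(1, 2, label="supervises")
```

Because the edge list admits repeated (src, dst) pairs, two nodes can be connected by more than one edge, as the multi-graph definition requires.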
**Polyglot Environment.** Databases contain entities (e.g., a tuple in a relational database or a document in a document store). Every entity can be referred to with a key (e.g., a primary key in a relational database or a document id in a document store). Queries are expressions that aim at retrieving: (i) an entity, (ii) a part of an entity or (iii) a combination of entities. Without referring to a particular language or algebra, we can refer to these tasks with the general operators of selection (σ), projection (π) and join (⋈), respectively. A σ query over a polyglot environment still involves separate answering processes, where database instances are independent from each other. In this case, the query can be dispatched to all the underlying databases, after a possible syntactic translation of the query expression. A π is straightforward since it operates locally by extracting a sub-part of a result. While the semantics of σ and π in a polyglot environment are well-defined, the semantics of ⋈ needs to be clarified: if we join heterogeneous databases we can hardly rely upon a single algebra. We define such semantics as follows. Let us suppose we have two entities $e_i$ and $e_j$ belonging, respectively, to two heterogeneous databases. Among their attributes, the two entities have attributes $a_i$ and $a_j$, respectively. A join $e_i \bowtie e_j$ denotes the union of the information content of $e_i$ and the content of $e_j$ if the predicate $e_i.a_i = e_j.a_j$ is satisfied. Clearly, we can implement a join operator in a polyglot environment at the code level by exploiting existing join algorithms (e.g., a nested-loop style join). In this context, such operations are very expensive: we cannot rely upon the optimizations of the DBMSs but are compelled to load all the data in memory and, sometimes, to transfer it over a network.
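The code-level join just described can be sketched as a nested loop over two entity collections already fetched from different stores. The entities and attribute names below are illustrative, taken from the running example.

```python
def polyglot_join(entities_i, entities_j, a_i, a_j):
    """Application-level nested-loop join: union the information content of
    two entities whenever the predicate e_i.a_i == e_j.a_j holds.
    Both collections must first be loaded in memory, which is exactly what
    makes this operation expensive in a polyglot environment."""
    results = []
    for e_i in entities_i:
        for e_j in entities_j:
            if e_i.get(a_i) == e_j.get(a_j):
                results.append({**e_i, **e_j})  # union of the two entities
    return results

# Illustrative entities from two different stores:
actors = [{"Actor": "Keanu Reeves", "Nationality": "Canada"}]
movies = [{"id": "Keanu Reeves", "title": "The Matrix"}]
joined = polyglot_join(actors, movies, "Actor", "id")
```

The quadratic loop and the in-memory dictionaries make the cost argument concrete: nothing here benefits from the indexes or optimizers of the underlying DBMSs.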
The next section explains how NoXperanto answers queries using a crowd of experts; in particular, it focuses on how the join operator can be computed efficiently.

**IV. NOXPERANTO**

This section explains our approach in detail. We first give an overview of the overall approach; we then conclude the section by describing some use cases that point out the advantages of the system.

A. Crowdsourcing Approach

In NoXperanto we aim at solving complex queries over a system containing several heterogeneous databases. To answer such queries we have to keep track of the relationships among entities stored in different databases. We employ two ways to indicate these relationships: one explicit and one implicit.

**Explicit Working Mode.** In the explicit mode, the user can define how two classes of entities in different databases are related. For example, in Figure 4 we define that an entity of the database blogEntries is the same as an entity in the database movies if the value of blogEntries.movie is equal to the value of movies.title. In this case we can exploit several techniques (e.g., ontology matching, schema matching, string matching, etc.) to find the instances of such definitions. These techniques work at the schema level and are very expensive to perform at run-time. We could mitigate this complexity with a hybrid approach combining the explicit mode with the implicit mode explained next, but this lies outside the scope of this paper. In many cases, however, the administrator does not explicitly specify such definitions, so an automatic discovery of the relationships can be very useful. NoXperanto provides such a mechanism through the implicit modality.

**Implicit Working Mode.** The implicit working mode is driven by a crowdsourcing approach. Crowdsourcing is employed in different contexts to solve difficult problems and to build up knowledge bases by relying on the effort of many people.
Usually, a problem is split into many sub-problems that are solved through so-called microtasks. A microtask is a task that is complicated for a computer but easy for a person. The final problem is solved by combining all the microtasks that people in the crowd have processed. Often, these people are unaware of the overall problem or do not even know that they are processing a microtask. The implicit working mode expects the system itself to recognize the relationships. In this case, the system uses the knowledge of a crowd of experts, which emerges when they submit complex queries. If those queries are also meaningful, in the sense that they produce a non-empty set of results, we persist the relationships between entities of the two (or more) databases in our property graph layer. More in detail, we extract the predicates of the join conditions from such queries to define crowd links. For example, let us imagine an expert submitting $\pi_{id,\,Nationality}(actors \bowtie_{Actor\,=\,id} movies)$. In a SQL-like language, the query looks like the following:

```sql
SELECT id, Nationality
FROM actors, movies
WHERE actors.Actor = movies.id
```

The result of the query is $\{(\text{Keanu Reeves, Canada}), (\text{Carrie-Anne Moss, Canada})\}$; it is a non-empty result set, so the query is meaningful. In the answering process we have identified some relationships between entities in movies and entities in actors. For each of them, NoXperanto persists a crowd link in the property graph layer, as in Figure 5.

**B. The Approach at Work**

In this section we describe a sequence of four use cases to show the real benefits of a crowdsourcing approach in this context. The cases alternate queries from non-expert and expert users. For the sake of simplicity we use a SQL-like syntax; of course, it only conveys the semantics of the queries and does not refer to any relational database.
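The persistence step just described — extract the join predicate, check that the result set is non-empty, then store one crowd link per result — can be sketched as follows. The regex-based parser and the link record format are illustrative assumptions, not NoXperanto's actual implementation.

```python
import re

def extract_join_predicate(sql):
    """Pull the equality predicate out of a WHERE clause, e.g.
    'WHERE actors.Actor = movies.id' -> (('actors', 'Actor'), ('movies', 'id'))."""
    m = re.search(r"WHERE\s+(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)", sql, re.IGNORECASE)
    if not m:
        return None
    return ((m.group(1), m.group(2)), (m.group(3), m.group(4)))

def persist_crowd_links(predicate, results, graph_layer):
    """If the query was meaningful (non-empty result set), persist one
    crowd link per result pair in the property graph layer."""
    if predicate and results:
        for row in results:
            graph_layer.append({"link": "crowd", "predicate": predicate, "pair": row})

query = "SELECT id, Nationality FROM actors, movies WHERE actors.Actor = movies.id"
layer = []  # stand-in for the property graph layer
persist_crowd_links(extract_join_predicate(query),
                    [("Keanu Reeves", "Canada"), ("Carrie-Anne Moss", "Canada")],
                    layer)
```

An empty result set leaves the layer untouched, which is exactly the "meaningful query" filter: only queries that produce results contribute crowd links.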
We refer to the multi-database managed by NoXperanto as movieBlog; in this way, we query our datasets in a transparent way. The four cases are defined as follows:

- **Use case 1**: a non-expert user asks for all information about "The Matrix". It is a simple keyword-search-like query (either submitted by a human being or through programmatic access), so we cannot assume that the user is an expert.

```sql
SELECT *
FROM movieBlog
WHERE movieBlog.title == 'The Matrix'
```

The result of the query is $\{\text{year=1999}\}$, since we have found an attribute title within the databases in movieBlog and an entity where the value of that attribute is "The Matrix".

- **Use case 2**: a domain expert performs the following query:

```sql
SELECT *
FROM movies JOIN blogEntries
ON blogEntries.movie == movies.title
```

Note that the expert generally refers not to movieBlog but to a precise database. The result of the query is:

```
{year=1999,
 content="A computer hacker learns from mysterious rebels about the true nature...",
 author="jc Castrejon"}
```

We can say that there is a relationship between blogEntries.movie and movies.title. Since a non-expert user could not write such a join with meaningful results, we determine that this user is an expert; consequently, she can be included in our crowd. In fact, the system stores a crowd link for each of the results, as in Figure 6: we have bridged the document store with the graph database at runtime.

- **Use case 3**: a non-expert user submits the query of Use case 1, but this time the result is different: we provide information to the user that we were not able to retrieve before, by answering a join-free query. This is due to the information within the layer, which allows us to bridge sharded parts of the same entity.

- **Use case 4**: another expert user submits the query of Use case 2. The scenario is similar to Use case 2, but the query answering is much more efficient.
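The answering strategy of use case 4 can be sketched as a direct lookup in the persisted crowd links; the link record format and the data below are hypothetical.

```python
def answer_with_crowd_links(predicate, crowd_links):
    """Answer a join query directly from previously persisted crowd links,
    returning the stored entity pairs instead of recomputing the join."""
    return [link["pair"] for link in crowd_links if link["predicate"] == predicate]

# Links persisted after use case 2 (hypothetical ids):
links = [
    {"predicate": ("blogEntries.movie", "movies.title"),
     "pair": ("blog-entry-42", "The Matrix")},
]
pairs = answer_with_crowd_links(("blogEntries.movie", "movies.title"), links)
```

The lookup is linear in the number of stored links, avoiding both the quadratic nested-loop join and the network transfer of the underlying collections.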
We do not have to perform an expensive join operation: we can directly exploit the presence of the crowd links to determine the pairs of entities that form the final results.

V. IMPLEMENTATION ISSUES

This section provides an overview of the implementation concerns involved in developing the NoXperanto approach, based on the requirements outlined in the previous sections.

**Data Layer Issue.** We manage the heterogeneity of multiple data stores by relying on a data layer based on the property graph data model (see Section III). We implemented this layer on an emerging GDBMS, namely Neo4J. It provides a REST interface, which facilitates the interaction with the applications running in the polyglot environment. Moreover, this interface would provide operations to manage data entities and the links between them, for example specific syntactic sugar to specify when to consider the crowd links in query answering.

**Language Issue.** Our approach does not adopt a unified query language for polyglot persistence applications, but rather relies on the existing language support provided by scalable data stores. Thus, we propose to provide extensions for each of these languages, based on the general query semantics described in Section III. In particular, these extensions rely on the REST interface of the property graph layer to manage the link and join operations described in Section IV. To implement these language extensions we intend to follow a model-driven approach, as proposed in our previous work. In particular, the language definitions would be implemented using the Xtext framework [15], while the implementation of the link and join operations would be conducted with the Acceleo project, relying on text templates that refer to the graph model's REST interface. The native query mechanisms of each of the supported data stores would be used to retrieve the data in each of the systems.
**Consistency Issue.** Our approach assumes an eventual consistency model [16] in the persistence of the crowd links among the distributed entities. As a consequence, even when a link between data entities has been identified, users executing the same query may not always receive the same result. Moreover, each of the data stores in the environment has its own consistency semantics. To handle this heterogeneity we propose to extend our language support with operators that allow the user to specify the level of consistency expected from each of the supported data stores. We intend to implement this functionality based on the operators that a few systems (e.g., Riak, a key-value data store) already provide to trade availability for consistency on a per-request basis.

**Crowd Management Issue.** We have developed a small utility to manage the crowd of experts. It consists, basically, of a query parser and a small interface where the administrator can check the current state of the crowd links.

VI. CONCLUSIONS AND PERSPECTIVES

In this paper we have presented NoXperanto, an approach to solving queries over a heterogeneous environment of databases using the knowledge of a crowd of experts. This knowledge is extracted from the results of queries. NoXperanto avoids expensive pre-processing; as a result, it is able to scale with respect to the number of databases within the environment. Our future work will be devoted to completing the system, implementing an interface for setting up the multi-database environment and allowing users to specify whether or not the system has to use the crowd links. The approach opens several research directions. In particular, we will investigate other scenarios where the results of a query can be exploited to facilitate forthcoming query answering.

ACKNOWLEDGMENTS

The ideas in this paper were developed during the EDBT Summer School 2013 in Aussois (France).
The authors of this paper are grateful to the EDBT association for the organization of the Summer School and to all the speakers for their helpful suggestions. REFERENCES
Towards a Formal Model for Composable Container Systems

Fabio Burco (University of Udine, Udine, Italy), Marino Miculan (University of Udine, Udine, Italy, marino.miculan@uniud.it), Marco Peressotti (University of Southern Denmark, Odense, Denmark, peressotti@imada.sdu.dk)

ABSTRACT

In modern cloud-based architectures, containers play a central role: they provide powerful isolation mechanisms such that developers can focus on the logic and dependencies of applications while system administrators can focus on deployment and management issues. In this work, we propose a formal model for container-based systems, using the framework of Bigraphical Reactive Systems (BRSs). We first introduce local directed bigraphs, a graph-based formalism which allows us to deal with localized resources. Then, we define a signature for modelling containers and provide some examples of bigraphs modelling containers. These graphs can be analysed and manipulated using techniques from graph theory: properties about containers can be formalized as properties of the corresponding bigraphic representations. Moreover, it turns out that the composition of containers as performed by, e.g., docker-compose corresponds precisely to the composition of the corresponding bigraphs inside an "environment bigraph", which in turn is obtained directly from the YAML file used to define the composition of containers.

CCS CONCEPTS

• Computer systems organization → Cloud computing; • Software and its engineering → Architecture description languages; System modeling languages.

1 INTRODUCTION

Nowadays, containers are increasingly adopted in the design and implementation of cloud-based services. On one hand, containers' isolation mechanisms support a clear separation of tasks, so that developers can focus on the logic and dependencies of applications, while system administrators can focus on deployment and management issues.
With a container image, a service can be run on premises as well as on every major cloud provider (such as AWS and Azure). On the other hand, these isolation mechanisms allow cloud providers to split and dynamically share computing resources according to a specified multi-tenant model. The flexibility and openness of containerized architectures support easier integration, scalability, dynamic deployment and reconfiguration. However, collecting, connecting and coordinating (orchestrating) containers into a complete working system is not an easy task. Some of the duties of the administrator are: provisioning and deployment of containers; providing each container the resources it needs (e.g., volumes for file storage); establishing networks between containers; exposing services running in a container to the outside world; remapping port addresses in order to avoid conflicts; etc. In order to simplify this task, the administrator is provided with tools and utilities supporting the deployment and management of containers; some examples are Docker's docker-compose and Kubernetes' kubectl. Still, these tools are complex and error-prone: in their configuration files (for example, docker-compose.yml) it is easy to overlook a required resource, mismatch service names, or misconfigure a network connection, hindering communications between containers or (even worse) allowing unexpected and possibly dangerous communications. These problems may be caught at the system's start-up (e.g., a missing image), but may also arise suddenly at any time during the system's lifetime, causing service malfunctions and interruptions. In fact, at the moment a system's configuration is defined by trial and error: a draft configuration is tested and, if errors occur, some corrections are made; this operation is repeated several times before the deployment becomes operational, with no guarantee against further misbehaviours.
This situation would benefit from formal models of container-based architectures. These models should abstract from application-level details, yet be detailed enough to capture the aspects concerning the composition and connection of containers. They should allow us to express formally important properties of these systems, and they should support tools for the analysis, verification and manipulation of system architectures and configurations. In this work, we propose such a model, within the framework of Bigraphical Reactive Systems (BRSs), a family of graph-based formalisms introduced as a meta-model for distributed, mobile systems [11, 15, 18]. In this approach, system configurations are represented by bigraphs, graph-like data structures capable of describing at once both the locations and the logical connections of (possibly nested) components. The evolution of a system can be defined in a declarative way by means of a set of graph rewriting rules, which can replace components and change their positions and connections. Like graphs, bigraphs are abstract objects with a precise mathematical formulation, but they also have a simple and accessible graphical representation; see Figure 1 for an example. For this reason, this formalism is a perfect candidate for closing the gap between abstract models for formal methods and tools, and concrete graphical languages for system designers and developers. Indeed, BRSs have been successfully applied to the formalization of a broad variety of domain-specific models, including context-aware systems and web-service orchestration languages; a non-exhaustive list is [1, 5, 7, 11, 13, 20, 24, 25].
Besides their normative power, BRSs are appealing because they provide a range of general results and tools which can be readily instantiated with the specific model under scrutiny: libraries for bigraph manipulation (e.g., [2] and jLibBig [16, 17]), simulation tools [12, 14], graphical editors [8], model checkers [23], modular composition [22], etc. Moreover, since bigraphs can be naturally composed, this model allows for the modular design of container-based services. The rest of the paper is structured as follows. In Section 2 we introduce local directed bigraphs. In Section 3 we apply this formalism to model container-based systems, with some examples, and show the correspondence between bigraph composition and container composition à la docker-compose. Some applications of this model to the formalization and verification of security and safety properties of containers are briefly described in Section 4. In Section 5 we briefly mention sorting disciplines for ruling out spurious (i.e., non-well-formed) states, which can be useful for enforcing structural properties of architectures. Finally, in Section 6 we draw some conclusions and sketch directions for further work.

2 LOCAL DIRECTED BIGRAPHS

In order to define the formal model for container-based systems, in this section we introduce local directed bigraphs, a variant of directed bigraphs [10] which allows us to deal with localized resources.

Definition 2.1. A (local) interface is a list $X = (X_0, \ldots, X_n)$ where each $X_i$ is a pair of disjoint finite sets $(X_i^+, X_i^-)$. Elements of $X_i^+$ (resp. $X_i^-$) are called positive (resp. negative) names at $i$. The pair $(X_0^+, X_0^-)$ contains the global (i.e., non-localized) names.
We define
$$X^+ \triangleq \biguplus_{i=0}^n X_i^+ \qquad X^- \triangleq \biguplus_{i=0}^n X_i^- \qquad width(X) \triangleq n$$
Two interfaces $X$, $Y$ can be juxtaposed, yielding a new interface:
$$X \otimes Y \triangleq ((X_0^+ \cup Y_0^+,\ X_0^- \cup Y_0^-), (X_1^+, X_1^-), \ldots, (X_n^+, X_n^-), (Y_1^+, Y_1^-), \ldots, (Y_m^+, Y_m^-))$$
This operation is associative, and its unit is $\emptyset \triangleq ((\emptyset, \emptyset))$.

A (polarized) signature $\mathcal{K}$ is a set of types which form the syntactic basis of bigraphs. Each type $c \in \mathcal{K}$ is in fact a polarized arity $c = (n, m)$, for some $n, m \in \mathbb{N}$.

Definition 2.2. Let $\mathcal{K}$ be a signature and $I, O$ two local interfaces. A local directed bigraph (ldb) $B$ from $I$ to $O$, written $B : I \to O$, is a tuple $B = (V, E, ctrl, prnt, link)$ where
- $V$ and $E$ are the sets of nodes and edges, respectively;
- $ctrl : V \to \mathcal{K}$ is called the control map, and assigns each node an arity;
- $prnt : width(I) \uplus V \to V \uplus width(O)$ is called the parent map, and describes the nesting structure of the bigraph (i.e., it has to be a forest);
- $link : Pnt(B) \to Lnk(B)$ is the link map, which describes the directed link-graph structure; it is given by the disjoint union of $n + 1$ maps $link_i : Pnt_i(B) \to Lnk_i(B)$ such that
$$\forall x \in link(I_i^+ \cup O_i^-) : |link^{-1}(x)| = 1 \qquad \forall y \in O_i^+ \cup I_i^- : |link^{-1}(y)| = 1$$
where ports, points and links are defined as follows, for $i \in \{0, \ldots, n\}$:
$$Prt_i^+(B) \triangleq \sum_{v \in V,\ prnt^+(v) = i} \pi_1(ctrl(v)) \qquad Prt_i^-(B) \triangleq \sum_{v \in V,\ prnt^+(v) = i} \pi_2(ctrl(v))$$
$$Pnt_i(B) \triangleq I_0^+ \cup O_0^- \cup I_i^+ \cup O_i^- \cup Prt_i^+(B) \qquad Pnt(B) \triangleq \biguplus_{i=1}^n Pnt_i(B)$$
$$Lnk_i(B) \triangleq O_0^+ \cup I_0^- \cup O_i^+ \cup I_i^- \cup E \cup Prt_i^-(B) \qquad Lnk(B) \triangleq \biguplus_{i=1}^n Lnk_i(B)$$
$I$ and $O$ are
called the inner and outer interfaces of $B$, respectively. The localities of the outer interface are called roots, while those of the inner interface are called sites. Notice that, by definition of $prnt$, nodes and sites have to be placed under either another node or a root. Hence, differently from names, nodes and sites cannot be global; in fact, there is no root 0. When interfaces correspond, bigraphs can be "grafted", putting the roots of one inside the sites of the other and connecting links respecting directions, as expected. This yields a definition of bigraph composition: for $B_1 : X \to Y$ and $B_2 : Y \to Z$ two ldbs, their composition $B_2 \circ B_1 : X \to Z$ is defined as
$$B_2 \circ B_1 \triangleq (V_1 \uplus V_2, E_1 \uplus E_2, ctrl_1 \uplus ctrl_2, prnt, link)$$
where $prnt : width(X) \uplus V_1 \uplus V_2 \to V_1 \uplus V_2 \uplus width(Z)$ and $link : Pnt(B_2 \circ B_1) \to Lnk(B_2 \circ B_1)$ are defined as expected:
$$prnt(w) \triangleq \begin{cases} prnt_1(w) & \text{if } w \in width(X) \uplus V_1 \text{ and } prnt_1(w) \in V_1 \\ prnt_2(prnt_1(w)) & \text{if } w \in width(X) \uplus V_1 \text{ and } prnt_1(w) \in width(Y) \\ prnt_2(w) & \text{if } w \in V_2 \end{cases}$$
$$link(p) \triangleq \begin{cases} prlnk(p) & \text{if } prlnk(p) \in Lnk(B_2 \circ B_1) \\ prlnk(prlnk(p)) & \text{otherwise} \end{cases}$$
where
$$prlnk \triangleq link_1 \uplus link_2$$
$$Pnt(B_2 \circ B_1) \triangleq X^+ \cup Z^- \cup Prt^+(B_1) \cup Prt^+(B_2)$$
$$Lnk(B_2 \circ B_1) \triangleq X^- \cup Z^+ \cup Prt^-(B_1) \cup Prt^-(B_2)$$
The identity bigraph on $I$ is $Id_I \triangleq (\emptyset, \emptyset, \emptyset, id_{width(I)}, id_{I^+ \uplus I^-})$: it has no nodes and no edges, and its parent and link maps are identities. Bigraphs can be combined also by juxtaposition, or tensor product.
For $B_1 : I_1 \to O_1$ and $B_2 : I_2 \to O_2$ two ldbs, their product $B_1 \otimes B_2 : I_1 \otimes I_2 \to O_1 \otimes O_2$ is defined as
$$B_1 \otimes B_2 \triangleq (V_1 \uplus V_2, E_1 \uplus E_2, ctrl_1 \uplus ctrl_2, prnt_1 \uplus prnt_2, link_1 \uplus link_2)$$
Therefore, given a signature $\mathcal{K}$, local interfaces and local directed bigraphs over $\mathcal{K}$ form the objects and arrows of a monoidal category $\mathcal{LDB}(\mathcal{K})$.

### 3 BIGRAPHS FOR CONTAINERS

Bigraphs provide a flexible and immediate framework for designing models and studying their relations in a principled way [19]. The main advantage of this approach is that it provides us with a mathematically sound hierarchy of models at different levels of abstraction, catering to the various analyses and properties of interest. We introduce the general approach by defining a model at one possible level of abstraction. Although it is impossible to fully explore the hierarchy of possible models within the scope of this work, we identify aspects related to compositionality and modularity that it is reasonable to expect in any bigraphical model of container systems, and we show how to capture them using primitives of the framework available in any model.

#### 3.1 A signature for containers

Having defined the algebraic framework of our model, we can now introduce a signature for containers.
The signature we consider in this paper, and depicted in Figure 2, is the following:
$$\mathcal{K} = \{\text{container} : (0^+, 1^-), \text{process}_{r,s} : (r^+, s^-), \text{request} : (1^+, 1^-), \text{network} : (1^+, 1^-), \text{volume} : (1^+, 1^-)\}$$
where
- type container is for nodes that represent a container, identified by the name connected to its only input port;
- type process$_{r,s}$ is for nodes that represent running processes that consume services connected to their $s$ input ports and offer services over their $r$ output ports (for simplicity we will often omit $r$ and $s$ and simply write process instead of process$_{r,s}$);
- type request is for nodes that represent requests being processed by services implemented by the container;
- type network is for nodes that represent network interfaces in the container; network interfaces are connected to form a network through the link graph (see e.g. Figure 3);
- type volume is for nodes that represent volumes in the container, with linkage providing the associated directory in the host filesystem.

These basic elements can be nested and connected, as defined in Section 2, yielding bigraphs such as the one in Figure 1. Its inner interface is
$$\langle (\emptyset, \emptyset), ([s_1, s_2, i_{in}^1, i_{in}^2], \{r_1\}) \rangle$$
and its outer interface is:
$$\langle (\emptyset, \emptyset), ([v_1, i_{out}^1, i_{out}^2, n_1, n_2], \{p_1, p_2, p_3, C\}) \rangle$$
This bigraph has one root, represented by the red dotted rectangle. Under this root there is one container node, which contains three process nodes, one volume node, two network nodes, a request node, and one site (the gray area). Arrows connect node ports and names, respecting their polarity. The intended meaning of arrows is that of "resource accesses", or dependencies.
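For concreteness, the arities above can be transcribed into a small lookup table. The following is our own illustrative sketch in Python (the representation and names are not taken from any tool):

```python
# Hypothetical transcription of the signature K: each control maps to
# its (positive, negative) port arity; process_{r,s} is parametric,
# so it is rendered as a function.

SIGNATURE = {
    "container": (0, 1),
    "request":   (1, 1),
    "network":   (1, 1),
    "volume":    (1, 1),
}

def process(r: int, s: int) -> tuple[int, int]:
    """Arity of the parametric control process_{r,s}: r output ports
    (offered services), s input ports (consumed services)."""
    return (r, s)
```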
In this example, the container offers services to (i.e., accepts requests from) the surrounding environment on ports $p_1, p_2, p_3$, and needs to access a volume $v_1$ and two networks $n_1, n_2$. The site is a "hole", meaning that it can be filled by adding another bigraph containing processes (which can access services offered inside the container through $s_1, s_2$) and resources (which processes can access through $r_1$). Filling a hole with another bigraph corresponds to the composition defined in the previous subsection, and as such it is subject to precise formal conditions, similar to the composition of typed functions; in particular, a name of one interface can be connected to that of another interface only if their polarity is the same.

#### 3.2 Composition of containers is composition of bigraphs

An important example of bigraph composition is the composition of containers, as performed by, e.g., docker-compose. In this case, the context bigraph can be obtained automatically from the docker-compose.yml file. As an example, let us consider the docker-compose.yml in Figure 3. Its corresponding "context" bigraph is shown in Figure 4(a). This bigraph has one root (representing the whole resulting system), as many holes as there are components ("services") to be assembled, and the (possibly shared) networks and volumes that each container requires; it exposes the (possibly shared) ports to the external environment. Three bigraphs with the correct interfaces (Figure 4(b)) can be composed into the environment, yielding the system in Figure 4(c). This resulting system can be seen as a "pod", which can in turn be composed into a site (of the right interface) of other bigraphs, in a modular fashion. The correspondence between docker-compose.yml files and environment bigraphs can be made formal; in fact, we have implemented a tool which translates docker-compose YAML files into composition bigraphs, taking advantage of the library jLibBig for bigraph manipulation.
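The first step of such a translation, collecting from a compose-style description the networks, volumes, and exposed ports that make up the context bigraph's outer interface, can be sketched as follows. This is a language-agnostic illustration in Python, not the jLibBig-based tool itself; the service and network names follow the example in Figure 3, while the volume name and port numbers are our own:

```python
# Sketch: extract the outer-interface data of the "context" bigraph
# from a docker-compose-like description (a plain dict here, to keep
# the sketch dependency-free). Illustration only.

def context_interface(compose: dict) -> dict:
    nets, vols, ports = set(), set(), set()
    for svc in compose.get("services", {}).values():
        nets |= set(svc.get("networks", []))
        vols |= {v.split(":")[0] for v in svc.get("volumes", [])}
        ports |= {p.split(":")[0] for p in svc.get("ports", [])}
    return {"networks": nets, "volumes": vols, "ports": ports,
            "holes": len(compose.get("services", {}))}

example = {
    "services": {
        "wp":  {"networks": ["front"], "ports": ["80:80"]},
        "db":  {"networks": ["front", "back"],
                "volumes": ["dbdata:/var/lib/mysql"]},
        "pma": {"networks": ["back"], "ports": ["8080:80"]},
    }
}
```

The resulting dictionary records one hole per service and the shared names (networks, volumes, exposed ports) that the environment bigraph must offer.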
Figure 3: Example of docker-compose.yml configuration file.

### 4 APPLICATION: ANALYSIS OF CONTAINER ARCHITECTURES

Once a container architecture is represented as a bigraph, it can be easily analyzed and manipulated. Many properties about containers can be formalized as properties of the corresponding bigraphic representations and hence verified using well-known techniques from static analysis and model checking. In this section we exemplify this approach by describing two properties about containers which can be easily verified on their bigraphic representation (and which are not verified by docker-compose). In fact, we have already implemented a prototype checker to verify these properties.

#### 4.1 Links correctness

A desirable correctness property about a container configuration is the following: "no container requires links to containers that cannot be reached through any shared networks". Violating this property could lead to run-time misbehaviours, as soon as a container tries to access a service which cannot be reached.
This property can be easily formalized as a property on the bigraph modeling a container composition (i.e., obtained from a YAML file): for all different containers C1, C2, if there exists a link from C1 to the (only) port of C2, then there exist a node N1 of type network in C1 and a node N2 of type network in C2 which are connected to the same name. As an example, the composition in Figure 3(c) satisfies this property: l_mysql is connected to db, and indeed there is a name ("front") to which a network in each of the containers wp and db is connected; similarly for pma and db (via "back"). This property can be easily verified by a simple reachability check, as implemented in our prototype tool.

#### 4.2 Network security levels safety

Let us suppose that the networks connecting the containers are ordered according to a security hierarchy, specified by the user as a set of ordering assertions of the form \( n > m \). A simple security isolation policy can require that information from the (more confidential) network \( n \) should not leak to the (less secure) network \( m \), while the flow in the other direction is allowed. A composition configuration violates this policy if there is a "channel", through any number of containers, volumes, and networks, connecting two networks of different security levels against the established order. We can verify whether a given configuration violates this policy by looking for order-violating paths between network nodes in the corresponding bigraph. To this end, the bigraph \( B \) modelling the system is visited in order to build a bipartite graph \( G \) as follows:

- nodes of \( G \) are the containers, networks and volumes of \( B \);
- for each container \( c \) and each network \( r \) from which \( c \) can read, the arc \((r, c)\) is added to \( G \); if \( c \) can also write to \( r \), then the arc \((c, r)\) is added to \( G \). The same applies to volumes.
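The construction just described, and the search for order-violating paths, can be sketched as follows. This is a minimal toy illustration in Python, not the prototype checker; the read/write sets are our own example, with db attached to both networks as in Figure 3 (for brevity, the order assertions are checked directly, without computing the transitive closure):

```python
# Sketch: build the bipartite graph G and look for order-violating
# paths between network nodes.

def build_graph(reads: dict, writes: dict) -> dict:
    """reads/writes map each container to the networks (or volumes)
    it reads from / writes to; returns an adjacency-list digraph."""
    g: dict = {}
    for c, rs in reads.items():
        for r in rs:
            g.setdefault(r, set()).add(c)   # arc (r, c): c reads r
    for c, ws in writes.items():
        for w in ws:
            g.setdefault(c, set()).add(w)   # arc (c, w): c writes w
    return g

def has_path(g: dict, src: str, dst: str) -> bool:
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(g.get(n, ()))
    return False

def violations(g: dict, order: list) -> list:
    """order contains assertions (h, l) meaning h > l; a violation is
    a directed path from h down to l."""
    return [(h, l) for h, l in order if has_path(g, h, l)]

# db reads and writes both networks, so information can flow
# back -> db -> front, violating "back > front".
reads  = {"wp": ["front"], "db": ["front", "back"], "pma": ["back"]}
writes = {"wp": ["front"], "db": ["front", "back"], "pma": ["back"]}
g = build_graph(reads, writes)
```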
Then, for each pair of nodes \( h, l \) such that \( h > l \) (in the transitive closure of the order), we check that there is no directed path from \( h \) to \( l \) in the graph \( G \). If this is the case, then the configuration respects the separation policy. Otherwise, an information leakage is possible and hence a warning is raised. As an example, let us consider again the system in Figure 3(c), and let us suppose that "back > front". In the corresponding bipartite graph, there is a path from "back" to "front", via the container db.

### 5 SORTING

Besides a signature, bigraphical models usually include a sorting discipline for ruling out spurious (i.e., non well-formed) states. In their most general form, sortings are judgments over bigraphs, and they can be used to model additional properties such as "only the backend interacts with the database" in rigorous and declarative ways. There are several classes of sortings with specific properties that are of interest for automated verification, in particular decomposable sortings [4] and match sortings [2]. Decomposable sortings are stable under decomposition: if a bigraph is well-sorted, then all its decompositions are well-sorted. Match sorting disciplines specify sets of forbidden bigraphs: a bigraph is well-sorted if it is impossible to find an occurrence (a match) of any of the forbidden bigraphs. Although decomposable sortings allow for some clever checking algorithms [4, 6, 9], their benefits are in general reduced in models with low levels of nesting, such as the ones expected for containers. Instead, checking whether a bigraph is well-sorted according to a match sorting discipline can be done efficiently, since the matching problem is fixed-parameter tractable [3]; hence, for every set of forbidden bigraphs there is an algorithm that can check for occurrences in polynomial time.
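In the simplest case, where a forbidden bigraph is just one node nested directly inside another, a match-sorting check degenerates to scanning the parent map. The following toy sketch illustrates this restricted case only (the forbidden pattern is our own example of a spurious state, not a rule stated in this paper):

```python
# Sketch: a match-sorting check restricted to the place graph, with
# forbidden patterns given as (child_control, parent_control) pairs.
# A bigraph is well-sorted iff no such pair occurs in the nesting.

def well_sorted(ctrl: dict, prnt: dict, forbidden: set) -> bool:
    return all((ctrl[v], ctrl[p]) not in forbidden
               for v, p in prnt.items() if p in ctrl)

# E.g. forbid volumes nested directly inside processes:
forbidden = {("volume", "process")}
ctrl = {"c1": "container", "p1": "process", "v1": "volume"}
ok_prnt  = {"p1": "c1", "v1": "c1"}   # both under the container
bad_prnt = {"p1": "c1", "v1": "p1"}   # volume under a process
```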
### 6 CONCLUSIONS

In this paper we have introduced a bigraphic model of container-based systems. In this model, containers and container configurations are represented by bigraphs, which are graph-like data structures capable of describing at once both the locations and the logical connections of the (possibly nested) components of a system. Composition of bigraphs corresponds precisely to the composition of containers, as performed, e.g., by docker-compose. This representation can be used for analysing and verifying properties of container-based systems. To the best of our knowledge, this is one of the first formal models for container-based systems. Perhaps the closest work is [21], where a model-driven management of Docker containers has been proposed; that model enables the verification of container architectures at design time. One difference with our work is that our model is compositional by construction, naturally allowing a modular design and analysis of containers and components.

In this paper we have considered only the static aspects of container-based systems. As future work, we intend to extend our approach to dynamic properties of the systems. The evolution of bigraphic systems is usually specified by means of a set of graph-rewriting rules. In our setting, these rewriting rules can be used to model various system reconfigurations, e.g. scaling up/down or component replacement. Correspondingly, we will be able to take into account other properties, such as safety invariants and liveness properties.

ACKNOWLEDGMENTS

This work was partially supported by the Independent Research Fund Denmark, Natural Sciences, grant DFF-7014-00041, and Italian PRIN 2017 grant IT MATTERS.

REFERENCES
Precomputing method lookup

Eric Jul

Publication date: 2009. Document version: peer-reviewed version.

Abstract. This paper looks at method lookup and discusses the Emerald approach to making method lookup more efficient by precomputing it as early as possible — even moving it back into the compilation phase, if possible, thus eliminating method lookup entirely for many simple procedure calls.

## 1 Introduction

### 1.1 Original Motivation

Smalltalk has been a very influential language. However, some of its features, designed for flexibility, were also hard to implement efficiently. We believe that at least some of Smalltalk's performance problems are caused by the absence of static typing: if only the compiler had more information available to it about the set of operations that can be invoked on an object, it could surely optimize the process of finding the right code, i.e., performing method lookup. This inspired the designers of the programming language Emerald [1] to design a mechanism for making method lookup more efficient. In the following, we describe this mechanism, which was successful in eliminating most method lookup in Emerald. Subsequent advances such as inline caches have largely eliminated the "lookup penalty".

### 1.2 Abstract and Concrete Types

From our experience with Eden, we knew that a distributed system was never complete: it was always open to extension by new applications and new objects. Today, in the era of the Internet, the fact that the world is "under construction" has become a cliché, but in the early 1980s the idea that all systems should be extensible — we called it the "open world assumption" — was new. A consequence of this assumption is that an Emerald program needed to be able to operate on objects that did not exist at the time that the program was written, and, more significantly, on objects whose type was not known when the application was written. How could this be?
Clearly, an application must have some expectations about the operations that could be invoked on a new object, otherwise the application could not hope to use the object at all. If an existing program $P$ had minimal expectations of a newly injected object, such as requiring only that the new object accept the run invocation, many objects would satisfy those expectations. In contrast, if another program $Q$ required that the new object understand a larger set of operations, such as redisplay, resize, move and iconify, fewer objects would be suitable. We derived most of Emerald's type system from the open world assumption. We coined the term *concrete type* to describe the set of operations understood by an actual, concrete object, and the term *abstract type* to describe the declared type of a piece of programming language syntax, such as an expression or an identifier. The basic question that the type system attempted to answer was whether or not a given object (characterized by a concrete type) supported enough operations to be used in a particular context (characterized by an abstract type). Whenever an object was bound to an identifier, which could happen when any of the various forms of assignment or parameter binding were used, we required that the concrete type of the object *conform* to the abstract type declared for the identifier. In essence, conformity ensured that the concrete type was "bigger" than the abstract type, that is, the object understood a superset of the required operations, and that the types of the parameters and results of its operations also conformed appropriately. Basing Emerald's type system on conformity distinguished it from contemporary systems such as CLU, Russell, Modula-2, and Euclid, all of which required equality of types. It also distinguished Emerald's type system from systems in languages like Simula that were based on sub-classing, that is, on the ancestry of the object's implementation.
In a distributed system, the important questions are not about the implementation of an object (which is what the subclassing relation captures) but about the operations that it implements.

### 1.3 Type Checking and Binding of Types

Another consequence of the open world assumption was that sometimes type checking had to be performed at run time, for the very simple reason that neither the object to be invoked nor the code that created it existed until after the invoker was compiled. This requirement was familiar to us from our experience with the Eden Programming Language [2]. However, Eden used completely different type systems (and data models) for those objects that could be created dynamically and those that were known at compile time. For Emerald, we wanted to use a single consistent object model and type system. Herein lies an apparent contradiction. By definition, compile-time type checking is done at compile time, and an implementation of a typed language should be able to guarantee at compile time that no type errors will occur. However, there are situations where an application must insist on deferring type checking, typically because an object with which it wishes to communicate will not be available until run time. Our solution to this dilemma provided for the consistent application of conformity checking at either compile time or run time. If enough was known about an object at compile time to guarantee that its type conformed to that required by its context, the compiler certified the usage to be type-correct. If not enough was known, the type check was deferred to run time. In order to obtain useful diagnostics, we made the design decision that such a deferral would occur only if the programmer requested it explicitly, which was done using the `view...as` primitive, which was partially inspired by qualification in Simula 67 [3, 4].
Consider the example

```emerald
var unknownFile: File
r ← (view unknownFile as Directory).Lookup["README"]
```

Without the `view...as Directory` clause, the compiler would have indicated a type error, because `unknownFile`, as a `File`, would not understand the `Lookup` operation. With the clause, the compiler treated `unknownFile` as a `Directory` object, which would understand `Lookup`. In consequence, `view...as` required a dynamic check that the type of the object bound to `unknownFile` did indeed conform to `Directory`. Thus, successfully type-checking an Emerald program at compile time did not imply that no type errors would occur at run time; instead it guaranteed that any type errors that did occur at run time would do so at a place where the programmer had explicitly requested a dynamic type check. The `view...as` primitive later appeared in C++. Note that `view...as` is similar to casting in Java in that the interface view changes. However, in terms of type system and implementation, there is a substantial difference: Java does not check the types when casting, but merely that the class of the casted object implements the specified interface (or an interface that inherits the specified interface). In Emerald, the `implements` relationship is not defined by a syntactic construct but rather implicitly by conformity: any object that has the operations specified by the interface (abstract type) implements the interface. Partially inspired by the `inspect` statement of Simula 67 [3, 4], we also introduced a Boolean operator that returned the result of a type check. This allowed a programmer to check for conformity before attempting a `view...as`.

### 1.4 Operation Invocation

A performance problem plaguing object systems contemporary with Emerald, e.g., Smalltalk, was the cost of finding the code to execute when an operation was invoked on an object.
This process was then generally known by the name "method lookup"; indeed it still is, though in Emerald it is called operation invocation. In Smalltalk, method lookup involved searching method dictionaries starting at the class of the target object and continuing up the inheritance class hierarchy until the code was located. We thought that if Emerald didn't do static type checking, each operation invocation would require searching for an implementation of an operation with the correct name, which would be expensive — although, because we did not provide inheritance, not as expensive as in Smalltalk. In a language like Simula, in which each expression had a static type that uniquely identified its implementation, each legal message could be assigned a small integer, and these integers could be used as indices into a table of pointers to the code of the various methods. In this way, Simula was able to use table lookup rather than search to find a method (and C++ still does so). We thought that static typing would give Emerald the same advantage, and this was one of the motivations for Emerald's static type system. However, even with static typing, there is still a problem in Emerald: except for the above-mentioned primitive types, knowing the type of an identifier at compile time tells us nothing about the implementation of the object to which it will be bound at run time. This is true even if the program submitted to the compiler contains only a single implementation that conforms to the declared type, because it is always possible for another implementation to arrive over the network from some other compiler. Thus, the Emerald implementation would still have to search for the appropriate method; the advantage that static typing would give us would be a guarantee that such a method existed.
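Emerald's conformity relation (Section 1.2) and the dynamic check behind `view...as` (Section 1.3) are structural: what matters is the set of operations an object understands, not its ancestry. A rough analogy can be sketched in Python using `typing.Protocol`; note that `runtime_checkable` checks only the presence of methods, not their signatures, so this is strictly weaker than Emerald's conformity (the method names below are our own illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class File(Protocol):
    def read(self) -> bytes: ...

@runtime_checkable
class Directory(Protocol):
    def read(self) -> bytes: ...
    def lookup(self, name: str): ...

class DiskDirectory:
    """Conforms structurally to both File and Directory:
    no inheritance relationship is declared anywhere."""
    def read(self) -> bytes:
        return b""
    def lookup(self, name: str):
        return None

obj = DiskDirectory()
# The analogue of `view obj as Directory`: a dynamic conformity check.
assert isinstance(obj, Directory)
```

As with conformity, any object supporting a superset of the required operations passes the check, regardless of its class hierarchy.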
### 1.5 More Efficient Method Lookup Using the AbCon Mechanism

The most dynamic form of method lookup is, for a given invocation, to search the concrete type for the operation that is to be invoked. Consider the example from above:

```emerald
var unknownFile: File
...
r ← (view unknownFile as Directory).Lookup["README"]
result ← r.Lookup[something]
```

When invoking the method `Lookup` on the file object assigned to `r`, the implementation can merely access the object, find its reference to its own concrete type, and then search the concrete type for the method. The Emerald implementation used several techniques to avoid this expensive dynamic search process. First, it is often the case that dataflow analysis can be used to ascertain that an object has a specific concrete type, and the Emerald compiler used dataflow analysis quite extensively to avoid method lookup altogether by compiling a direct subroutine call to the appropriate method. Second, in those cases where dataflow analysis could not assign a unique concrete type to the target expression, we avoided the cost of searching for the correct method by inventing a data structure that took advantage of Emerald's abstract typing. This data structure was called an AbCon, because it mapped Abstract operations to Concrete implementations. AbCons are the responsibility of the run-time system: it constructs an AbCon for each ⟨type, implementation⟩ pair that it encounters. An object reference consists not of a single pointer, but of a pair of pointers: a pointer to the object itself, and a pointer to the appropriate AbCon, as shown in Figure 1. The AbCon is basically a vector containing pointers to some of the operations in the concrete representation of the object. The number and order of the operations in the vector are determined by the abstract type of the variable; operations on the object that are not in the variable's abstract type cannot be invoked, and so they do not need to be represented in the AbCon.
In Figure 1, the abstract type `InputFile` supports just the two operations `Read` and `Seek`, so the vector is of size two, even though the concrete objects assigned to \( f \) might support many more operations. An important point is that the size of the vector and the indexes into it can be determined at compile time. For example, using Figure 1, when calling \( f.\mathrm{Seek} \), the compiler knows the abstract type (*InputFile*), and can thus find the index of the *Seek* operation and generate an indirect jump via the AbCon vector. In situation (a) in Figure 1, the call would end up at *DiskFile.Seek*. Doing the method lookup with an AbCon is thus reduced to a load of the AbCon vector address, an indexing, and a load of the appropriate slot. AbCon vectors are, in principle, created upon assignment of a variable. In the example above, the view expression returns an object reference including a pointer to an appropriate AbCon, which is dynamically created, if necessary. It appears that we have to generate new AbCons on EVERY assignment, but in practice this is not the case. In a simple assignment between two variables of the same abstract type, the AbCon will be the same, as both the abstract type and the concrete type are the same. Thus the assignment is merely copying the two pointers. This covers many assignments. In an assignment where the abstract types differ, a new AbCon needs to be generated. However, this needs only be done once for each (Abstract Type, Concrete Type) pair. AbCons increased the cost of each assignment slightly, but made operation invocation as efficient as using a virtual function table. In practice it was almost never necessary to generate them during an assignment, because the number of different concrete types that an expression would take on was limited, often to one, or just a handful. Many of the AbCons that the compiler can see will be needed are generated at load time.
If the compiler can deduce the concrete type of the assigned object, then it would merely insert a store of the address of the relevant AbCon, and the AbCon address would be inserted into the code at load time. In this case, an assignment would be a copy of the object pointer and a store of a constant address. Furthermore, in many cases the compiler could figure out, using dataflow analysis, the single concrete type of an assigned object; the compiler was therefore able to generate a direct subroutine call to the concrete operation, or even in-line the operation, if it were small. In addition, if the variable used can hold references to only one single concrete type, then the entire AbCon scheme can be elided. In such a case, the variable is implemented just like a pointer variable in C, and the call is as efficient as a procedure call in C. In the case of an object of a new concrete type that arrives over the network at run time, it is necessary to generate a new AbCon dynamically, but this would typically occur in connection with the arrival of the object.

### 1.6 Indexing AbCons

The compiler uses the following scheme to index AbCons: First, each of the operation names is mapped to a unique id, which is fetched from a shared database. Second, the operations are sorted using their unique ids. Third, each operation is assigned a sequential index based on the sort. Thus the AbCon vector is dense and efficiently indexed by a small integer.

### 1.7 Single and Multiple Inheritance

Emerald does not have inheritance in the Smalltalk or C++ sense of the word. However, the AbCon mechanism is suitable for languages with traditional single or multiple inheritance: the AbCons are generated for each (interface, class) pair. Essentially, the method lookup is done at the time that the AbCon is generated rather than upon every call. Because the interface is fixed, a call can still be executed in constant time regardless of inheritance.
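The AbCon representation and the dense indexing scheme of Section 1.6 can be imitated in higher-level terms; the following is a minimal Python sketch, not the Emerald runtime (the unique-id "shared database" is faked with a dict, the `InputFile`/`DiskFile` names follow Figure 1, and the `Write` operation and the id values are our own):

```python
# Sketch of AbCons: per (abstract type, concrete class) vectors of
# method pointers, densely indexed by the sorted unique ids of the
# abstract type's operation names. Illustration only.

UNIQUE_IDS = {"Read": 17, "Seek": 42, "Write": 99}  # fake shared database

def abcon_layout(abstract_ops: frozenset) -> list:
    """Operations sorted by unique id; the position in this list is
    the compile-time index used at every call site."""
    return sorted(abstract_ops, key=UNIQUE_IDS.__getitem__)

_abcon_cache: dict = {}  # one AbCon per (abstract type, concrete class) pair

def get_abcon(abstract_ops: frozenset, cls: type) -> list:
    key = (abstract_ops, cls)
    if key not in _abcon_cache:  # generated once per pair
        _abcon_cache[key] = [getattr(cls, op)
                             for op in abcon_layout(abstract_ops)]
    return _abcon_cache[key]

class DiskFile:
    def Read(self):  return "DiskFile.Read"
    def Seek(self):  return "DiskFile.Seek"
    def Write(self): return "DiskFile.Write"

InputFile = frozenset({"Read", "Seek"})   # abstract type: two operations

# An object reference is a pair of pointers: object + AbCon.
f_obj, f_abcon = DiskFile(), get_abcon(InputFile, DiskFile)

# A call f.Seek compiles to an indexed jump through the AbCon:
seek_index = abcon_layout(InputFile).index("Seek")
result = f_abcon[seek_index](f_obj)   # → "DiskFile.Seek"
```

Note that the AbCon has only two slots even though `DiskFile` implements three operations: the vector's size and layout are fixed by the abstract type alone.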
### 1.8 Caching AbCons

Each time an AbCon is to be generated, the run-time system first does a double-key hash lookup of the AbCon (using the unique ids of the abstract and of the concrete type as a double key). If the AbCon already exists, the pointer to it is returned. If it does not, a new AbCon is generated for the pair and inserted into the hash table. Thus, except for the first time an AbCon is generated, the time to find an AbCon is constant (assuming that a suitable hashing algorithm is used). Any AbCon that the compiler can see will be needed is generated at load time, as the necessary information is present at that time.

### 1.9 Performance Summary for AbCons

AbCons incur overhead as follows. On assignment: some assignments are a double pointer load and store instead of one. On method calls: in the worst case, the call can be performed using a load (of the AbCon vector), a load of the content of the indexed element, and a jump to it. For calls where the compiler can deduce the concrete type, the call is the same as a procedure call in C. The potentially most damaging overhead is the extra storage incurred by the AbCon pointers in those variables requiring them — their storage size is doubled. This can be significant for large arrays. However, in practice, it seems that most large arrays are not used for storing polymorphic data.

### 1.10 Comparison with Other Schemes

Compared to a contemporary language such as Smalltalk, method lookup in Emerald is much more efficient. Indeed, we achieved execution times comparable to similar C programs for a number of benchmarks [6]. Some years later, the Self project invented the Polymorphic Inline Cache [7], which is successful at eliminating message lookup for precisely the same reason that AbCons rarely need to be generated at assignment time: the number of concrete types that an expression can take on is usually small, and after the program has run for a while, the cache always hits.
Alpern et al. [8] give an excellent overview of previous techniques for interface dispatch. They describe an itable, which is a virtual method table for a class, restricted to those methods that match a particular interface, i.e., essentially an AbCon, but searched by method rather than accessed by an index. The problem is that for a given method lookup, the implementation must first look up the appropriate itable that matches the interface in question, and thereafter search the itable for the method in question. Alpern et al. [8] propose a new interface dispatch mechanism, called the interface method table (IMT). They propose an IMT for each class. The IMT is essentially a hash table that contains the ids of interface methods and the methods' addresses. The IMT is populated dynamically: as the virtual machine discovers that a class implements an interface, it adds that interface's methods to the class's IMT. As long as there is no conflict, the call sequence can merely load the appropriate address right out of the IMT and jump to the method. Because the IMT is a hash table, there is the possibility of conflict. Such a conflict is detected when a new entry into the IMT is made that conflicts with a previous entry. In such a case, the virtual machine generates a conflict resolution stub that picks the correct interface method and jumps to the correct method in the appropriate class. This scheme thus tries to combine a fast lookup with a reduced IMT size. However, compared to Emerald's AbCons, even the shortest sequence for interface dispatch contains an extra load (of the id of the interface method called). Moreover, AbCons are dense (because the compiler knows the interface) and therefore shorter than the IMT hash tables, and there is no possibility of conflict (because an AbCon is not a hash table). Zendra et al. [9] describe a different approach where tables are eliminated and replaced by simple static binary branch code using a type inference algorithm.
## 2 Summary

We describe the Emerald mechanism for precomputing method lookup to make method calls more efficient. The mechanism is made possible by a strong type system that requires a type for all expressions and variables. Method calls can, in general, be performed quite efficiently (using only two memory loads) in constant time. The cost is that some variables need an extra pointer, increasing both space overhead and the cost of assignment. In cases where the compiler can determine the concrete type of an object, the compiler can elide the extra overhead and call the operation directly; in such cases the space overhead can also be avoided, as the AbCon reference can be removed entirely.

*Parts of this paper have appeared in earlier Emerald articles [5,10].*

References

Fig. 1: This figure, taken from reference [5], shows a variable $f$ of abstract type InputFile. At the top of the figure (part a), $f$ references an object of concrete type DiskFile, and $f$'s AbCon (called an operation vector in the legend) is a two-element vector containing references to two DiskFile methods. At the bottom of the figure (part b), $f$ references an object of concrete type InCoreFile, and $f$'s AbCon has been changed to a two-element vector that references the correspondingly named methods of InCoreFile. (Figure ©1987 IEEE; reproduced by permission.)
Test and Verification Solutions: Getting you to market sooner by providing easy access to outsourcing solutions

Introduction to Design Verification
Kerstin Eder, TVS

Introduction: Increasing Design Complexity
- Multiple power domains, security, virtualisation.
- Nearly five million lines of code to enable a media gateway.

Introduction: Shorter Time-to-market Windows
- Shipment windows have shrunk steadily: from the 90s, through the early 2000s, to today.
- [Chart: design volume, time and confidence (desired 95%+ vs. actual), showing risks to quality, predictability and productivity between scheduled and final tapeout.]

Introduction: Verification vs Validation
- **Verification** confirms that a system has a given input/output behaviour, sometimes called the transfer function of a system.
- **Validation** confirms that a system has a given behaviour, i.e. that the system's transfer function results in the intended system behaviour when the system is employed in its target environment, e.g. as a component of an embedded system.

"Design Verification is the process used to demonstrate the correctness of a design w.r.t. the requirements and specification."

Types of verification:
- Functional verification
- Timing verification
- ...
- What about performance?

Conceptual Representation of the Verification Process
- Important question: what are you verifying?
- Choosing a common origin and reconvergence point determines what is being verified and what type of method is best to use.
- Example: the specification of a FIFO (.doc) is transformed into the implementation of a FIFO (.v); verification reconverges the lower abstraction level with the higher one.

In practice, the specification is often a document written in a natural language by individuals of varying degrees of ability to communicate. An individual (or group of individuals) must read and interpret the specification and transform it into the design. Errors are introduced by human (mis)interpretation.
DANGER: When designers verify their own design, they are verifying their own *interpretation* of the design, not the specification!

Introduction: Verification Independence
- Verification should be kept **independent** from design.
- Verification engineers refer to the specification in disputes with the design team.
- Designers and verification engineers both interpret the specification; verification relies on both not making the same interpretation mistake!

Introduction: Functional Verification Approaches
- **Static**
  - Reviews
  - Code analysis (linters)
  - Equivalence checking
- **Formal**
  - Property checking
  - Theorem proving
- **Dynamic**
  - Simulation
  - Prototyping (FPGA, silicon)
- **Dynamic formal**

Introduction: Observability and Controllability
- **Observability**: how easily the verification engineer can identify when the design acts appropriately versus when it demonstrates incorrect behaviour.
- **Controllability**: how easily the verification engineer can create the specific scenarios that are of interest.

Introduction: Levels of Observability
- Black box
- White box
- Grey box

Introduction: Formal Property Checking
Properties of a design are formally proven or disproved.
- Used to check for generic problems or violations of user-defined properties of the behaviour of the design.
- Usually employed at higher levels of abstraction.
- Properties are derived from the specification and expressed as formulae in some (temporal) logic.
- Checking is typically performed on a finite state machine model of the design; this model needs to be derived from the RTL.

```plaintext
under env_constraint
  if condition then expectation
```

Property checking can also be performed at higher levels of abstraction.
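The `under env_constraint if condition then expectation` template above can be mimicked by a toy trace checker. This is a Python sketch for illustration only; the signal names and the trace encoding (one dictionary of signal values per cycle) are assumptions, not part of any property language.

```python
def check_property(trace, env_constraint, condition, expectation):
    """Return the list of cycles at which the property is violated.
    Each element of `trace` maps signal names to their value in that cycle."""
    violations = []
    for cycle, signals in enumerate(trace):
        # The property is only evaluated where the environment constraint holds.
        if env_constraint(signals) and condition(signals):
            if not expectation(signals):
                violations.append(cycle)
    return violations
```

For example, "while not in reset, every request is granted in the same cycle" becomes three small predicates over the per-cycle signal values.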
Introduction: Role of Simulation
- The most widely used verification technique in practice.
- The complexity of designs makes exhaustive simulation impossible in terms of cost and time; engineers need to be selective and employ state-of-the-art coverage-driven verification methods (the test generation challenge).
- Simulation can drive a design deep into its state space and can find bugs buried deep inside the logic of the design.
- Understand the limits of simulation: simulation can only show the presence of bugs but can never prove their absence!

Introduction: Simulation vs Functional Formal Verification
- A naïve interpretation of exhaustive formal verification is "verify ALL properties"; in practice, capacity limits and completeness issues restrict formal verification to selected parts of the design.
- Likewise, only selected parts of the design can be covered during simulation.

Introduction: How Big is Exhaustive?
- Consider simulating a typical CPU design: 500k gates, 20k DFFs, 500 inputs.
  - 70 billion simulation cycles ($2^{36}$), running on 200 Linux boxes for a week.
- Consider formally verifying this design:
  - Input sequences: $2^{\text{inputs}+\text{state}} = 2^{20500}$ cycles.
  - What about X's? $2^{15000}$ combinations (5,000 X-assignments + 10,000 non-reset DFFs); not significant next to $2^{20500}$.
- That's a big number!
  - Cycles to simulate the 500k design: $2^{36}$ (70 billion)
  - Cycles to formally verify a 32-bit adder: $2^{64}$ (18 billion billion)
  - Number of stars in the universe: $2^{70}$ ($10^{21}$)
  - Number of atoms in the universe: $2^{260}$ ($10^{78}$)
  - Possible X combinations in the 500k design: $2^{15000}$ ($\approx 3 \times 10^{4515}$)
  - Cycles to formally verify the 500k design: $2^{20500}$ ($10^{6171}$)

Coverage Types
- **Code** coverage
- **Structural** coverage
- **Functional** coverage
- Other classifications: implicit vs.
explicit coverage, and specification vs. implementation coverage.

Code Coverage: Basics
- Coverage models are based on the HDL code (implicit, implementation coverage).
- Coverage models are syntactic: model definition is based on the syntax and structure of the HDL.
- Generic models fit (almost) any programming language and are used in both software and hardware design.
- Implicit coverage models can also be based on common structures in the code: FSMs, queues, pipelines, ...

State machines are the essence of RTL design, so **FSM coverage models** are the most commonly used structural coverage models. Types of FSM coverage models:
- State
- Transition (or arc)
- Path

Coverage questions not answered by code coverage tools:
- Did every instruction take every exception?
- Did two instructions access the register at the same time?
- How many times did a cache miss take more than 10 cycles?
- Does the implementation cover the functionality specified?
- ... (and many more)

Code coverage indicates how thoroughly the test suite exercises the source code, and can be used to identify outstanding corner cases. Code coverage lets you know if you are *not* done; it does not indicate anything about the functional correctness of the code! 100% code coverage does not mean very much on its own. We need another form of coverage.

Functional Coverage
- It is important to cover the **functionality** of the DUV; most functional requirements can't easily be mapped into lines of code.
- **Functional coverage models** are designed to ensure that various aspects of the functionality of the design are verified properly; they link the requirements/specification with the implementation.
- Functional coverage models are specific to a given design or family of designs.
- Models may cover:
  - The design in terms of its inputs and outputs
  - The design's internal states or micro-architectural features
  - Protocols
  - Specific scenarios from the verification plan
  - Combinatorial or sequential features of the design

Types of Functional Coverage
- **Discrete sets of functional coverage tasks**
  - Sets of unrelated or loosely related coverage tasks, often derived from the requirements/specification.
  - Often used for corner cases, e.g. driving data when a FIFO is full, or reading from an empty FIFO.
  - In many cases, there is a close link between functional coverage tasks and **assertions**.
- **Structured functional coverage models**
  - The coverage tasks are defined in a structure that defines relations between the coverage tasks.
  - **Cross-product** (Cartesian product) models are the most commonly supported.

A cross-product coverage model is composed of the following parts:
1. A semantic **description** of the model (story)
2. A list of the **attributes** mentioned in the story
3. A set of all the **possible values** for each attribute (the attribute value **domains**)
4. A list of **restrictions** on the legal combinations in the cross-product of attribute values

## Combining Coverage Metrics

- **Do we need both code and functional coverage? YES!**

<table>
<thead>
<tr>
<th>Functional Coverage</th>
<th>Code Coverage</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Low</td>
<td>Low</td>
<td>There is verification work to do.</td>
</tr>
<tr>
<td>Low</td>
<td>High</td>
<td>Multi-cycle scenarios, corner cases, cross-correlations still to be covered.</td>
</tr>
<tr>
<td>High</td>
<td>Low</td>
<td>Verification plan and/or functional coverage metrics inadequate. Check for “dead” code.</td>
</tr>
<tr>
<td>High</td>
<td>High</td>
<td>Increased confidence in quality.</td>
</tr>
</tbody>
</table>

- **Coverage models complement each other.**
- Mutation testing adds value in terms of test suite qualification.
- **No single coverage model is complete on its own.**

Case Study: Constrained Pseudo-random Test Generation
- Include:
  - An advanced FIFO testbench architecture
  - Self-checking: scoreboards, monitors
- Design size reduction: reduce the size of the FIFO to 2 entries
  - Demonstrates that fewer tests are needed to get coverage, with the same corner cases
  - Directed tests no longer work; promote parametrization in the testbench

HW assertions:
- **Combinatorial** (i.e. "zero-time") conditions that ensure functional correctness and must be valid at all times:
  - "This buffer never overflows."
  - "This register always holds a single-digit value."
- **Temporal** conditions that verify sequential functional behaviour over a period of time:
  - "The grant signal must be asserted for a single clock cycle."
  - "A request must always be followed by a grant or an abort within 5 clock cycles."
- Temporal conditions need a temporal assertion specification language, e.g. SystemVerilog Assertions or PSL/Sugar.

Property Types: Safety
- **Safety**: nothing bad ever happens.
  - The FIFO *never* overflows.
  - The system *never* allows more than one process to use a shared device simultaneously.
  - Requests are *always* answered within 5 cycles.
- Safety properties can be falsified by a finite simulation run.

Property Types: Liveness
- **Liveness**: something good will eventually happen.
  - The system *eventually* terminates.
  - Every request is *eventually* answered.
- In theory, liveness properties can only be falsified by an infinite simulation run.
  - Practically, we can assume that the "graceful end-of-test" represents infinite time: if the good thing has not happened by then, we assume that it never will, and thus the property is falsified.

Remember, simulation can only show the presence of bugs, but never prove their absence! If an assertion has never fired, what does this mean?
- It does not necessarily mean that it can never be violated, unless simulation is exhaustive, which in practice it never will be.
- It might not have fired because it was never evaluated.

Assertion coverage measures how often an assertion condition has been evaluated.

Completion Criteria
Common criteria for completion are:
- **Coverage targets** (coverage closure)
- **Target metrics** (e.g. bug rate drop)
- **Resolution of open issues**
- **Review**
- **Regression results**

A typical sign-off flow asks in turn: coverage complete? bug rate dropped? no open issues? review done? clean regression? Only then is the design ready to ship.

Example DUV
Kerstin Eder, TVS

Example DUV Specification
- **Inputs:**
  - `wr` indicates valid data is driven on the `data_in` bus
  - `data_in` is the data to be pushed into the DUV
  - `rd` pops the next data item from the DUV in the next cycle
  - `clear` resets the DUV
- **Outputs:**
  - `data_out_valid` indicates that valid data is driven on the `data_out` bus
  - `data_out` is the data item requested from the DUV
  - `empty` indicates that the DUV is empty
  - `full` indicates that the DUV is full

Example DUV Specification: Black Box Verification
- The design is a FIFO.
- Reading and writing can be done in the same cycle.
- Data becomes valid for reading one cycle after it is written.
- No data is returned for a read when the DUV is empty.
- Clearing takes one cycle; during clearing, read and write are disabled, and inputs arriving during a clear are ignored.
- Data written to a full DUV will be dropped.
- The FIFO is 8 entries deep.

Properties of the DUV (black box view):
- (P) `empty` and `full` are never asserted together.
- (P) After `clear` the FIFO is empty.
- (D) After writing 8 data items the FIFO is full.
- (D) Data items move through the FIFO unchanged, in terms of both data content and data order.
- (D) No data is duplicated.
- (D) No data is lost.
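Ignoring the cycle-level timing details, the black-box rules above can be captured in a small executable reference model that a scoreboard would compare the DUV's outputs against. This is an illustrative Python sketch, not the actual testbench (which would be written in an HDL/HVL):

```python
from collections import deque

class FifoModel:
    """Reference model of the DUV: `depth` entries deep, writes to a full
    FIFO are dropped, reads from an empty FIFO return no data."""

    def __init__(self, depth=8):
        self.depth = depth
        self.q = deque()

    def clear(self):
        self.q.clear()           # after clear the FIFO is empty

    def write(self, data):
        if len(self.q) < self.depth:
            self.q.append(data)  # data written to a full FIFO is dropped

    def read(self):
        return self.q.popleft() if self.q else None  # no data when empty

    @property
    def full(self):
        return len(self.q) == self.depth

    @property
    def empty(self):
        return len(self.q) == 0
```

Because the depth is a parameter, the same model supports collapsing the FIFO to 2 entries, the complexity-reduction technique discussed next.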
Assertions:
- Distinguish between protocol properties and design properties/coverage.
- Protocol coverage is more easily re-usable.

Basic Techniques to Combat Complexity
- **Collapse the size of the FIFO to only 2 (or 4) entries** and demonstrate the impact on verification effort; parametrization of the design and testbench is important.
- **Reduce the width of the FIFO's data path**: show that this reduces the complexity of data coverage and helps formal verification.

The Mechanics of an Advanced Test Bench
- Components: a stimulus generator (constrained `addr`/`data`), a driver (active or passive), the design under test, monitors, checkers/scoreboards, assertions, functional coverage, and code coverage.
- [Diagram: the interaction between these components around the DUT, from stimulus generation through driving, monitoring, assertion checking, and coverage collection.]

Some Hardware Verification Examples: CPU Verification
- Instruction stream generator → assembler → CPU RTL, compared against a CPU C model. (How accurate is the model?)

Some Hardware Verification Examples: USB Verification
- Packet generator → driver → DUT → response, checked by a scoreboard, with assertions and coverage.

Bubble Sort "Proof of Concept" for SW Testing
- **Program specification**
  - Input: lists of integers, floats, ascii, etc.
  - Reject lists of mixed types
  - Convert unsorted lists to sorted lists
- **Can we test the program with constrained input generation?**
  - Generate valid and invalid inputs
  - Direct generation towards corner cases
  - Check outputs for correctness, without re-writing an identical checker program
  - Measure what we have tested

Results of the Bubble Sort "Proof of Concept"
- A list generator produces lists of integers, floats, ascii, etc., constrained towards corner cases: empty lists, equal values, reverse ordering, and error cases (mixed integers, floats, ascii).
- Checkers confirm that the output list is ordered and that the output list's contents equal the input list's contents.
- Coverage metrics measure what has been tested.

Virtual System-Level Test Environment
- An event stream generator drives sensor inputs into the software under test; the actuator outputs, log files and metrics feed the checkers.

Virtual System-Level Checkers
- Assert "never do anything wrong": always fail safe.
- Assert "always respond correctly": if A & B & C occur, then check that X happens; assertion coverage ("check that A & B & C occurs") comes for free.
- Analyse log files: look for anomalies; did the actuator outputs occur in the correct order?

Functional Coverage
- Requirements coverage
- "Cross-product" coverage

A cross-product coverage model is composed of the following parts:
1. A semantic **description** of the model (story)
2. A list of the **attributes** mentioned in the story
3. A set of all the **possible values** for each attribute (the attribute value **domains**)
4. A list of **restrictions** on the legal combinations in the cross-product of attribute values

A **functional coverage space** is defined as the Cartesian product over the attribute value domains.
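The four parts above can be sketched directly. This Python sketch uses a made-up FIFO-access story; in particular, the restriction shown is purely illustrative and is not implied by the FIFO specification elsewhere in these slides.

```python
from itertools import product

# 1. A semantic description of the model (the "story").
description = "FIFO operations crossed with the fill level at which they occur"

# 2 + 3. The attributes and their value domains.
domains = {
    "op":    ["read", "write"],
    "level": ["empty", "partial", "full"],
}

# 4. Restrictions pruning illegal combinations (illustrative only).
def legal(task):
    return not (task["op"] == "read" and task["level"] == "empty")

# The functional coverage space: the Cartesian product of the domains,
# minus the combinations the restrictions rule out.
coverage_space = [
    task
    for task in (dict(zip(domains, values)) for values in product(*domains.values()))
    if legal(task)
]
```

Each remaining dictionary is one coverage task; a monitor would mark tasks as hit while simulation runs.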
Situation Coverage

Safety Compliance (asureSIGN)
- **Managing requirements**: importing and editing requirements
- **Decomposing requirements to verification goals**
- **Tracking test execution**: automating the import of test results; automating the accumulation and aggregation of test results
- **Impact analysis**: managing changes in requirements and tests
- **Demonstrating compliance to DO254 & DO178C**
- **Managing multiple projects**

asureSIGN™ at the Heart of HW/SW V&V
- Requirements sources: Excel, Doors, Jira, and other requirements engineering tools.
- Verification result sources: SystemC simulation; hardware simulation (coverage from Cadence, assertions from Mentor, Aldec, etc.); directed test results; formal verification (OneSpin); automated SW test tools; Matlab; lab results; and a manual API.
- Results are connected through XML, UCIS and Run APIs.

Decomposing Requirements to Features and Tests
- Import requirements (Doors, Excel, Word, etc.)
- Edit requirements
- Map requirements to verification goals
- Sign off a requirement with a manual test (e.g.
in the lab)

Tracking Test Execution: Automating the Import of Test Results
- Results flow into asureSIGN through the Accellera UCIS standard, XML, an API, or manual entry, covering manual testing, software test tools, and hardware verification results (simulation and formal).

Automating the Accumulation and Aggregation of Test Results
- Accumulate results over multiple regressions.
- Record results from each test.
- Aggregate results through the hierarchy.
- Define and track against interim milestones (based on the percentage of requirements tested).

Demonstrating Compliance
- Safety standards addressed include DO254/178C, ISO26262, IEC 60601, IEC 61508, EN 50128 and IEC 61513.
- Export XML for import back into Doors, etc.
- Export a PDF report for audit; select the level of detail.
- `Pid` is a unique reference to the requirement in the external tool.
- Export metadata such as tool version numbers, configuration data and data owners.
**Introduction**
- Verilog HDL is a Hardware Description Language (HDL).
- An HDL is a language used to describe a digital system, for example, a computer or a component of a computer.
- The most popular HDLs are VHDL and Verilog.
- For analog systems there is AHDL; for mixed-mode systems, MAST-HDL (Saber).
- Verilog programming is similar to C programming; VHDL programming is similar to Pascal (some say like Ada).
- Verilog is an IEEE standard.

**Why Use HDL?**
- NO OTHER CHOICE: for large digital systems, gate-level design is dead.
- With millions of transistors on a digital chip, it is impossible to design at the gate or transistor level.
- HDL offers the mechanism to describe, test and synthesize such designs.
- Comments start with "//" for one line or "/* ... */" across several lines.
- A system is described by a set of modules (equivalent to functions in C).

**Explanation**
- In module `simple`, we declared A and B as 8-bit registers and C as a 1-bit register or flip-flop. Inside the module, the one `always` and two `initial` constructs describe three threads of control, i.e., they run at the same time, or concurrently. Within an `initial` construct, statements are executed sequentially, much like in C or other traditional imperative programming languages. The `always` construct is the same as the `initial` construct except that it loops forever as long as the simulation runs. The notation `#1` means to execute the statement after a delay of one unit of simulated time. Therefore, the thread of control caused by the first `initial` construct will delay for 20 time units before calling the system task `$stop` to stop the simulation. The `$display` system task allows the designer to print a message, much like `printf` does in C. Every time unit in which one of the listed variables' values changes, the `$monitor` system task prints a message. The system function `$time` returns the current value of simulated time.
<table>
<thead>
<tr>
<th>Time (ns)</th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>x</td>
<td>x</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>1</td>
<td>x</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
<td>x</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>8</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>9</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>10</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>11</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>12</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>13</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>14</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>15</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>16</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>17</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>18</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>Stop</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>

**Simulation output**

A first digital model in Verilog:
- A simple Register Transfer Level (RTL) example to show Verilog.
- The register A is incremented by one. Then the first four bits of B are set to the "not" of the last four bits of A.
- C is the "and" reduction of the last two bits of A
- Comments start with "//" for one line or "/* ... */" across several lines
- The two "initial"s and the "always" will run concurrently
- Statements with no #k delay are done at simulation time 0

- module simple;
- reg [0:7] A, B;
- reg C;
- initial begin: stop_at
-   // Will stop the execution after 20 simulation units
-   #20 $stop;
- end
- initial begin: Init
-   A = 0;
-   $display("Time A B C");
-   $monitor(" %0d %b %b %b", $time, A, B, C);
- end
- always begin: main_process
-   #1 A = A + 1;
-   #1 B[0:3] = ~A[4:7];
-   #1 C = &A[6:7];
- end
- endmodule

**Levels of Description**
- Switch Level:
  - layout of the wires, resistors and transistors on an IC chip
  - easiest to synthesize, very difficult to write, not really used
- Gate (Structural) Level:
  - logical gates, flip-flops and their interconnection
  - very easy to synthesize; a text-based schematic entry system
- RTL (dataflow) Level:
  - the registers and the transfers of vectors of information between registers
  - most efficiently synthesizable level
  - uses the concept of registers with combinational logic
- Behavioral (algorithmic) Level:
  - highest level of abstraction
  - description of the algorithm without hardware implementation details
  - easiest to write and debug, most difficult to synthesize
- We will focus on the RTL and structural levels in the lab

Lexical Conventions
• Keywords, e.g., module, are reserved and in all lower case letters. Verilog is case sensitive.
• Spaces are important in that they delimit tokens in the language.
• Numbers are specified in the traditional form of a series of digits with or without a sign, but also in the following form:
• `<size><base format><number>`
– `<size>`: number of bits (optional)
– `<base format>`: the single character `'` followed by one of the characters b, d, o and h, which stand for binary, decimal, octal and hex, respectively.
– `<number>`: contains digits which are legal for the `<base format>`
Examples:
• 549 // decimal number
• 'h 8FF // hex number
• 'o 765 // octal number
• 4'b11 // 4-bit binary number 0011
• 3'b10x // 3-bit binary, least significant bit unknown
• 5'd3 // 5-bit decimal number
• -4'b11 // 4-bit two's complement of 0011, i.e., 1101
Lexical Conventions
• String: a sequence of characters enclosed in double quotes, e.g., "this is a string"
• Operators (some examples):
– Arithmetic: +, -, *, /
– Shift: <<, >>
– Relational: <, <=, >, >=, ==, !=
– Logical: &&, ||
• Identifier: equivalent to a variable name. Identifiers can be up to 1024 characters.
Program Structure
• A digital system is described as a set of modules
• Each module has an interface to other modules (connectivity)
• GOOD PRACTICE: place one module per file (not a requirement)
• Modules may run concurrently
• Usually there is one top-level module which invokes instances of other modules; it is usually called a stimulus block
MODULES
• represent bits of hardware ranging from simple gates to complete systems, e.g., a microprocessor.
• can be specified behaviorally or structurally (or a combination of the two)
• The structure of a module is the following:
– module <module name> (<port list>);
– <declares>
– <module items>
– endmodule
• `<module name>`: an identifier that uniquely names the module.
• `<port list>`: a list of input, output and inout ports which are used to connect to other modules.
• `<declares>`: specifies data objects such as registers, memories and wires, as well as procedural constructs such as functions and tasks.
• `<module items>`: may be initial constructs, always constructs, continuous assignments or instances of modules.
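The sized-literal notation described above under Lexical Conventions can be mimicked outside Verilog. The following is a small Python sketch of ours (the function name and error handling are not from the slides, and real Verilog tools accept more syntax, such as x/z digits):

```python
# Hedged sketch: parse a Verilog-style literal of the form
# <size>'<base><digits>, e.g. 4'b11 or 'h 8FF. Returns (width, value);
# width is None when the literal is unsized.

def parse_verilog_literal(text):
    bases = {'b': 2, 'o': 8, 'd': 10, 'h': 16}
    if "'" not in text:                      # plain decimal, e.g. 549
        return (None, int(text))
    size_part, rest = text.split("'", 1)
    rest = rest.strip()
    base = bases[rest[0].lower()]            # base character: b, o, d or h
    value = int(rest[1:].strip(), base)      # remaining digits in that base
    width = int(size_part) if size_part.strip() else None
    return (width, value)

print(parse_verilog_literal("4'b11"))    # (4, 3)
print(parse_verilog_literal("'h 8FF"))   # (None, 2303)
```

This only handles the positive forms shown in the examples; a sign would be applied to the whole literal, as in -4'b11.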
Behavioral Example: NAND
• Here is a behavioral specification of a module NAND:
- // Behavioral Model of a Nand gate
- // By Dan Hyde, August 9, 1995
- module NAND(in1, in2, out);
- input in1, in2;
- output out;
- // continuous assign statement
- assign out = ~(in1 & in2);
- endmodule

**Explanation**
- The ports in1, in2 and out are labels on wires. The continuous assignment assign continuously watches for changes to variables on its right-hand side; whenever that happens, the right-hand side is re-evaluated and the result immediately propagated to the left-hand side (out).
- The continuous assignment statement is used to model combinational circuits, where the outputs change when one wiggles the inputs.
- Here is a structural specification of a module AND obtained by connecting the output of one NAND to both inputs of another one.

**Structural Example: AND**
- module AND(in1, in2, out);
- // Structural model of AND gate from two NANDs
- input in1, in2;
- output out;
- wire w1;
- // two instances of the module NAND
- NAND NAND1(in1, in2, w1);
- NAND NAND2(w1, w1, out);
- endmodule
- This module has two instances of the NAND module, called NAND1 and NAND2, connected together by an internal wire w1.

**Instance**
- The general form to invoke an instance of a module is:
- <module name> <parameter list> <instance name> (<port list>);
- <parameter list> is a list of values of parameters passed to the instance.
- An example parameter passed would be the delay for a gate.

**Stimulus Block**
- The following module is a high-level module which sets some test data and sets up the monitoring of variables.
```verilog
module test_AND; // High level module to test the two other modules
reg a, b;
wire out1, out2;
initial begin // Test data
a = 0; b = 0;
#1 a = 1;
#1 b = 1;
#1 a = 0;
end
initial begin // Set up monitoring
$monitor("Time=%0d a=%b b=%b out1=%b out2=%b", $time, a, b, out1, out2);
end
// Instances of modules AND and NAND
AND gate1(a, b, out2);
NAND gate2(a, b, out1);
endmodule
```

**Output**
```
Time=0 a=0 b=0 out1=1 out2=0
Time=1 a=1 b=0 out1=1 out2=0
Time=2 a=1 b=1 out1=0 out2=1
Time=3 a=0 b=1 out1=1 out2=0
```

Procedural vs. Continuous Assignments
- A procedural assignment changes the state of a register: sequential, clock-controlled logic.
- A continuous statement is used to model combinational logic.
- Continuous assignments drive wire variables and are evaluated and updated whenever an input operand changes value.
- It is important to understand and remember the difference.

Physical Data Types
- For modeling registers (reg) and wires (wire):
- register variables store the last value that was procedurally assigned to them
- wire variables represent physical connections between structural entities such as gates; a wire does not store anything, it is only a label on a wire
- The reg and wire data objects may have the following possible values:
  - 0 logical zero or false
  - 1 logical one or true
  - x unknown logical value
  - z high impedance of tristate gate
- reg variables are initialized to x at the start of the simulation. Any wire variable not connected to something has the x value.

Register Sizes
- The size of a register or wire is given in the declaration:
- reg [0:7] A, B;
- wire [0:3] Dataout;
- reg [7:0] C;
- These specify registers A and B to be 8 bits wide with the most significant bit being the zeroth bit, whereas the most significant bit of register C is bit seven. The wire Dataout is 4 bits wide.
- initial begin: int1
-   A = 8'b01011010;
-   B = {A[0:3] | A[4:7], 4'b0000};
- end
- B is set to the first four bits of A bitwise or-ed with the last four bits of A and then concatenated with 0000.
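The same computation can be checked with ordinary integer bit operations. This is a sketch of ours in Python, not part of the original slides; the variable names mirror the Verilog example:

```python
# A = 8'b01011010; B = {A[0:3] | A[4:7], 4'b0000}
A = 0b01011010
A_high = (A >> 4) & 0xF        # A[0:3], the most significant nibble
A_low = A & 0xF                # A[4:7], the least significant nibble
B = (A_high | A_low) << 4      # OR the nibbles, concatenate 4'b0000 below

print(format(B, '08b'))        # 11110000
```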
B now holds a value of 11110000. The {} brackets mean the bits of the two or more arguments separated by commas are concatenated together.

Control Constructs
- Instead of C's { } brackets, Verilog HDL uses begin and end; the { } brackets are used for concatenation of bit strings.
- if (A == 4) begin B = 2; end
  else begin B = 4; end
- case (<expression>)
    <value1>: <statement>
    <value2>: <statement>
    default: <statement>
  endcase
- case (sig)
    1'bz: $display("Signal is floating");
    1'bx: $display("Signal is unknown");
    default: $display("Signal is %b", sig);
  endcase

Repetition
- The for statement is very close to C's for statement except that the ++ and -- operators do not exist in Verilog. Therefore, we need to use i = i + 1:
  for(i = 0; i < 10; i = i + 1) begin
    $display("i= %0d", i);
  end
- The while statement:
  i = 0;
  while(i < 10) begin
    $display("i= %0d", i);
    i = i + 1;
  end
- The repeat statement executes its block a fixed number of times:
  repeat (5) begin
    $display("i= %0d", i);
    i = i + 1;
  end

Blocking and Non-blocking Procedural Assignments
- A blocking assignment statement (the = operator) acts much like in traditional programming languages: the whole statement is done before control passes on to the next statement.
- A non-blocking assignment (the <= operator) evaluates all the right-hand sides for the current time unit and assigns the left-hand sides at the end of the time unit.

// testing blocking and non-blocking assignment
module blocking;
reg [0:7] A, B;
initial begin: init1
  A = 3;
  #1 A = A + 1; // blocking procedural assignment
  B = A + 1;
  $display("Blocking: A= %b B= %b", A, B);
  A = 3;
  #1 A <= A + 1; // non-blocking procedural assignment
  B <= A + 1;
  #1 $display("Non-blocking: A= %b B= %b", A, B);
end
endmodule

Timing and Delay Control
- If there is no timing control, simulation time does not advance. Simulated time can only progress by one of the following:
- 1.
gate or wire delay, if specified.
- 2. a delay control, introduced by the # symbol.
- 3. an event control, introduced by the @ symbol.
- 4. the wait statement.
- The order of execution of events in the same clock time may not be predictable.
- #10 A = A + 1;
- specifies a delay of 10 time units before executing the procedural assignment statement. The # may be followed by an expression with variables.

Events
- The execution of a procedural statement can be triggered by a value change on a wire or register, or by the occurrence of a named event. Some examples:
- @r begin // controlled by any value change in the register r
    A = B&C;
  end
- @(posedge clock2) A = B&C; // controlled by positive edge of clock2
- @(negedge clock3) A = B&C; // controlled by negative edge of clock3
- forever @(negedge clock3) // controlled by negative edge of clock3
  begin // This has the same effect as the previous statement
    A = B&C;
  end

Naming events
- Verilog also provides features to name an event and then to trigger its occurrence. We must first declare the event:
- event event6;
- To trigger the event, we use the -> symbol:
- -> event6;
- To control a block of code with the event, we use the @ symbol as shown:
- @(event6) begin <some procedural code> end

Wait Statement
- The wait statement allows a procedural statement or a block to be delayed until a condition becomes true:
- wait (A == 3)
  begin A = B&C; end
- The difference between the behavior of a wait statement and an event is that the wait statement is level sensitive, whereas @(posedge clock); is triggered by a signal transition, i.e., is edge sensitive.
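The level-sensitive vs. edge-sensitive distinction can be illustrated outside Verilog. The following Python sketch is ours (the function names and sample encoding are not Verilog semantics): an edge-sensitive process fires only on a 0-to-1 transition, while a level-sensitive wait fires whenever the condition is true.

```python
def edge_triggered(samples):
    """Return the indices where a positive edge (0 -> 1) occurs."""
    fires = []
    prev = samples[0]
    for i, cur in enumerate(samples[1:], start=1):
        if prev == 0 and cur == 1:
            fires.append(i)
        prev = cur
    return fires

def level_triggered(samples):
    """Return the indices where the signal level is 1."""
    return [i for i, v in enumerate(samples) if v == 1]

clock = [0, 1, 1, 0, 1, 1, 1, 0]
print(edge_triggered(clock))   # fires only at transitions: [1, 4]
print(level_triggered(clock))  # fires at every high sample: [1, 2, 4, 5, 6]
```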
GRAPH4CODE: A Machine Interpretable Knowledge Graph for Code

Ibrahim Abdelaziz a,*, Julian Dolby a, James P. McCusker b and Kavitha Srinivas a
a T.J. Watson Research Center, IBM Research, NY, USA
E-mails: ibrahim.abdelaziz1@ibm.com, dolby@us.ibm.com, kavitha.srinivas@ibm.com
b Rensselaer Polytechnic Institute (RPI), NY, USA
E-mail: mccusj2@rpi.edu

Abstract. Knowledge graphs have proven extremely useful in powering diverse applications in semantic search and natural language understanding. GRAPH4CODE is a knowledge graph about program code that can similarly power diverse applications such as program search, code understanding, refactoring, bug detection, and code automation. The graph uses generic techniques to capture the semantics of Python code: the key nodes in the graph are classes, functions and methods in popular Python modules. Edges indicate function usage (e.g., how data flows through function calls, as derived from program analysis of real code), and documentation about functions (e.g., code documentation, usage documentation, or forum discussions such as StackOverflow). We make extensive use of named graphs in RDF to make the knowledge graph extensible by the community. We describe a set of generic extraction techniques that we applied to over 1.3M Python files drawn from GitHub, over 2,300 Python modules, as well as 47M forum posts to generate a graph with over 2 billion triples. We also provide a number of initial use cases of the knowledge graph in code assistance, enforcing best practices, debugging and type inference. The graph and all its artifacts are available to the community for use.

Keywords: Knowledge Graph, Code Analysis, Code Ontology

1. Introduction

Knowledge graphs (e.g. DBpedia [1], Wikidata [2], Freebase [3], YAGO [4], NELL [5]) provide advantages for applications such as semantic parsing [6], recommendation systems [7], information retrieval [8], question answering [9, 10] and image classification [11].
Inspired by such knowledge graphs, we build GRAPH4CODE, a knowledge graph for program code. Many applications could benefit from such a knowledge graph, e.g. code search, code automation, refactoring, bug detection, and code optimization [12], and there are many open repositories of code to draw from. In 2019-2020 alone, there have been over 100 papers¹ using machine learning for problems that involve code, including problems that span natural language and code (e.g., summarizing code in natural language [13]). A knowledge graph that represents code along with natural language descriptions could enhance this research.

We illustrate the value of such a knowledge graph with Figure 1, which shows an example of code search: a developer searches for StackOverflow posts relevant to the Python code from GitHub in the left panel. That code uses sklearn to split data into train and test sets (train_test_split), and creates an SVC model (svm.SVC) to train on the dataset (model.fit). On the right is a real post from StackOverflow relevant to this code, in that the code performs similar operations. However, treating program code as text or as an Abstract Syntax Tree (AST) makes this similarity extremely hard to detect. For instance, there is no easy way to tell that model.fit is a call to the same library as clf_SVM_radial_basis.fit. Analysis must reveal that model and clf_SVM_radial_basis refer to the same type of object, and that the result of sklearn.train_test_split is an argument to fit in both programs. Such an abstract representation would ease understanding of semantic similarity between programs, making searches for relevant forum posts more precise. Yet most representations in the literature have been application-specific, and most have relied on surface forms such as tokens or ASTs.

* All authors contributed equally.
¹ https://ml4code.github.io/papers.html
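This motivating observation can be made concrete with a toy illustration of ours (not the paper's actual pipeline): the edge sets below are written by hand to show the intended abstraction, whereas GRAPH4CODE derives them by static analysis.

```python
# Two snippets that differ heavily at the token level but share the same
# dataflow once variable names are abstracted into fully qualified calls.

snippet_a = "X = train_test_split(d); model = svm.SVC(); model.fit(X)"
snippet_b = "a = train_test_split(df); clf_SVM_radial_basis = svm.SVC(); clf_SVM_radial_basis.fit(a)"

# Token-level comparison: the surface forms barely overlap.
tokens_a, tokens_b = set(snippet_a.split()), set(snippet_b.split())

# Dataflow-level comparison (hand-encoded here): identical edges.
flows_a = {("sklearn.model_selection.train_test_split", "sklearn.svm.SVC.fit")}
flows_b = {("sklearn.model_selection.train_test_split", "sklearn.svm.SVC.fit")}

print(tokens_a == tokens_b)  # False
print(flows_a == flows_b)    # True
```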
We present generic techniques for building representations of code that enable applications such as in Figure 1. We deploy state of the art program analysis that generalizes across programming languages, and we show scaling to millions of programs. From public code, we extract a graph per program, which captures that program in terms of data and control flow. As an example, the program in Figure 1 has a data flow edge from a node denoting the sklearn.train_test_split to a node denoting sklearn.svm.SVC.fit. It would also have a control flow edge from sklearn.svm.SVC to sklearn.svm.SVC.fit. Representing programs as data flow and control flow is crucial because programs that behave similarly can look arbitrarily different at a token or AST level due to syntactic structure or choices of variable names. Conversely, programs that look similar (e.g., they invoke a call to a method called fit) can have entirely different meanings. We build such representations for 1.3 million Python programs on GitHub, each analyzed into its own separate graph. While these graphs capture how libraries get used, they are not sufficient; there are semantics in textual documentation of libraries, and in forum discussions of them. We therefore link library calls to documentation and forum discussions. We identified commonly used modules in Python and their dependencies, and added canonical nodes to represent each class, function or method for 2300+ modules. These nodes are linked using information retrieval techniques to the documentation of method and classes (from usage documentation, when available, or from direct introspection), and to forum posts which specify the method or the class (an example of which is shown in Figure 1 for the sklearn.svm.SVC.fit method). We performed this linking for 257K classes, 5.8M methods, and 278K functions, and processed 47M posts from StackOverflow and StackExchange. 
To our knowledge, this is the first knowledge graph that captures the semantics of code and its associated artifacts. Our knowledge graph is an RDF graph, since RDF provides good abstractions for modeling individual program data such as named graphs, and SPARQL, the query language for RDF, provides extensive support for the sorts of operations that are crucial for understanding control and data flow in programs, especially transitivity. Each program graph is modeled separately, and we use existing properties from ontologies such as schema.org, SKOS, DublinCore, and the Semanticscience Integrated Ontology (SIO) to model relationships between program and documentation entities. In summary, our key contributions are as follows:
- A comprehensive knowledge graph for 1.3 million Python programs on GitHub
- A model to represent code and its natural language artifacts (Section 2)
- A language-neutral approach to mine common code patterns (Section 3)
- Connections from code to associated forum discussions and documentation (Section 4)
- Use cases showing the generality and promise of GRAPH4CODE (Section 5)

All artifacts associated with GRAPH4CODE, along with detailed descriptions of the modeling, use cases, query templates and sample queries for use cases, are available at https://wala.github.io/graph4code/.

2. Ontology and Modeling

Figure 2 shows GRAPH4CODE's graph model, which is based on the Semanticscience Integrated Ontology (SIO) [14] and Schema.org classes and properties. Classes and functions in code are modeled as g4c:Class and g4c:Function, and each has a URI based on its Python import path along with the python: prefix. For instance, the function pandas.read_csv in Figure 1 is python:pandas.read_csv. Each invocation of a function is an instance of sio:SoftwareExecution. Function invocations in code analysis link to their functions by RDFS label. However, since the return types of functions are often unknown in Python, this linkage is not always predictable.
For instance, in the right panel of Figure 1, the label of a call such as df.fillna reflects the analysis up to that point: because df holds the result of pandas.read_csv, its RDFS label would be pandas.read_csv.fillna. Modeling of code analysis is detailed in Section 3.

We model StackExchange questions and answers using properties and classes from Schema.org, while expressing the actual question text as sioc:content. We chose Schema.org because it models the social curation of questions and answers across the web. Mentions of specific Python classes, modules, and functions in the forum posts are linked to class and function URIs using schema:about. We also extract software snippets in forum posts, and use schema:SoftwareSourceCode to express source code snippets from questions and answers. The resulting knowledge graph allows the querying of usage patterns for Python functions and classes directly by URI, along with any forum post or documentation associated with them (see Section 5).

3. Mining Code Patterns

3.1. Extraction of Python Files from GitHub

Our starting point to build GRAPH4CODE is 1.38 million code files from GitHub. To extract this dataset, we ran a SQL query on the Google BigQuery dataset. The query looks for all Python files and Python notebooks from repositories that had at least two watch events associated with them, and excludes large files. Duplicate files were eliminated, where a duplicate was defined as having the same MD5 hash as another file in the dataset.

3.2. Code Analysis

Figure 3 extends the code in our running example (Figure 1) to illustrate how we construct the knowledge graph. In this example, a CSV file is first read using the Pandas library with a call to pandas.read_csv, with the call being represented as 1 on the right of the program. The object returned by the call has an unknown type, since most code in Python is not typed. Some filtering is performed on this object by filling in the missing values with a median with a call to where, which is call 2.
The object returned by 2 is then split into train and test with a call to train_test_split, which is call 3. Two subsets of the train data are created (4), which are then used as arguments to the fit call (6), after creating the SVC object in call (5). The test data is similarly split into its X and Y components and used as arguments to the predict call (7). Figure 4 illustrates the output of program analysis. For this, we use WALA, an open source library which can perform inter-procedural program analysis for Python, Javascript, and Java. The output of WALA is a control flow (depicted as green edges) and data flow (depicted as blue edges) graph for each analyzed program. The control flow edges are straightforward in this example and capture the order of calls; this is particularly useful when data flow need not be explicit, such as when a fit call (labeled 6) must precede a predict call (labeled 7) for the sklearn library.

We discuss the data flow shown in Figure 4 in more detail to illustrate assumptions in our analysis and to describe our modeling. This figure is a subset of the actual model, but we show all the key relations at least once. This graph shows two key relations that capture the flow through the code:
- flowsTo (blue edges) captures the notion of data flow from one node to another, abstracting away details like argument numbers or names. Many application queries can be expressed as transitive paths on flowsTo. We use SPARQL's property path operators extensively to accomplish this.
- immediatelyPrecedes (green edges) captures code order. Queries such as "predict calls not preceded by fit calls" can be expressed this way.

Node 1 in Figure 4 corresponds to the execution of read_csv. Since we are trying to understand mainly how code uses libraries rather than the libraries themselves, we assume any call simply returns a new object, with no side effects. In Figure 3, we see data used as the receiver to the where call (i.e., the object on which the call is made).
Since Python does not have methods per se, this call has two steps, as shown in Figure 4. (Predicates use the graph4code: prefix, http://purl.org/twc/graph4code/. The no-side-effects assumption may not always hold, since it is legal to modify "methods", e.g. data.where = lambda ..., but it allows us to scale the analysis without delving into the libraries themselves, which may be huge and, in languages like Python, are often written in C.)

We see from Figure 3 that data is used as argument 1 to train_test_split, labelled 3. The graph denotes argument numbers using blank nodes, as shown in Figure 4: there is a hasInput edge to the blank node which, in turn, has a hasValue edge to the argument number 1 and an isSpecializationOf edge to the call. All arguments have this representation, but we show only this one to minimize clutter. The call to train_test_split returns a tuple, which is split into train and test. This is captured in the first box labelled 4. Then each of train and test is split into its X and Y components for learning, which is shown in the second box labelled 4. The train components are used for fit, and the 4a nodes show the read of the test components from Dataset. The 4b nodes follow the test component to the predict call; the X field is a slice and so does not have a specific field. Note that this example does not have dataflow directly from fit to predict, but the graph also captures the ordering constraint between them, as shown in Figure 4.

Each program is analyzed into a separate RDF graph, with connections between different programs being brought about by rdfs:label links to common calls (e.g. sklearn.svm.SVC.fit). Note that this type of modeling accommodates the addition of more graphs easily as more code becomes available for analysis.

5. https://github.com/wala/graph4code/blob/master/extraction_queries/bigquery.sql
6. https://github.com/wala/WALA

4. Linking Code to Documentation and Forum Posts

4.1.
Extracting Documentation into the Graph

To generate documentation for all functions and classes used in the 1.3 million files, we first processed all the import statements to gather popular libraries (libraries with more than 1000 imports across the files). 506 such libraries were identified, many of which are included in the Python language itself. For each of these libraries, we tried to resolve their location on GitHub to get not only the documentation embedded in the code, but also associated usage documentation, which tends to be in the form of markdown or restructured text files. We found 403 repositories using web searches on GitHub for the names of modules.

The first source of documentation we collected is the documentation embedded in these code files. However, documentation in the source is insufficient because Python libraries often depend on code written in other languages. As an example, the tensorflow.placeholder function is defined in C, and has no stub that allows the extraction of its documentation. Therefore, to gather additional documentation for the popular libraries, we created a virtual environment, installed each library, and used Python's inspect module to gather the documentation. We currently only gather information about the latest version of the code; this is a limitation of the graph that we will address in future work. This step yielded 6.3M pieces of documentation for functions, classes and methods in 2300+ modules (introspection of each module brought in its dependencies). The extracted documentation is added to our knowledge graph where, for each class or function, we store its documentation string, base classes, parameter names and types, return types and so on.
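The introspection step just described can be sketched with Python's standard inspect module. The sketch below is ours and uses a stdlib module (json) as the target rather than one of the 2300+ libraries the paper processes:

```python
# Hedged sketch: pull the pieces GRAPH4CODE stores (docstring, parameter
# names, base classes) for one function and one class via introspection.
import inspect
import json

# Docstring and parameter names of a function
doc = inspect.getdoc(json.loads)
params = list(inspect.signature(json.loads).parameters)

# Base classes of a class, via the method resolution order
bases = [b.__name__ for b in inspect.getmro(json.JSONDecodeError)][1:]

print(params[0])   # 's'
print(bases[0])    # 'ValueError'
```

In GRAPH4CODE, this information is then materialized as RDF triples rather than printed.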
As an example, the following triples show some of the materialized information about pandas.read_csv in GRAPH4CODE:

```turtle
py:pandas.read_csv dcterms:isPartOf py:pandas ;
    skos:definition "Read a comma-separated values ..." ;
    g4c:return_index 1 ;
    g4c:return_inferred_type py:pandas.DataFrame ;
    g4c:return py:pandas.read_csv/r/1 ;
    g4c:param py:pandas.read_csv/p/1 , py:pandas.read_csv/p/2 .
```

4.2. Extraction of StackOverflow and StackExchange Posts

User forums such as StackOverflow provide a lot of information in the form of questions and answers about code. Moreover, user votes on questions and answers can indicate the value of a question and the correctness of an answer. While StackOverflow is geared towards programming, StackExchange provides a network of sites around many other topics such as data science, statistics, mathematics, and artificial intelligence. To further enrich our knowledge graph with natural language posts about code and other documentation, we linked each function, class and method to its relevant posts in StackOverflow and StackExchange. In particular, we extracted 45M posts (questions and answers) from StackOverflow and 2.7M posts from StackExchange in the Statistical Analysis, Artificial Intelligence, Mathematics and Data Science forums. Each question is linked to all its answers and to all its metadata, such as tags, votes, comments and code snippets. We then built an elastic search index for each source, where each document is a single question with all its answers. The documents were indexed using a custom analyzer tailored for natural language as well as code snippets. Then, for each function, class and method, we perform a multi-match search over this index to retrieve the most relevant posts (a limit of 5K matches per query is imposed) and link them to the corresponding node in the knowledge graph.
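The linking step can be caricatured with a tiny scoring sketch of ours. The real system issues an Elasticsearch multi-match query over 47M indexed posts; here the posts and the crude term-frequency scoring are invented for illustration only:

```python
# Toy: rank forum posts for a fully qualified name by how often the
# components of the name appear in the post text.
posts = {
    101: "How to run SVC classifier after train_test_split in sklearn?",
    102: "Pandas read_csv parse dates question",
    103: "sklearn SVC fit raises ValueError on string labels",
}

def rank_posts(qualified_name):
    """Return post ids sorted by a simple term-frequency score."""
    terms = [t.lower() for t in qualified_name.split('.')]
    scores = {}
    for pid, text in posts.items():
        text_l = text.lower()
        score = sum(text_l.count(t) for t in terms)
        if score:
            scores[pid] = score
    return sorted(scores, key=scores.get, reverse=True)

print(rank_posts("sklearn.svm.SVC"))     # posts 101 and 103 match
print(rank_posts("pandas.read_csv"))     # only post 102 matches
```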
For example, the following question is linked to the SVC class:

    <https://stackoverflow.com/questions/a/47663694>
        rdf:type schema:Question ;
        schema:about py:sklearn.svm.SVC ;
        schema:name "How to run SVC classifier..." .

4.3. Extracting Class Hierarchies

As in the case of extracting documentation embedded in code, extraction of class hierarchies was based on Python introspection of the 2300+ modules; the graph records, for example, the subclasses of `BaseSVC`.

5. Knowledge Graph: Properties and Uses

As discussed earlier, a large literature exists on using various code abstractions such as ASTs, text, or even the output of program analysis for all sorts of applications such as code refactoring, code search, code de-duplication detection, debugging, enforcement of best practices, etc. The popularity of WALA, the program analysis tool used to generate the analysis-based representation of GRAPH4CODE, attests to the fact that numerous applications exist for this type of representation of code. To our knowledge, GRAPH4CODE is the first attempt to build a knowledge graph over a large repository of programs and systematically link it to documentation and forum posts related to code. We believe that by doing so, we will enable a new class of applications that combine code semantics as expressed by program flow with natural language descriptions of code.

5.1. Graph Statistics

Table 1 shows the number of unique methods, classes, and functions in docstrings (documentation embedded in code). These correspond to all documentation pieces we found embedded in the code files or obtained through introspection (see Section 4.1). Overall, we extracted documentation for 278K functions, 257K classes, and 5.8M methods. Table 1 also shows the number of links made from other sources to docstrings documentation.
Static analysis of the 1.3M code files created a total of 7.3M links (4.2M functions, 2.1M classes and 959K methods). We also created links to web forums in StackOverflow and StackExchange: GRAPH4CODE currently has 106K, 88K and 742K links from web forums to functions, classes and methods, respectively. This results in a knowledge graph with a total of 2.09 billion edges: 75M triples from docstrings, 246M from web forums and 1.77 billion from static analysis.

\textsuperscript{12} https://github.com/wala/graph4code/blob/master/extraction_queries/elastic_search.q
\textsuperscript{13} https://github.com/wala/WALA

Table 1
<table>
<thead>
<tr>
<th></th>
<th>Functions (K)</th>
<th>Classes (K)</th>
<th>Methods (K)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Docstrings</td>
<td>278</td>
<td>257</td>
<td>5,800</td>
</tr>
<tr>
<td>Web Forums Links</td>
<td>106</td>
<td>88</td>
<td>742</td>
</tr>
<tr>
<td>Static Analysis Links</td>
<td>4,230</td>
<td>2,132</td>
<td>959</td>
</tr>
</tbody>
</table>
Number of functions, classes and methods in docstrings and the links connected to them from user forums and static analysis. This results in a knowledge graph with 2.09 billion edges in total.

5.2. Querying GRAPH4CODE

This section shows basic queries for retrieving information from GRAPH4CODE. The first query returns the documentation of a class or function, in this case `pandas.read_csv`. It also returns parameter and return types, when known. One can expand these parameters (`?param`) further to get their labels, documentation, inferred types, and check if they are optional.

```
select ?doc ?param ?return where {
  graph <http://purl.org/twc/graph4code/docstrings> {
    ?s rdfs:label "pandas.read_csv" ;
       skos:definition ?doc .
    optional { ?s g4c:param ?param . }
    optional { ?s g4c:return ?return . }
  }
}
```

In addition to the documentation of `pandas.read_csv`, we can also get the forum posts that mention this function by appending the following to the query above.
This will return all questions in StackOverflow and StackExchange forums about `pandas.read_csv` along with their answers.

```
graph ?g2 {
  ?ques schema:about ?s ;
        schema:name ?q_title .
  ?a sioc:content ?answer .
}
```

Another use of GRAPH4CODE is to understand how people use functions such as `pandas.read_csv`. In particular, the query below shows, when `pandas.read_csv` is used, which `fit` functions are typically applied to its output.

```
select distinct ?label where {
  graph ?g {
    ?read rdfs:label "pandas.read_csv" .
    ?fit schema:about "fit" ;
         rdfs:label ?label .
    ?read graph4code:flowsTo ?fit .
  }
}
```

5.3. Uses of GRAPH4CODE

In addition to simple templates that people can use to query the knowledge graph, we describe three use cases for GRAPH4CODE here in some detail. Additional use cases can be found at https://wala.github.io/graph4code/.

5.4. Code Assistance: Next Coding Step

In this use case, we integrated GRAPH4CODE inside an IDE and used it for code assistance: finding the most commonly used next steps, based on the context of the user's own code. Context, in this query, means the data flow predecessors of the node of interest; in this case, we take a simple example of the single predecessor call that constructed the classifier. Figure 5 shows an example of a real Kaggle notebook, where users can select any expression in the code and get a list of the most common next steps along with the frequency with which each next step is observed. For example, in similar contexts after a `model.predict` call, data scientists typically do one of the following: 1) build a text report showing the main classification metrics (frequency: 16), 2) report the confusion matrix, which is an important step in understanding classification errors (frequency: 10), or 3) save the prediction array as text (frequency: 8). This can help users by alerting them to best practices from other data scientists.
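The frequency ranking behind this next-step suggestion can be sketched in plain Python over a toy set of dataflow successors. The edge data below is invented for illustration; in GRAPH4CODE the successors come from `graph4code:flowsTo` edges, and the grouping/ordering mirrors the SPARQL query's GROUP BY / ORDER BY / LIMIT.

```python
from collections import Counter

# Toy dataflow successors of a call of interest across several program
# graphs (invented example data, standing in for graph4code:flowsTo edges).
successors_per_graph = [
    ["sklearn.metrics.classification_report", "numpy.savetxt"],
    ["sklearn.metrics.classification_report"],
    ["sklearn.metrics.confusion_matrix"],
    ["sklearn.metrics.classification_report", "sklearn.metrics.confusion_matrix"],
]

def top_next_steps(graphs, limit=3):
    """Count how often each call follows the call of interest,
    counting each next step at most once per program graph."""
    counts = Counter()
    for calls in graphs:
        counts.update(set(calls))
    return counts.most_common(limit)

print(top_next_steps(successors_per_graph))
# [('sklearn.metrics.classification_report', 3),
#  ('sklearn.metrics.confusion_matrix', 2),
#  ('numpy.savetxt', 1)]
```

The frequencies reported in the IDE (16, 10, 8 in the Figure 5 example) are produced by exactly this kind of count over the repository's program graphs.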
In the example shown above, the suggested step of adding code to compute a confusion matrix is actually useful: the existing Kaggle notebook does not contain this step. We show in Figure 6 the SPARQL query used to support this use case. The query variable ?this is the node whose execution path is "sklearn.ensemble.RandomForestClassifier.predict", i.e., the point in the program where the user has paused to ask for next steps. ?pred refers to predecessors that flow into this node within the program and adds context to the call to predict; in this example it is just the constructor "sklearn.ensemble.RandomForestClassifier". ?next captures all the calls that tend to be called on the return type of ?this, ordered by counts across the graphs in the repository. For other code assistance use cases, please see [15].

5.4.1. Enforcing best practices

There is a long history in the programming languages literature of using static analysis to detect anti-patterns or enforce best practices for various application frameworks and systems code (e.g. [16], [17], and [18]). For instance, [18] described a set of anti-patterns for usage of Enterprise JavaBeans frameworks. Frequently, the detection of anti-patterns has been implemented using custom code over the output of static analysis, but see [19] for an approach that uses a custom declarative query language called PQL to detect anti-patterns. In most cases, detection of several hundred anti-patterns can be enabled with just a handful of query templates ([18]). We enable this use case easily with our knowledge graph. To illustrate how this might work, we describe a best-practices pattern for data scientists, which states that any data scientist who builds a machine learning model must train the model (with a fit call) on different data than the data ultimately used to test the validity of the model (with a predict call).
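The essence of this check can be sketched over a toy representation of the dataflow facts. The data structures below (dictionaries mapping models to the datasets flowing into their fit and predict calls) are invented for illustration; the paper's actual implementation is a SPARQL query over the knowledge graph.

```python
# Toy sketch of the train/test anti-pattern check. fit_flows and
# predict_flows map each model to the set of datasets that flow into
# argument 1 of its fit and predict calls, respectively (invented
# representation, standing in for the paper's SPARQL query).
def has_antipattern(fit_flows, predict_flows):
    """Flag models whose predict calls only ever see training data,
    i.e. no separate test set is used to validate the model."""
    flagged = []
    for model, train_data in fit_flows.items():
        predict_data = predict_flows.get(model, set())
        if train_data and predict_data and predict_data <= train_data:
            flagged.append(model)
    return flagged

# 'nbr' predicts only on its training data; 'good' also uses test data.
fits = {"nbr": {"df"}, "good": {"train"}}
predicts = {"nbr": {"df"}, "good": {"train", "test"}}
print(has_antipattern(fits, predicts))  # ['nbr']
```

The subset test captures the filtering described next: a model escapes the flag as soon as any dataset other than the training data reaches one of its predict calls.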
Note that it is perfectly reasonable for the user to also use the predict call to assess the goodness of the model on the training data. All that is required is that some other data also be passed to the predict call. Fortunately, SPARQL 1.1 is expressive enough to describe the anti-pattern easily over the knowledge graph. Basically, we look for any dataset that flows into argument 1 of a fit call and also flows into argument 1 of the predict call, and we filter out cases where two different datasets flow into predict calls on the exact same model. We ran this query over the knowledge graph and found 245 examples of this anti-pattern in the graph. Figure 7 shows one such result. Data flows from line 15 to the fit call on line 37 and to the predict call on line 40 on the same model nbr, but no test data is used to validate the model at all. We stress that this is just one example of the types of templates that can be built to enforce API-specific or even organization-specific best practices. As noted extensively in the literature, a small set of templates typically suffices to enforce many hundreds of anti-patterns. For data science scenarios alone, there are two other best practices we have implemented. One checks that users apply some type of hyper-parameter optimization technique when building models rather than setting hyper-parameters manually. A second checks that users compare more than one model for each dataset, because it is well known that no single algorithm works best on all datasets. Together these queries act as tutorials for how to construct such anti-pattern detection across a repository of code.

5.4.2. Debugging with StackOverflow

A common use of sites such as StackOverflow is to search for posts related to an issue with a developer's code, often a crash. In this use case, we show an example of searching StackOverflow using the code context in Figure 1, based on the highlighted code locations found with dataflow to the fit call.
Such a search on GRAPH4CODE does produce the StackOverflow result shown in Figure 1 based on links with the coding context, specifically the train_test_split and SVC.fit calls, as one might expect.

```sparql
select (count(?g) as ?c) ?next_label where {
  graph ?g {
    ?pred graph4code:flowsTo+ ?this .
    ?this graph4code:flowsTo+ ?next .
    ?next rdfs:label ?next_label .
  }
}
group by ?next_label order by desc(?c) limit 3
```

Fig. 6. Query to find the next step in a program

Suppose we had given SVC a very large dataset, and the fit call had memory issues; we could augment the query to look for posts that mention 'memory issue', in addition to taking the code context shown in Figure 1 into consideration. Figure 8 shows the first result returned by such a query over the knowledge graph. As shown in the figure, this hit is ranked highest because it matches both the code context in Figure 1 highlighted with green ellipses, and the terms "memory issue" in the text. What is interesting is that, despite its irrelevant title, the answer is actually a valid one for the problem. A text search on StackOverflow with 'sklearn', 'SVC' and 'memory issues' as terms does not return this answer in the top 10 results. Figure 9 shows the second result, which is the first result returned by a text search on StackOverflow. Note that our system ranks this lower because the coding context does not match the result as closely.

6. Related Work

To our knowledge, there is no comprehensive knowledge graph for code that integrates semantic analysis of code along with textual artifacts about code. Here we review related work around how code has typically been represented in the literature, what sorts of datasets have been available for code, and ontologies or semantic models for code.

6.1. Code Representation

A vast majority of work in the literature has used either tokens or abstract syntax trees as input representations of code (see [12] for a comprehensive survey on the topic).
When these input code representations are used for a specific application, the target is usually a distributed representation of code (see again [12] for a breakdown of prior work), with a few works that build various types of probabilistic graphical models from code. Few works have used data and control flow based representations of code as input to drive various applications. As an example, [20] used a program dependence graph to detect code duplication in Javascript programs, but the dependence is computed in an intra-procedural manner. Similarly, [21] augments an AST-based representation of code with local data flow and control flow edges to predict variable names or find the misuse of variables in code. [22] combines token-based representations of code with edges based on object uses and AST nodes to predict the documentation of a method. [23, 24] include partial object use information from WALA for code completion tasks, but the primary abstraction in that work is (a) a vector representation of APIs for Java SWT, used in machine learning algorithms such as best matching neighbors to find the next best API for completion [23], or (b) a Bayesian network which reflects the likelihood of a specific method call given the other method calls that have been observed [24]. [25, 26] employ a mostly intraprocedural analysis (with heuristics for handling interprocedural cases) to mine a large number of graphs augmented with control and data flow, for the purposes of code completion for Java API calls. This work is interesting because, like ours, [25] creates a large program graph database which models dependencies between parent and child graphs, from which a Bayesian model is constructed to predict the next set of API calls based on the current one.
Our work can be distinguished from prior work in this area in two key ways: (a) our work targets interprocedural data and control flow, in the presence of first class functions and no typing, to create a more comprehensive representation of code, and (b) we use this representation to drive the construction of a multi-purpose knowledge graph of code that is connected to its textual artifacts.

6.2. Code Datasets

Several recent research efforts focus on using machine learning for code summarization [27–29], code search [30] and models such as CodeBERT [31]. The datasets used by these approaches tend to be code- and task-specific. To the best of our knowledge, there is no work that tries to build a general knowledge graph for code, and we believe that these approaches can directly benefit from GRAPH4CODE. We, however, leave this for future work.

6.3. Semantic Models of Code

SemanGit [32] is a linked data dataset based on GitHub activities. Unlike GRAPH4CODE, SemanGit focuses on modeling user activities on GitHub rather than on understanding the code itself. CodeOntology [33] is an ontology designed for modeling source code.

How to overcome SVM memory requirement

I am using the SVM function (LinearSVC) in scikit-learn. My dataset and number of features is quite large, but my PC RAM is insufficient, which causes swapping, slowing things down. Please suggest how I can deal with this (besides increasing RAM).

In short, without reducing the size of your data or increasing the RAM on your machine, you will not be able to use SVC here. As implemented in scikit-learn (via libsvm wrappers) the algorithm requires seeing all the data at once. One option for larger datasets is to move to a model that allows online fitting, via the partial_fit() method. One example of an online algorithm that is very close to SVC is the Stochastic Gradient Descent Classifier, implemented in sklearn.linear_model.SGDClassifier. Through its partial_fit() method, you can fit your data just a bit at a time, and not encounter the sort of memory issues that you might see in a one-batch algorithm like SVC. Here's an example:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_blobs

# make some fake data
X, y = make_blobs(n_samples=100000, random_state=0)

# train on a subset of the data at a time
clf = SGDClassifier()
for i in range(10):
    subset = slice(10000 * i, 10000 * (i + 1))
    clf.partial_fit(X[subset], y[subset], classes=np.unique(y))
```

Fig. 9. Second search result for debugging SVC query

7. Conclusions

We presented GRAPH4CODE, a knowledge graph that connects code analysis with other diverse sources of knowledge about code such as documentation and user-generated content in StackOverflow and StackExchange. To demonstrate the promise of such a knowledge graph, we provided 1) a set of SPARQL templates to help users query the graph, and 2) initial use cases in debugging, enforcing best practices and type inference. We hope that this knowledge graph will help the community build better tools for code automation, code search and bug detection, and bring semantic web technologies into programming languages research.

References

[17] B. Livshits and T. Zimmermann, DynaMine: finding common error patterns by mining software revision histo-
Revolutionizing Enterprise API Management: Enhancing Security and Performance through Modularization and Modernization

Anvesh Gunuganti
USA

ABSTRACT

This paper emphasizes modularization and modernization strategies in enterprise API management and their importance for the security and performance of the organization. The digital world is an increasingly interlinked environment in which organizations can only prosper with efficient API management, so API systems must be highly secure and deliver the desired performance. Modularization is the process of dividing a complex system into manageable modules, while modernization entails replacing old infrastructure and processes with current standards. Through careful decomposition of complex systems and adoption of current technological advancements, organizations are better positioned to improve performance and address security concerns. This study covers topics related to modularization, the introduction of new technologies, security, and performance, and shows how to frame and solve API governance and optimization problems.

Keywords: Enterprise API Management, Modularization, Modernization, Security, Performance

Introduction

With the ever-increasing connectivity of the digital environment, enterprise API management has become indispensable because it enables seamless communication between multiple systems, which in turn helps organizations grow [1]. Security and performance are the qualities that make this possible. Security practices provide lasting protection by designing systems that are free of vulnerabilities, safeguarding vital information and resources from malicious attacks. Performance, on the other hand, is directly linked to user satisfaction and scalability through the efficiency and reliability of API interactions.
This review will focus on the core notions connected with modularization and modernization approaches within enterprise API management ecosystems. By decomposing intricate systems into manageable parts and adopting the latest technological improvements, organizations can strengthen security and improve performance and agility.

Overview of Enterprise API Management

Enterprise API management is a holistic framework comprising API governance, deployment, and maintenance within an organizational setting [2]. APIs thus become an essential bridge that allows systems of different natures to communicate and operate efficiently with each other. In the modern digital ecosystem, where software fuels most business processes, effective management of APIs is crucial, enabling smooth operations, innovation, and digital transformation.

Importance of Security and Performance

Security and performance are two foundational features of enterprise API management that determine its effectiveness and reliability [3]. Security measures not only protect classified information, privileged information, and critical resources from implied dangers but also shield organizations against all kinds of threats, weaknesses, and cyberattacks. The explosive growth of interconnected systems and the increasing sophistication of cybersecurity threats raise the stakes; organizations must protect sensitive data while preserving its integrity, confidentiality, and availability. A poorly performing API, meanwhile, can cause latency problems, downtime, and degradation of service quality, and as a result, organizational productivity and customer satisfaction decline.

Figure 1: Security Consideration in API Management
Source: Adapted from [4].
Research Question

How do modularization and modernization of enterprise API management impact security and performance?

Focus of the Review: Modularization and Modernization

The review highlights the application of modularization and modernization strategies within enterprise API management, in which security and performance play a vital role [5]. Modularization involves breaking up bulky monolithic systems into small, self-contained modules and components, fostering agility, flexibility, and reusability in the development and deployment of software. Converting monolithic systems into manageable subsystems simplifies development processes and prevents bottlenecks, thereby improving overall system durability and reducing vulnerability to external factors.

Figure 2: Modularization and Modernization Strategies [6]

In the same way, modernization is a range of activities intended to revive and enhance existing infrastructure, technologies, and processes to keep pace with modern standards and best practices. This may involve adopting cloud-native architecture models, containerization, microservices, and DevOps approaches to add scalability, agility, and resource efficiency. Through modernization programs, companies can use modern technologies, simplify API management, and build a sustainable platform that can meet changing market requirements and technological progress.

Literature Review

The literature review examines the intricacies of security, performance, and the principles of modularization and modernization in the context of enterprise API management [7]. It describes the growing maturity of API governance, from its emergence to its current situation, demonstrating that the field is very dynamic and that robust infrastructure is critical.
Definition and Evolution of API Management

This section looks closely at the historical path of API management, from its introduction to its present state of maturity. It covers the critical subjects, theories, and paradigms that have shaped the API management landscape, such as RESTful architecture and SOAP protocols. It also illustrates advancements like GraphQL and event-driven APIs, which enable organizations to adapt to the new age of API development and management. By presenting the historical route of API management, this part helps the reader understand what prompts its use and recognize the conditions, both challenges and opportunities, that have paved the way for its prominence today.

The historical development of API management is contextualized within the broader technological scenario, which helps us understand the factors driving this evolution [8]. It highlights the significant crossroads, specific technologies, and sector trends that triggered API adoption. Moreover, it looks into the contribution of API management to digital transformation projects, accelerated R&D processes, and business agility. This part of the section addresses the issue by studying historical patterns and new paradigms, so that readers can understand the process and survey the field of API management more comfortably.

Security Challenges and Solutions

The subsequent discussion focuses on comprehensive security approaches and safety measures specifically intended to prevent data breaches and provide protection within distributed enterprise API environments.
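One building block of such measures, the token-based authorization discussed next, can be illustrated in simplified form with an HMAC-signed token built from the standard library alone. This is a sketch of the sign-and-verify principle only, not a standards-compliant JWT implementation; the secret key and claim names are invented for the example.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; real keys are stored securely

def sign_token(claims: dict) -> str:
    """Encode claims and append an HMAC-SHA256 signature,
    loosely mirroring a JWT's payload.signature shape."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Recompute the signature and reject tampered tokens."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was altered
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_token({"sub": "user42", "scope": "read"})
print(verify_token(token))        # {'sub': 'user42', 'scope': 'read'}
print(verify_token("x" + token))  # None: tampered payload is rejected
```

The point of the sketch is that the server never trusts the claims as sent; it re-derives the signature from its own secret, so any modification of the payload invalidates the token.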
A fundamental part of this story is the implementation of up-to-date authentication and authorization protocols, like OAuth and OpenID Connect, which address end-user authentication problems. Besides, incorporating JSON Web Tokens (JWTs) for token-based authorization improves security by ensuring the confidential transmission and validation of user credentials [9]. In addition, encrypting data in transit through Transport Layer Security (TLS) protocols guarantees that the data cannot be accessed and read while it is being transmitted. On top of that, this discussion covers newer security standards and protocols, such as OAuth 2.0 mutual TLS (mTLS), along with API security frameworks like the OWASP Top 10, which are needed to address API security problems. They serve as orientation tools, giving companies extensive guidance on hardening their API ecosystems and staying resilient against growing threats and risks. Compliance with industry best practices and keeping up with the latest security innovations allow companies to respond efficiently to security risks and address the issues associated with API interactions preventively. To this end, implementing strong security measures and complying with established protocols will help organizations improve the resilience of their API ecosystems, encompassing the integrity, protection, and accessibility of sensitive data.

Performance Optimization Techniques

The research focuses on studying and identifying strategies for optimizing performance at both levels simultaneously, bringing structure to API ecosystems. These tactics cover request-level optimizations, such as caching, compression, and request batching, as well as architecture-level techniques, e.g., rate limiting, load balancing, and asynchronous processing.
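As a concrete illustration of one of these tactics, a token-bucket rate limiter can be sketched in a few lines. The parameter names and values below are illustrative assumptions, not drawn from the paper.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows short bursts up to
    `capacity` requests, then sustains `rate` requests per second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # [True, True, False]: the third immediate call exceeds the burst
```

An API gateway would typically keep one such bucket per client or API key and reject (or queue) requests when `allow()` returns False.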
This research methodically examines the main features and implementation details of each approach, guiding organizations in developing a blueprint for efficient growth and responsiveness across their API life cycles. In addition, the research explores rising trends and viewpoints in the performance optimization domain, emphasizing serverless computing, edge computing, and content delivery networks (CDNs). These pathways are promising prospects for future improvements in API efficiency and scalability: serverless computing can allocate resources dynamically and minimize cost; edge computing achieves high performance and better responsiveness by moving computation closer to the data source, reducing latency; and CDNs allow content caching closer to end users' locations, leading to more efficient distribution of content and faster data transmission. By examining these emerging practices and paradigms, this research equips businesses with valuable perspectives on the changing environment of performance enhancement in API ecosystems, allowing them to stay one step ahead and discover effective strategies for using new technologies for innovation and competitiveness.

**Rationale for Case Study Approach**

The case study technique was selected to provide detailed knowledge of real-life scenarios and their complexities, such as the advancement and administration of APIs in an enterprise environment. This approach allows an in-depth examination of specific cases, cultivating a sound knowledge of areas ranging from difficulties and strategies to the successful execution of API management. By concentrating on practical instances like Avio Aero, which is part of GE Aviation, the case studies demonstrate the complexities of API implementation from technical, organizational, and strategic perspectives.
**Selection of Relevant Case Studies**

The case studies were selected to balance broad relevance with concrete examples of API management practices within enterprises, especially in industries undergoing digital transformation. Notably, the discussion makes evident the relation between cases that apply API management best practices and business processes and outcomes. The cases put forward, for instance that of Avio Aero, a GE Aviation business, were selected based on their alignment with the research objective and their capability to suggest solutions to the challenges encountered in managing APIs in dynamic organizations.

**Case Study Analysis**

**Case Study 1**

The first case study is about adopting modern API-based architecture solutions in government offices to boost the performance, transparency, and accessibility of services [10]. It offers a broad perspective on APIs, divided into several subcategories, such as API design fundamentals and the advantages of API integration in government. It then presents concrete examples of tasks automated through APIs, such as online payments, permit applications, and citizen interaction with the government. The paper also describes important security and privacy issues that arise when using APIs in government contexts, together with mitigation strategies and data protection considerations. The case study also analyses how contemporary API-based solutions affect governmental efficiency and citizens' satisfaction; these solutions contribute to the standardization and delivery of services and respond to citizens' interests. Additionally, the study supports the scalability and innovation made possible by API-oriented methods, indicating that governments have room to adapt them in accordance with changing citizen requirements and technological growth.
Overall, this case study of wide API adoption in government processes gives a necessary understanding of the revolutionary potential of modernization strategies in public service.

**Case Study 2**

The second case study deals with the difficulties generated by large, complex systems in domains like e-banking, retail, transport, and telecommunications [11]. Besides their scale and complexity, these systems are characterized by defects that constrain productivity and flexibility. Therefore, the case study proposes modernization strategies to solve these problems. Modernization here means decomposing the systems into smaller independent modules, using approaches such as decomposition and Service-Oriented Architecture (SOA) that improve maintenance, scalability, and adaptability. Through modernization, companies can enhance system productivity and ensure smooth integration with third-party systems. The most significant advantage is the increased ability to streamline processes and respond quickly to business changes. The main content of the case study is a review of modernization applications in various industries. Through practical and effective strategy, the modernization approach efficiently addresses the complexities associated with large system projects. In addition, it highlights the advantages of modernization as a significant contributor to establishing and maintaining system performance and interoperability, which fits the dynamism of the current business climate [12].

**Comparative Findings of Case Studies**

The two case studies present different but complementary views on modernization and technology optimization. While the first case study concentrates on applying modern API-based architectures to facilitate government processes, the second is dedicated to considerable software system complexities, with modernization as the viable solution for system flexibility and scalability.
Despite their different contexts, the two case studies both emphasize the need for innovative action to address current issues, build skills, and sustain systems. By dissecting the tactics used to navigate a dynamic technology environment, they provide both theoretical and practical support for leading an organization to success.

**Findings and Discussion**

The analysis provides clear evidence of the contribution of modernization approaches such as service-oriented modernization and the adoption of modern APIs. Modularization is a crucial security tactic: it separates risk at the module level and enforces role-based access control. Modernization programs also streamline processing by simplifying architectures and improving resource utilization. Alongside these benefits, difficulties such as legacy-system integration and organizational resistance show why stakeholders must be prepared in advance.

**Security Enhancements Through Modularization**

Modularizing large software systems has proved far more effective for security than developers initially expected. By fragmenting large architectures into separate, independent components, risks become easier to detect, and a breach can be contained within a single component, leaving a much smaller attack surface than in the original monolith. Modularity also allows multiple security measures to be implemented at the module level, giving security policies more specific control and enforcement. With access restrictions applied at clearly identified interfaces, mechanisms can guarantee that only properly authorized modules reach sensitive data or system resources.
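The module-level access control described in the preceding paragraph can be sketched in a few lines. The roles, module, and method names below are purely illustrative assumptions, not taken from the study; the point is only that each module enforces its own policy at its interface, so a compromised caller cannot reach another module's internals.

```python
# Illustrative sketch: per-module, role-based access enforcement at the
# module interface. All names (Role, require_role, BillingModule) are
# hypothetical examples, not part of any real system described here.
from enum import Enum, auto
from functools import wraps

class Role(Enum):
    READER = auto()
    ADMIN = auto()

class AccessDenied(Exception):
    pass

def require_role(role):
    """Guard a module-interface method: callers must present `role`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, caller_role, *args, **kwargs):
            if caller_role != role:
                raise AccessDenied(f"{fn.__name__} requires {role.name}")
            return fn(self, caller_role, *args, **kwargs)
        return wrapper
    return decorator

class BillingModule:
    """An independent module; only its decorated methods are reachable."""
    def __init__(self):
        self._records = {}  # internal state, never exported directly

    @require_role(Role.READER)
    def get_invoice(self, caller_role, customer_id):
        return self._records.get(customer_id)

    @require_role(Role.ADMIN)
    def write_invoice(self, caller_role, customer_id, amount):
        self._records[customer_id] = amount
```

A breach of a reader-only client is then confined to read access on this one module, which is the attack-surface reduction the text describes.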
**Performance Benefits of Modernization**

Modernizing software systems, whether by adopting modern API-based architectures or by applying modularization techniques, produces remarkable performance results. Modernization actions that streamline system architectures, reduce complexity, and optimize resource utilization go a long way toward improving system responsiveness, throughput, and scalability [13]. Through APIs, for instance, data can move easily between system components, enabling efficient communication and swift integration with external services. Modernization also increases a system's agility and flexibility by allowing independent modules to scale separately with demand. Businesses benefit significantly from these performance optimizations, which deliver a better user experience, higher operational efficiency, and the capacity to meet the growing demands of modern digital ecosystems.

**Lessons Learned and Challenges**

The case studies and discussion above yield several lessons and expose several challenges in the modernization process. Modernization is not only a technological undertaking; it has organizational and cultural dimensions as well. Successful modernization projects require a well-planned procedure that involves users and is managed with techniques aligned to organizational goals. At the same time, modernization poses challenges that cannot be ignored, including the integration of legacy systems, resource constraints, and resistance to change.
These obstacles are best overcome through leadership, collaboration across functional areas, and a readiness to innovate and experiment.

**Conclusion**

In short, this research highlights the role modularization and modernization can play in enterprise API management: strengthening security, accelerating performance, and adopting contemporary technologies and strategies. By splitting complex systems into manageable units, organizations can improve their digital infrastructure, reduce waste and resource consumption, and increase efficiency. Among the key recommendations, stakeholder engagement, proactive leadership, and capacity building rank highest for achieving successful transformation. Investigating emerging topics and technologies, shaping the future of API management systems, and responding to industry change are promising areas for further research. Embracing a modularization and modernization strategy is a stepping stone to a stronger security posture and effective execution, and in turn opens new doors to scalability, agility, and interoperability. Modularization makes an organization more flexible and adaptable, so it can respond promptly to business issues and unpredictable market dynamics; modernization initiatives keep infrastructure and processes aligned with industry trends and best practices, helping organizations exploit cutting-edge technologies and stay relevant in the digital age.

**Summary of Key Findings**

Modularization and modernization, as the basis for revolutionizing enterprise API management, offer substantial improvements in security, speed, and reliability.
Modularization also allows an organization to segregate potential security risks within the relevant components and to implement tight security controls with proper enforcement of access-control mechanisms. This segmentation not only improves security but also eases internal security updates and maintenance, enabling proactive measures against new threats. Modernization initiatives bring complementary improvements: they streamline architectures, use resources more efficiently, and increase system flexibility, which raises throughput and scalability and ultimately strengthens an organization's ability to meet business needs and deliver an excellent experience to end users. Together, modularization and modernization form the backbone of an API system that is modern, reliable, and secure, supporting innovation and growth in today's digital ecosystem.

**Practical Recommendations**

Organizations should adopt modularization to achieve better security in API management: break monolithic architectures into smaller modules, apply granular security measures at the module level, and keep the interfaces between components simple. Comprehensive modernization programs, such as the adoption of API-based architectures and modular designs, are pivotal for boosting API performance; they encompass improving system architecture, using resources intelligently, and embracing current technologies such as cloud-native designs and microservices. Stakeholder engagement campaigns, change-management strategies, and forward-looking leadership are the primary factors in successful modernization and improvement programs.
Organizations should also fund training and capacity-building programs, since employees need to keep pace with changing technologies and methods [14].

**Future Research Directions**

Further research should explore the broader implications of modularization and modernization in enterprise API management, including the roles of organizational culture, scalability, and innovation. Emerging technologies, including edge computing and blockchain, may play both positive and negative roles in API management and deserve closer study, as do the authenticity, effectiveness, and compatibility of APIs and the security implications for the wider ecosystem. The use of AI and machine learning to improve API security and functionality remains largely unexplored; AI-backed strategies such as anomaly detection and predictive analytics in API administration are another promising direction. Interdisciplinary work spanning computer science, business, and the social sciences will help build a comprehensive understanding of the complexity of enterprise API management and the capacity to develop new solutions to new challenges.

**References**

3. Weir L (2019) Enterprise API Management: Design and Deliver Valuable Business APIs. Packt Publishing Ltd. https://books.google.com/books?hl=en&lr=&id=0OikDwAAQBAJ&oi=fnd&pg=PP1&q=two+foundational+pillars+underpinning+the+efficacy+and+reliability+of+enterprise+API+management&ots=XYTqIA678a&sig=nivEIE1KFJFiLSyb7scIs-9c8GU
Scaling Software Experiments to the Thousands

Christian Neuhaus\(^1\), Frank Feinbube\(^1\), Andreas Polze\(^1\) and Arkady Retik\(^2\)

\(^1\)Hasso Plattner Institute, University of Potsdam, Prof.-Dr.-Helmert-Str. 2-3, Potsdam, Germany
\(^2\)Microsoft Corporation, One Microsoft Way, Redmond, U.S.A.

Keywords: Massive Open Online Courses, Online Experimentation Platform, Software Engineering, Operating Systems, InstantLab, Windows.

Abstract: InstantLab is our online experimentation platform that is used for hosting exercises and experiments for operating systems and software engineering courses at HPI. In this paper, we discuss challenges and solutions for scaling InstantLab to provide experiment infrastructure for thousands of users in MOOC scenarios. We present InstantLab's XCloud architecture, a combination of private cloud resources at HPI and public cloud infrastructures via "cloudbursting". This way, we can provide specialized experiments using VM co-location and heterogeneous compute devices (such as GPGPUs) that are not possible on public cloud infrastructures. Additionally, we discuss challenges and solutions for embedding special hardware, providing experiment feedback and managing access control. We propose trust-based access control as a way to handle resource management in MOOC settings.

1 INTRODUCTION

Knowledge acquisition has never had it better: while our ancestors had to buy very expensive books, we can look up virtually any information on the internet. Universities follow that trend by making their teaching materials available online: lectures can be streamed and downloaded from platforms like iTunes U. Websites like Coursera and Udacity provide complete courses with reading materials, video lectures and quizzes, sometimes run directly by universities (e.g. Stanford Online or openHPI).
However, educational resources should not be limited to passively consumed material: 10 years of experience in teaching operating systems courses to undergraduate students at HPI have shown us that actual hands-on experience in computer science is irreplaceable.

InstantLab at HPI. InstantLab is a web platform that is used in our undergraduate curriculum to provide operating systems experiments for student exercises at minimum setup and administration overhead. InstantLab uses virtualization technology to address the problem of ever-changing system configurations: experiments in InstantLab are provided in pre-packaged containers. These containers can be deployed to a cloud infrastructure and contain virtual machine images and setup instructions to provide the exact execution environment required by the experiment. InstantLab's core component is a web application, through which users can instantiate and conduct experiments. The running instances of these experiments can be accessed and controlled through a terminal connection, which is set up from within the user's web browser (see figures 1, 2).

Figure 1: Browser-based access to experiments.

Figure 2: InstantLab: Live Experiments in the Browser.

Over several terms of teaching operating systems courses, InstantLab has proven to be a helpful tool: it allows us to offer various software experiments with very little setup work, ranging from elaborate kernel-debugging exercises and demonstrations of historic operating systems (e.g. Minix, VMS) to more general software engineering exercises.

2 SCALING TO THE THOUSANDS

While InstantLab is used in our undergraduate classes with great success, we aim to bring this level of hands-on experience and practical exercises to massive open online courses (MOOCs). This is where it is needed the most: students at regular universities usually have access to university-provided hardware resources.
Students of online courses, however, come from diverse backgrounds and are scattered all over the world; the only equipment that can be assumed to be available to the participants is a web browser. In this section, we review the state of the art of MOOCs and software experiments and identify the challenges that lie ahead.

2.1 State of the Art

Recently, massive open online courses have received plenty of attention as a new way of providing education and teaching. This is reflected by several web platforms supporting MOOCs (e.g. Coursera, Udacity, edX, Khan Academy, Stanford Online, openHPI, iversity). On the software side, new software frameworks (e.g. lernanta, Apereo OAE) and Software-as-a-Service platforms (e.g. Instructure Canvas) offer the technical foundation for MOOCs. In learning theory, the need for and usefulness of learning from practical experience has been recognized (Gründewald et al., 2013; Kolb et al., 1984). Therefore, most MOOCs incorporate student assignments complementing the teaching material, where results are handed in and used for feedback. Most of these assignments are quizzes, which can be easily evaluated by comparing the student's input to a list of correct answers. More complex assignments (e.g. calculations, programming exercises) are usually conducted by students handing in results (e.g. calculation results, program code). A solution for automated generation of feedback for programming exercises was proposed by (Singh et al., 2013). However, conducting programming assignments and software experiments on students' hardware creates several challenges: the heterogeneity of students' hardware and software makes it difficult to ensure consistent, reproducible experiment conditions, which makes software problems hard to troubleshoot (Willems, 2013). Additionally, licensing conditions and pricing of software products can limit availability and prevent use for open online courses.
Therefore, practical assignments in MOOCs are currently mostly limited to non-interactive tasks that do not offer students hands-on experience with practical software experiments.

2.2 Challenges and Chances

In this paper we present an architecture for flexible and interactive software experiments for massive open online courses. We build on our experiences with InstantLab at HPI, using pre-packaged software experiments that are executed in virtual machines, and present an architecture that leverages public cloud infrastructure resources to cope with the high user load. However, offering live software experiments at MOOC-typical scale is fundamentally different from a classroom scenario: on the one hand, the massive scale poses new challenges that have to be met; on the other hand, the large number of participants offers new opportunities to improve education and the platform itself. In this paper, we address the following issues:

- **GPU and Accelerator Hardware.** Embedding special hardware resources into cloud-hosted experiments is a difficult task. We show how new virtualization technology can assign physical hardware to different users and employ a compilation-simulation-execution pipeline to use scarce physical hardware resources only for tested and correct user programs (see section 3.3).
- **Automated Feedback & Grading.** Learning from experience with practical experiments is only possible when feedback on a student's performance is provided. To gather the required information, we propose methods for experiment monitoring using virtual machine introspection to monitor students' performance during assignments (see sections 4 and 5).
- **Automated Resource Assignment.** Access to expensive experiment resources should only be granted to advanced and earnest participants of a course. Traditional access control mechanisms fall short as they require manual privilege assignment.
Instead, we propose a trust-based access control scheme based on observed user behavior (see section 6).

3 XCloud ARCHITECTURE

This section provides an overview of our XCloud architecture, which allows us to set up, manage and maintain complex experiment testbeds of heterogeneous hardware on cloud infrastructure (see figure 3). Conceptually, it consists of three distinctive parts: the Middleware, the Execution Services and the Data Storage. The Middleware hosts a web application that allows users to start experiments and access virtual desktop screens. Users can connect to it using a web browser or arbitrary network protocols (e.g. SSH). The Middleware manages user identities and mediates their access to the actual experiments hosted by the Execution Services. It also controls the deployment of experiments on appropriate hardware resources. The Execution Services manage hardware resources for the execution of experiments and provide an infrastructure to monitor user activity by a variety of means, ranging from simple log mechanisms to more elaborate techniques such as virtual machine introspection. The Data Storage service is our reliable store for user data, experiment results and the activity log that holds all the information about the users' activities. These logs are the basis for our trust-based access control as described in section 6. Furthermore, the Data Storage holds virtual machine disk images encapsulating our experiments.

3.1 XCloud by Example

Within our InstantLab curriculum, we have two special teaching objectives that require the extension of the standard IaaS concepts and that are consequently incorporated into our XaaS model: network-related experiments and accelerator experiments. For network experiments, we extend the IaaS concept with an explicit notion of a (virtualized) network interconnect within the cloud. For accelerator experiments, we distinguish between different types of processing units (like x64 or Itanium) and dedicated accelerator boards (like Nvidia Tesla or Intel Xeon Phi).
As a proof of concept, we currently run our experiments on our XaaS implementation, which is based on HP's Converged Cloud and hosted within the FutureSOC-Lab at the Hasso Plattner Institute in Germany.

3.2 XIaaS Components

With XCloud, we introduce XIaaS components (see figure 4) which extend the IaaS abstraction to support our teaching objectives. A XIaaS component has the following characteristics:

**Network Interface.** XIaaS components can not only be accessed using standard network services provided by their operating systems, but can also be interconnected to form a desired network topology.

**Hardware Interface.** The hardware interfaces provided by XIaaS components allow users to access arbitrary ports of the virtual hardware, like screen connection, Ethernet, USB, debugger connections, etc. using tunneling protocols. Furthermore, they can be redirected to hardware interfaces of other XIaaS components, allowing for a variety of new usage opportunities.

**Physical Devices.** To support modern accelerator cards or other specialized hardware setups, InstantLab provides generic deployment mechanisms that use hardware descriptions to configure the hardware according to the needs of the usage scenario. Direct access allows applications to use all the capabilities of the GPUs and exploit their full performance. This is beneficial for us, since representative performance and high fidelity are especially important in our scenario: we want our students to learn about the performance characteristics and performance optimization techniques of accelerators (Kirk and Hwu, 2010; Feinbube et al., 2010). The overhead introduced by more sophisticated virtualization mechanisms could have a negative effect on execution speed and the range of supported features of the virtualized device.
The lack of mediation between virtual machines and physical hardware makes it impossible to support features that are usually inherent to virtualization technologies: VM migration, VM hibernation, fault-tolerant execution, etc. The required interposition could be realized by implementing means to stop a GPU at a given point in time, read its state, and transfer it to another GPU that will then seamlessly continue with the computation. For our scenario this is not as important, though, because student experiments do not require highly reliable execution: they can simply be repeated if they fail. Another important feature that is usually expected from virtualization is the idea of multiple VMs sharing physical hardware. For us, the amount of multiplexing that is required depends primarily on the number of accelerators that we can provide and the number of students that we want to allow to work with our MOOC exercise system concurrently. In an ideal world, we could provide every student with direct, exclusive access to one accelerator, allowing them the best programming experience. The drawback would be that we would then need to either restrict the number of students or dynamically increase the number of accelerators by relying on other cloud offerings, resulting in additional costs. The other extreme would be a situation where we only have a very small number of accelerators for our exercises. This might be the case because the accelerator type is novel and not provided by any cloud provider. To allow for realistic execution performance and high fidelity in such a scenario, the only option would be to use a job queue for the students' task submissions and execute one task after another on each accelerator. Since the user experience is rather limited in these situations, we described some techniques to improve both the user experience and the performance of MOOCs where many students are working with single unique physical experiments in (Tröger et al., 2008).
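One such improvement, filtering submissions with a cheap compile check before they ever reach the scarce accelerator's job queue, can be sketched as follows. This is only an illustration of the idea: Python's built-in `compile()` stands in for a real CUDA/OpenCL toolchain, and the function and result names are assumptions.

```python
# Sketch: only submissions that pass a cheap compile check on a generic
# worker are queued for the scarce accelerator, which then drains the
# queue one task at a time. All names here are illustrative.
import queue

def compile_step(source):
    """Cheap syntax check on a generic VM (stand-in for a real compiler)."""
    try:
        compile(source, "<submission>", "exec")
        return True
    except SyntaxError:
        return False

def run_pipeline(submissions):
    accel_queue = queue.Queue()  # jobs waiting for the single accelerator
    results = {}
    for student, source in submissions:
        if compile_step(source):
            accel_queue.put((student, source))
        else:
            results[student] = "compile error"  # fast feedback, no GPU used
    # The accelerator executes queued, known-compilable tasks in order.
    while not accel_queue.empty():
        student, source = accel_queue.get()
        results[student] = "executed on accelerator"
    return results
```

The design point is that broken submissions get immediate feedback without consuming any accelerator time at all.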
One possibility is to separate the compilation of the student's experiment code from the actual execution of the experiment. The compilation can be performed on generic VMs without dedicated accelerator hardware. This enables quick feedback during development without actually using the accelerator hardware. For running the compiled code, virtualization techniques should allow multiple users to work simultaneously with software experiments on the same physical hardware without interfering with each other. To support this scenario, GPUs need multiplexing capabilities. While GPU vendors are currently working with vendors of virtualization solutions like VMware (Dowty and Sugerman, 2009) and Xen (Dalton et al., 2009) on architectures for performant GPU virtualization, some researchers found ways to achieve this even today: Ravi et al. implemented a GPU virtualization framework for cloud environments (Ravi et al., 2011) by leveraging the ability of modern GPUs to separate execution contexts and to run concurrent kernels exclusively on dedicated GPU streaming processors (NVIDIA Corporation, 2012).

4 USER ACTIVITY MONITORING

The key difference between MOOCs and traditional schools or universities is the proportion between the number of students and the teaching staff. Thousands of users regularly sign up for open online courses which are offered and organized by only a small team. This proportion means that personal support work for individual participants, to assign experiment resources and provide tutoring and feedback, has to be kept at a minimum. To address this problem, we propose automatic feedback and grading mechanisms (see section 5) and automatic resource assignment (see section 6) based on the observed behavior of students on the platform. Both mechanisms require information about how users interact with the platform. In this section, we propose activity monitoring on the web platform and observation of active software experiments.
The web platform itself provides valuable information about student activity during a course: it can record events such as logins to the platform, access to video lectures, and the starting, finishing or abortion of experiments. This information is stored in the activity log (see figure 3) and evaluated to assess the students' continuity in following the contents of the course curriculum. In addition, we extend the activity monitoring to active software experiments, which in our approach run inside virtual machines. This enables us to gather the more detailed information required for feedback and grading, which is not only based on the outcome of assignments but can consider how the given task was solved. A monitoring solution for active experiments should be resistant to user manipulation (tamper-resistant), should not alter the execution of the experiment (transparent) and should have only a small performance impact (efficient). These criteria are not met by in-VM monitoring modules. Instead, we propose virtual machine introspection-based monitoring (Pfoh et al., 2009; Garfinkel et al., 2003). By implementing monitoring on the hypervisor level, it is protected against manipulation by the user and does not require changing the software images of experiments to install monitoring modules. Introspection on the hypervisor level faces the challenge of bridging the semantic gap (Chen and Noble, 2001): semantic information about data structures is not available from the operating system but has to be inferred. However, since we know the exact software versions in our experiment images, we can provide the necessary context information to interpret observations. While VM introspection can cover virtually all activity in experiments, certain events are of particular interest. The use of privileged operations (system calls, hardware access, I/O) can be efficiently monitored as they trap into the hypervisor; this gives a good overview of activity on the system.
Additionally, the memory can be monitored to detect loaded software modules or check the integrity of code sections. Further information can be gathered by monitoring the activity on the network link. Software support for VM introspection is available for different platforms and hypervisors: the open source library LibVMI is compatible with KVM and Xen hypervisors, while VProbes is offered for VMware solutions.

5 AUTOMATED FEEDBACK AND GRADING

To fully benefit from learning at scale, both the feedback and the grading procedures must be supported by automated tools. The quality of these tools must increase with the input of both present students and those from former courses. Since most people do not have outstanding autodidactic abilities, getting a large number of participants involved is crucial for MOOC-based courses. A good learning environment can only be provided if the discussion forums are populated with active people. Their collaboration in exercises and discussions of the learning material not only helps the students to get a better understanding of the course contents, but also documents the problems that students found in lectures and exercises, and thus is a good source of ideas on how to improve and extend the course in the future.

Table 1: Means and metrics for student achievements as a basis for feedback and grading.

<table>
<thead>
<tr> <th>Scope</th> <th>Means</th> <th>Metrics</th> </tr>
</thead>
<tbody>
<tr> <td><strong>Students (System Interaction Tasks)</strong></td> <td></td> <td></td> </tr>
<tr> <td>1.1 Knowledge</td> <td>Quizzes; clozes</td> <td>correct / incorrect answers</td> </tr>
<tr> <td>1.2 Actions</td> <td>VM introspection; scripts</td> <td>correct / incorrect VM states; violations</td> </tr>
<tr> <td><strong>Student Programs (Programming Tasks)</strong></td> <td></td> <td></td> </tr>
<tr> <td>2.1 Source code</td> <td>keyword scans; pattern detection; AST analysis</td> <td>correct / incorrect keywords and libraries</td> </tr>
<tr> <td>2.2 Actions</td> <td>integration tests; run-time monitoring; sensor data</td> <td>correct / incorrect results and states; see 1.2 (student actions) as well</td> </tr>
</tbody>
</table>

One of the most sensitive topics, though, is the grading of learning results. For the purposes of our lectures, students are graded by their knowledge and actions, as well as the source code and behavior of their respective programs. Table 1 gives an overview of the means that can be used to acquire information about their qualification and the corresponding metrics. We provide four levels of information that can be combined to evaluate and grade our students. The knowledge that the student acquired in the course can be automatically tested and graded using quizzes and clozes. Students get instant feedback about their understanding of the course material, and it is very straightforward to use this information to generate grades automatically.
Since our courses heavily rely on hands-on experience and on students directly manipulating system states, documenting those states is crucial. We use the VM introspection mechanisms to check whether or not a student achieved a desired system state (see section “Experiment monitoring”). Furthermore, special scripts provided by the course teaching team can be introduced and run inside the VM for additional state checks. Many user interactions are already measured for our trust-based access control technology (see section 6). The level of trustworthiness and the number of deviations from expected behavior can also be taken into account when it comes to direct feedback and grading of a student’s performance. Programming exercises offer additional information that can be used for student assessments: the source code and the run-time behavior of the students’ programs. Source code can not only be checked for correct compilation, but also analyzed in more sophisticated ways. Often it is required that students use predefined keywords or library functions to achieve the assigned task, while other keywords or library calls may be forbidden. Simple white- and black-lists and text scans can be used for these purposes. Building on such a checker, patterns can be investigated, e.g. whether an “open()” call is followed by a matching “close()” call. Doing this in a more elaborate way, though, requires working on the abstract syntax tree (AST) representation of the program code. Besides simple pattern detection, ASTs allow comparing solutions with one another: student programs can be clustered and compared with a variety of standard solutions. The actions that a running student program executes can be checked in the same fashion as the student’s own actions. Furthermore, integration tests can be applied to see whether the program’s behavior and results match the expectations. 
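As a small illustration of the white- and black-list source scan described above, such a check can be sketched in a few lines of Haskell. This is a minimal sketch; `checkSource` and the keyword lists are illustrative, not the platform's actual code:

```haskell
import Data.List (isInfixOf)

-- Hypothetical checker: every required keyword must appear in the
-- submission, and no forbidden keyword may appear.
checkSource :: [String] -> [String] -> String -> Bool
checkSource required forbidden src =
     all (`isInfixOf` src) required
  && not (any (`isInfixOf` src) forbidden)
```

For example, `checkSource ["open(", "close("] ["system("] studentCode` would accept a submission that uses the mandated file-handling calls and avoids the blacklisted shell escape. Pairing checks such as "every open() has a close()" would, as the text notes, require an AST rather than plain text scans.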
Language runtimes like the Java Virtual Machine also allow the use of sampling, profiling, reflection and other runtime monitoring mechanisms to evaluate the behavior of student programs in detail. If physical experiments are involved, sensor data provided by these setups can also be used. All the information acquired at those four levels can be used to provide immediate feedback and automated grading. 6 AUTOMATIC RESOURCE ASSIGNMENT Offering live VM-based software experiments to a large audience is a resource-intensive workload. Due to budget constraints of teaching institutions, the amount of these resources is limited. While cloud platforms can scale up to handle high-load situations, this also increases cost. In addition, specialized hardware (e.g. accelerator cards, GPGPUs) often cannot be used by different users simultaneously (i.e. no overcommitment) and is only present in limited numbers. Therefore, a resource assignment scheme is required that can automatically assign experiment resources to earnest participants of the course and detect and prevent misuse of the provided resources. To achieve a self-managing solution, we propose a trust-based access control (TBAC) scheme, which assigns resources to students based on their previous behaviour on the platform. TBAC is a dynamic access control scheme based on the notions of reputation and trust (Josang et al., 2007): reputation is information that describes an actor’s standing in a system. Based on such information, trust levels can express a subjective level of confidence that this actor will behave in a desired way in the future. These trust levels can be represented as values on a numerical scale \( x \in [0; 1] \), where low values indicate low confidence and 1 indicates certainty. 
These levels can be calculated based on personal experience with actors and can also include reputation information gathered from other actors (federated reputation system) or from a centralized authority (centralized reputation system). Based on these concepts, trust-based access control represents a dynamic extension to traditional models of access control (mandatory, discretionary, role-based) (Boursas, 2009; Chakraborty and Ray, 2006; Dimmock et al., 2004) that incorporates trust between actors into the access policy definitions. Policies can then specify access criteria based on a certain level of trust: access is granted if the requesting actor has a sufficient level of trust assigned, and denied otherwise. This concept has been proposed for use in pervasive computing (Almenérez et al., 2005) and P2P networks (Tran et al., 2005), and is used to govern user permissions on the popular website Stack Overflow. The advantage of using TBAC for resource assignment is the self-management of the system: trust levels are automatically derived from a user’s behaviour on the platform, using the information gathered through monitoring of interactions with the platform and experiments, which is stored in the activity log (see figure 5). Access policies for resource-intensive software experiments can then be specified using these trust levels. This makes policies specific to experiments, but not to users, eliminating the need for manual creation of user-specific access policies. The properties of the curriculum of online courses can be exploited in designing such TBAC systems and policies. We assume that the curriculum starts out with introductory lessons in which students familiarize themselves with basic material and software experiments are not yet required (see figure 6). In subsequent lessons, the content becomes more demanding and requires practical experiments, which should only be available to students who successfully completed the previous lessons. 
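The core trust-level arithmetic behind such policies can be sketched in a few lines of Haskell. All names here are hypothetical; the actual derivation of trust levels from the activity log is more involved than a single additive update:

```haskell
-- Trust levels live on the interval [0, 1]; updates are clamped so
-- a level never leaves that range.
type Trust = Double

clamp :: Trust -> Trust
clamp = max 0 . min 1

-- Hypothetical update: reward completed assignments with a positive
-- delta, penalise detected misuse with a negative one.
updateTrust :: Trust -> Double -> Trust
updateTrust level delta = clamp (level + delta)

-- An access policy grants an experiment when the student's trust
-- level meets the threshold attached to that experiment.
accessGranted :: Trust -> Trust -> Bool
accessGranted threshold level = level >= threshold
```

Because levels start at 0 and `clamp` prevents them from falling below 0, a fresh account cannot begin with more trust than an account that has misbehaved, which is what makes the scheme hard to circumvent by re-registration.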
We make use of these properties by employing a step-wise approach to building trust levels. To get access to the software experiments in a new curriculum stage, students have to build the corresponding trust levels by completing assignments in the previous stage. As trust levels start out at 0 and do not fall below this level, the mechanism cannot be circumvented by starting with a fresh account. 7 SUMMARY Massive Open Online Courses enjoy great popularity and are used by millions of users worldwide. However, the possibilities for practical assignments and hands-on exercises are mostly limited to simple quizzes. In this paper, we introduce InstantLab, a platform for running software experiments on cloud infrastructure and making them accessible through a browser-based remote desktop connection. To truly scale this approach to thousands of users, several challenges have to be met. We address the problem of embedding specialized accelerator hardware into virtualized experiments. As experiment resources are limited, they should be protected from abuse and assigned to users with an earnest learning intent; we employ an automatic resource assignment based on trust levels which are derived from users’ activities on the platform. Practical experiments require detailed feedback on the students’ actions to be effective; we draw on runtime virtual machine introspection to provide detailed feedback about assignments during and after experiments. REFERENCES
Abstract: This paper presents how Functional Programming (FP) helps to provide another formal semantics (a relation between the syntax and the model of computation) for Business Process Modeling (BPM), a semantics relatively different from object-oriented semantics. More precisely, it proposes a general methodology to model business processes using mathematical functions and higher-order functions. We describe the basic part of Business Process Modeling, behavioral semantics via Petri Nets (PN), and functional implementation of the models. We will also see how a business process model is translated into its equivalent form in Petri Nets and how these can be described through Functional Programming. Keywords: Functional Programming, Semantics, Business Process Modeling, Petri Nets. 1. INTRODUCTION The term "Business Process Modeling" was coined in the 1960s in the area of systems engineering by S. Williams, and it became popular in the 1990s. BPM is a way of describing processes within enterprises so that a model (an abstract representation which can be manipulated) can be analyzed and improved (Aguilar-Saven, 2004). A business process is viewed as a set of related, structured activities or tasks within an organization whose objective is to produce a specific product or service. A task needs to be finished before its deadline, or in a definite time, to work towards the goal. There are many techniques to model processes, such as flow charts, functional flow block diagrams, the Unified Modeling Language (UML), Business Process Model and Notation (BPMN), etc. Functional Programming (FP) is a style of programming in which programs are executed by evaluating expressions. Functional Programming focuses on simplicity and generality (Hughes, 1989); functions are first-class, which means that they can be passed as parameters to other functions or be returned as the result of a function. 
FP performs computation as the evaluation of mathematical functions, without changing state or mutating data. In comparison to Object-Oriented Programming (OOP), the order of execution is less important, functions have no side effects, recursion is used to iterate over collection data, and both "abstraction over data" and "abstraction over behavior" are supported. Semantics is the study of meaning (Lyons, 1977). In terms of programming language theory, semantics is the field concerned with the mathematical study of the meaning of programming languages. It describes the processes a computer follows when executing a program in a specific language. This can be done by describing the relationship between the input and output of a program, or by describing how the program executes on a certain platform, thereby developing a model of computation. In this paper, Functional Programming is used to propose a formal semantics for business models. Many research papers have been presented that discuss the concepts of Business Process Modeling with Petri Nets, workflow nets or object orientation, but few are based on Functional Programming. In particular, the iTask system (iTasks) is a task-oriented programming toolkit for programming workflow support applications in Clean. The rest of the paper is organized as follows: the next section introduces and discusses Business Process Modeling and Functional Programming; our proposition is presented in Section 3; finally, Section 4 concludes this paper by summarizing the main points introduced in the paper and its future aspects. 2. CONCEPTS AND STATE OF THE ART 2.1 Business Process Modeling A business process is a set of tasks and resources required to achieve some service. It can also be stated as a set of activities that, once completed, accomplish a goal. There are constraints and rules that have to be met. 
Basically, there are three types of business processes: (1) management processes, which govern the operation of a production system; (2) operational processes, which constitute the core business and create the primary value stream; and (3) supporting processes, which support the core processes. A business process can be decomposed into several sub-processes with specific attributes. A process model is a representation (graphical or textual) of business processes, given as a set of sequential or parallel process activities combined to achieve a common goal. Using a model, it becomes possible to find out how the system will behave and which properties it will have. There are different modeling languages that are used to model business processes, such as BPMN (Herden et al., 2015); the suitability of BPMN for Business Process Modeling has been studied using workflow patterns (Wohed et al., 2006), and business processes have been modeled through activity diagram patterns (Andre et al., 2014). (Mili et al., 2010) describe a classification of business process modeling languages. The two most used graphical notations for business processes are BPMN and the Unified Modeling Language Activity Diagram (UML AD), which are discussed in (Geambasu, 2012). As an example, Figure 1 models the behavior of an ATM machine for withdrawing money with the Business Process Modeling Language (BPML). The model shows how data flows between the different activities (represented as boxes), how conditions take place and how the flow of control proceeds. The activities have some data associated with them; the data related to the current activity and its result act as input to the next activity. The arrows show the transitions between activities, which involve the transfer of data as well as the flow of control, and also describe the change in the state of the activities. 
The diamond is used as a condition; it has two outputs, and the selection of the output depends on the output of the previous activity, which acts as an input to the condition. The semantics that are used to give a behavioral description of a business process model are Petri Nets and workflow nets. A Petri Net is represented as a bipartite graph given by a tuple $N = (P, T, A)$ where $P$ is the set of places, $T$ is the set of transitions and $A \subseteq (P \times T) \cup (T \times P)$ is the set of flow relations (Zhang et al., 2010). Petri Nets are a basic model of parallel and distributed systems. They were introduced by Carl Adam Petri in 1962 in his PhD thesis "Kommunikation mit Automaten". The basic idea behind them is to describe the changes of state in a system with transitions. Petri Nets are a strong technique for modeling and analyzing the behavior of systems where resource sharing, concurrency and synchronization are significant matters to take into account (Reisig, 1985). A classical Petri Net consists of place nodes holding resources, where the number of resources is denoted by the number of anonymous tokens on the places; transition nodes consuming and producing resources; and arcs between the nodes specifying dependencies between transitions and places. A transition is only enabled if all its input places have sufficient tokens, and an enabled transition can fire by consuming tokens along all input arcs while synchronously producing tokens along each output arc. Figure 2 gives an example of a Petri Net containing places ($S1$...$S7$) holding tokens, and transitions ($T1$...$T7$), joined by directed arcs. A workflow net is a Petri Net with two special nodes: a start place and an end place. In a workflow net, every transition is on a path from the start place to the end place. 
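The classical enabling and firing rule just described can be written down directly. The following is a minimal Haskell sketch assuming one token consumed or produced per arc; all identifiers are illustrative:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- A marking counts anonymous tokens per place; a transition lists
-- its input and output places.
type Place      = String
type Marking    = Map Place Int
data Transition = Transition { inputs :: [Place], outputs :: [Place] }

-- A transition is enabled when every input place holds a token.
enabled :: Marking -> Transition -> Bool
enabled m t = all (\p -> Map.findWithDefault 0 p m > 0) (inputs t)

-- Firing consumes one token per input arc and produces one token
-- per output arc, as in the classical Petri net rule.
fire :: Marking -> Transition -> Maybe Marking
fire m t
  | enabled m t = Just (foldr produce (foldr consume m (inputs t)) (outputs t))
  | otherwise   = Nothing
  where
    consume p = Map.adjust (subtract 1) p
    produce p = Map.insertWith (+) p 1
```

For instance, firing a transition from place S1 to place S2 on a marking with one token on S1 yields a marking with that token moved to S2, while `fire` refuses to act on a marking where some input place is empty.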
(van der Aalst, 1998) introduced workflow management as an application domain for Petri Nets. Figure 3 gives an example of a workflow net. For basic purposes, it is sufficient to consider classical Petri Nets to model business processes (van Hee et al., 2013). But sometimes, to evaluate processes, classical Petri Nets are not sufficient; this is why extensions of Petri Nets, known as high-level Petri Nets, are needed, especially Coloured Petri Nets (CPN) (Jensen and Kristensen, 2009). In CPN, places are defined with a type, that is, a colour set of tokens; anonymous tokens are replaced by individual (coloured) tokens carrying typed values of some inscription language, and multiple tokens may be found at the same place. Transitions and arcs are annotated with program code that operates on the token values. Incoming arcs are annotated with variables or patterns which bind values for the transition and its output arcs. A transition is thus parameterized by the values of its input tokens, can have additional binding values, and can have guards (conditions) that determine whether a given set of tokens is sufficient for firing the transition (Reinke, 2000). A workflow is a representation of a sequence of activities, declared as the work of persons, of a simple or complex mechanism, or of machines. A workflow can be considered a view or picture of real work. The flow may refer to a document, service or product that is being transferred from one step to another. 2.2 Functional Programming Functional Programming is a programming paradigm which models computation as the evaluation of expressions; programs are defined by a set of functions and immutable data. Functional Programming is a declarative programming model, which means programming is done with expressions or declarations instead of statements. Functional Programming requires functions to be treated like other values, so that they can be passed as arguments. 
A function is a relation or expression that involves one or more variables; formally, a function \( f \) can be defined as a subset of a Cartesian product of two sets \( X \) and \( Y \) such that for all \((x, y)\) and \((x', y')\) in \( f \), if \( x = x' \) then \( y = y' \) (Coquand, 2008). Functional Programming mainly focuses on "what information is desired" (the inputs) and "what transformations are required" (the actual logic). Languages that support Functional Programming include OCaml, Erlang, Haskell, Scala, etc. The main features of Functional Programming languages are: 1. Higher-order functions: functions that take other functions as their arguments and return a function as their result. 2. Immutable data: instead of altering the actual values, copies are created and the original is preserved. 3. Referential transparency: a computation yields the same value each time it is invoked. 4. Recursion: a function which calls itself; it is the only way to iterate in FP. The advantages of Functional Programming languages are code with fewer bugs, efficient parallel programming, support for nested functions, increased re-usability, better modularity, robust and reliable code, and better performance. The main disadvantage of Functional Programming languages is that they may require a lot of memory. In comparison to imperative programming, they provide better support for structured programming, and programs are shorter and easier to understand than their equivalent imperative programs. We have chosen Haskell as our tool, so we will now discuss it. Haskell is a purely functional programming language (a pure function has no side effects). The language is named after Haskell Brooks Curry, whose work in mathematical logic serves as a foundation for functional languages. In Haskell, evaluating a program corresponds to evaluating functions. The "map" function in Haskell is an example of a higher-order function. 
The "map" function takes as arguments a function \( f \) and a list of elements, applies that function to every element in the list, and produces a new list. The type signature and definition of "map" in Haskell are:

```haskell
map :: (a -> b) -> [a] -> [b]
map _ []       = []
map f (x : xs) = f x : map f xs
```

The type signature says that "map" takes a function from an "a" to a "b" and a list of a's, and returns a list of b's. For example, `map (+3) [2, 5, 9, 15]` returns `[5, 8, 12, 18]`. A type signature tells what the type of a value is; a type can be a base type such as `Int`, a function type `type -> type`, a list type `[type]`, or a tuple type `(type, type)`. As another example, a function using list comprehension:

```haskell
removeNonLowerCase :: [Char] -> [Char]
removeNonLowerCase s = [ a | a <- s, a `elem` ['a' .. 'z'] ]
```

The first line is the type signature. New data types can be defined with record syntax, for example:

```haskell
data Person = Person { firstName :: String
                     , lastName  :: String
                     , age       :: Int }
```

The benefits of Haskell are: (1) code is shorter, cleaner and highly maintainable; (2) higher-order functions reduce the amount of repetition; (3) it requires shorter development times; (4) list comprehensions make a new list based on an existing list; (5) it is less prone to errors, with higher reliability; (6) it is easy to learn, and programs are often easier to understand; (7) lambda expressions create functions without giving them explicit names. 3. PROPOSITION In this section we discuss Business Process Modeling using Functional Programming. Figure 4 shows the big picture of our proposition concept. Fig. 4. Big picture of our proposition concept. Earlier we already discussed the model of an ATM machine for withdrawing money (Figure 1). 
Now we will describe the same model in terms of a Petri Net as part of our contribution. Figure 5 shows how the different processes of the ATM model can be illustrated through a Petri Net; here we have split the conditional part into a pair of transitions. Fig. 5. Petri Net for the ATM of Figure 1. The activities are denoted by circles; each circle represents a place, which is simply a container where resources may be located. The resources can be any kind of data, and the number of resources is indicated by the number of anonymous tokens on the place. An arrow represents a transition, which takes resources from the previous place and provides them to the next place, the output place. A transition is only enabled when tokens are available on all its input places. When a transition fires, it consumes a token from every input place and provides a token to each output place. We use Coloured Petri Nets to differentiate tokens when multiple tokens are available in the same place. So far we have discussed how to represent business processes using Petri Nets. Now we will see how the different elements of a business model can be viewed in terms of Functional Programming: the activities, which are actions for a particular purpose; the information, which is the data related to a particular activity; and the links, which are the connections between the different activities. More precisely, activities in the model can be represented by functions, information by data types, and the transitions from one activity to another by higher-order functions or list comprehensions. Table 1 shows the relation of the different elements of a model with respect to Functional Programming. 
<table> <thead> <tr> <th>Business Process</th> <th>Functional Program</th> </tr> </thead> <tbody> <tr> <td>Activity</td> <td>Function (Data -&gt; Data)</td> </tr> <tr> <td>Information</td> <td>Datatype</td> </tr> <tr> <td>Link</td> <td>Higher-order function</td> </tr> </tbody> </table> Table 1. Translation of Business Processes into Functional Programs. As we will be using the Haskell language as a tool for implementing our proposition, we now define how Haskell can be mapped to Petri Nets and present general support code for the proposition. In Haskell, we have places with names, types and initial markings, and transitions with names, optional conditions (guards) and arrows from input places to output places. Transitions cannot be mapped to functions directly, because for a given marking a single transition might be enabled for several combinations of input tokens, and the mapping could give transition functions of different types; for ease of understanding, we use the same type for all transition functions in this proposition. The Haskell code for a single-step simulation with a textual trace of the Petri Net model (Figure 5) is presented in parts, with explanations, as follows. In Figure 6, the third line defines the type "Mark" as a list over the places in the net, representing the marking on them; each place may contain multiple tokens. "m0 :: Mark" declares that "m0" has type "Mark" and represents the initial marking, with a token on the first place. Lines 12, 13, 14 and 15 introduce synonyms for existing data types: "Name" holds the name of a transition, which is a string value; for the condition part, a "Condition" takes a parameter of type "Mark" and returns a Boolean result; for the action part, an "Action" takes a "Mark" and returns a result of the same type; and a "Transition" combines three things: a Name, a Condition and an Action. 
In Figure 7, "ts :: [Transition]" declares that "ts" is a list of "Transition" values. The general form of a transition is \[ (\text{Name}, \text{Condition}, \text{Action}) \] where "Name" is the name of the transition, "Condition" is an anonymous function which checks that the input place is not empty, and "Action" moves the first marking from the input place to the next output place; the process continues until all markings on the input place have been exhausted or some condition is no longer met, causing the transition to stop. Figure 8 shows the function "printTS", which prints the names of the transitions: it prints the transitions if they are available, and otherwise waits and checks for other transitions to print. Figure 9 shows the function "enabledTs", which determines the available transitions in the net: depending on the availability of tokens on the input places, it checks whether a transition is possible and, if its condition is met, includes it among the enabled transitions. Figure 10 shows the "applyTs" function, which compares the name given by the user with the names of the enabled transitions; if a name matches, the corresponding Action is applied to the model, otherwise the simulation ends and the user has to start it again. The graphical specification of the communication structure (the net) is represented as a textual specification so that the programmer has easy access for experiments and data manipulation. A limitation of this code is that it has a single marking on the input place, and every transition performs a single, predefined action on that marking. Depending on the user, the code can be further improved by allowing different markings of different types (Int, Long, String, Double) on different places. 
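Since Figures 6–10 are not reproduced here, the following is a hedged reconstruction of the types and of "enabledTs"/"applyTs" as the text describes them; the identifiers follow the paper's text, but the concrete marking representation (one token list per place) is an assumption:

```haskell
-- Reconstruction of the paper's single-step simulation types: a
-- transition bundles a name, an enabling condition, and an action
-- on the marking.
type Mark       = [[Int]]           -- token lists, one per place
type Name       = String
type Condition  = Mark -> Bool
type Action     = Mark -> Mark
type Transition = (Name, Condition, Action)

-- enabledTs keeps the transitions whose condition holds for the
-- current marking.
enabledTs :: [Transition] -> Mark -> [Transition]
enabledTs ts m = [ t | t@(_, cond, _) <- ts, cond m ]

-- applyTs fires the transition with the given name if it is
-- enabled, returning the new marking.
applyTs :: [Transition] -> Name -> Mark -> Maybe Mark
applyTs ts n m =
  case [ act | (n', cond, act) <- ts, n' == n, cond m ] of
    (act : _) -> Just (act m)
    []        -> Nothing
```

For instance, a transition "T1" whose action moves the head token from the first place to the second is enabled exactly when the first place is non-empty, matching the condition/action pairing the text describes.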
The code could also be modified so that the user can provide markings at runtime, which could be saved to a text file and used later. The code can be found at: https://github.com/SAINIAbhishek/Haskell-Program. 4. CONCLUSION AND FUTURE WORK In this paper we have presented a new semantics for Business Process Modeling and shown how to model business processes using Petri Nets. We have also presented Business Process Modeling using Functional Programming, for which we used the Haskell language, with example code supporting our proposition. We have drawn a comparison between Functional Programming and Object-Oriented Programming, discussed important features, advantages and disadvantages of Functional Programming, and discussed the benefits of the Haskell language. For future work, we plan to develop a generator that automatically translates a business process model into Functional Programming concepts. More experiments on more complex models are needed to determine the approach's efficiency. We also propose to develop a Graphical User Interface (GUI) to facilitate the use of the approach as a tool/simulator for translation purposes. REFERENCES
**New Product Development Processes within the Australian Software Industry**

Jessica Severin, Paul Harrison and Heath McDonald
Deakin University
Track 15: Services Marketing

**Abstract**

It is widely acknowledged that new product and new service development are critical to the survival and performance of firms today. New Product Development (NPD) research emphasises the importance of employing models that aim to reduce new product risk and increase the likelihood of success. The bulk of this theory comes from traditional manufacturing enterprises; however, more recently researchers have begun to investigate whether these well-established models and NPD practices are applicable to services (Johne and Storey 1998). While there has been a great deal of research into the manufacturing industry, little is known about product development processes within firms that are characterised by high pressure, dynamic markets and intense competition. This research investigates new product development processes within the software industry. This industry has, at face value, very different characteristics from traditional manufacturing firms. Through in-depth interviews with a range of stakeholders in the software industry, this research found that common, linear NPD models are not appropriate in the development of new software. The development of new products was identified as requiring a more iterative and, at the same time, organic or unstructured procedure than that commonly presented in NPD models.

**Introduction and Antecedent Studies**

The notion of New Product Development is no longer a strategic option; rather, it is a necessity as businesses are faced with intense global competition, shorter product life cycles, major organisational re-structuring and rapid technological developments (Craig and Hart, 1992).
Firms that fail to foster innovation will be confronted with static profit growth, changing markets and intense competition that will render their products obsolete (Urban and Hauser, 1993). New product failure rates have been reported as high as 80 to 90 per cent (Hanna, Ayers, Ridnour and Gordon, 1995). Less pessimistic reports, however, suggest that for those products that reach the market, 30 to 40 per cent fail (Barclay and Benson, 1990; Booz-Allen and Hamilton, 1982; Crawford, 1977).

More recent research has focused on comparing new product successes and failures. These studies are based on the premise that factors differentiating successes from failures can only be identified via comparative studies. Several notable success/failure comparison studies include the British SAPPHO (Rothwell, 1972), NewProd in Canada (Cooper, 1979a; 1979b; 1980) and the Stanford Innovation Project (Maidique and Zirger, 1984). Despite differences in methodology and geographic location, these studies have identified consistent factors that distinguish successful from unsuccessful new industrial products. These factors include a market orientation, clear decision points and the use of cross-functional teams (Cooper 1994), and they form the basis of formalised NPD models.

The purpose of employing NPD models is to reduce the risk associated with new product introduction and to increase the likelihood of commercial success through implementation of a step-wise procedure (Crawford, 1997; Lehmann and Winer, 1994; Nijssen and Lieshout, 1995). Moenaert and Souder (1990) suggest that NPD activities act as discrete information processing units that progressively decrease the level of uncertainty attributed to new product development. NPD models are structured to ensure that the quality and progress of the new product project can be assessed. The question remains, though, as to whether one standard NPD process suits all industries and product types.
Although most models are very similar, specific models have been developed for goods (e.g. Cooper 1994) and services (Scheuing and Johnson 1989). In this research, the value of standardised NPD processes is recognised, but the focus is on the extent to which industry characteristics influence the applicability of those processes. The research is separated into two stages. The first examines how the industry perceives itself as different from other industries. The second stage looks at how well standard NPD processes suit the Australian software industry, based on the reported practices of those involved in it.

**Methodology**

New product development research has predominantly been quantitative in nature. The majority of notable studies have used quantitative surveys of large samples to investigate new product development practices. This type of research aims to identify and measure what variables are related to NPD outcomes, with particular emphasis on success criteria (Craig and Hart, 1992). Furthermore, the majority of NPD studies have focused on investigating a variety of industries, in an attempt to gain a general overview of the factors critical to success. The lack of industry-specific research and the uniqueness of some industries provide the justification for investigating new product development processes within the software industry.

This research is exploratory in nature, given that little research has focused on NPD practices within software development firms. The aim of this exploratory research is to investigate the unique characteristics that affect NPD within the software industry, rather than to quantify it. The purpose is to develop an understanding of the NPD processes within the software industry in order to provide a clear direction for subsequent research.
This research project is formulated as a qualitative, in-depth exploration into NPD processes within the software industry, in line with calls for more qualitative research into this field (Craig and Hart, 1992). This study follows the precedent established by Boag and Rinholm (1989) and Cooper and Kleinschmidt (1991), who also employed in-depth personal interviews to study NPD processes within high technology and leading industrial firms respectively. The researchers suggested that the complexity surrounding NPD practices required in-depth analysis to capture patterns and findings that a survey could not.

Purposive sampling was used to select a sample of software firms operating within Melbourne. The sampling frame was a CD-ROM called Contacts, a database compiled by IDG Communications Pty Ltd. The database, which includes a list of information technology firms operating within Australia, allows a search to be conducted to narrow the options. The sampling frame is the result of narrowing the search to a list of software firms based in Melbourne. There are many software firms that do not develop software as part of their business; therefore, the frame was further narrowed to software development firms. The in-depth personal interviews were conducted with a sample of 10 employees from different software companies that develop custom-built systems.

Damanpour (1992) states that the size and age of a firm are two characteristics that affect a firm’s level of innovativeness and hence its NPD experience. As a result, the cases were selected on the basis of size and age because these properties are relevant to the topic under investigation. Respondents who participated in the study consisted of persons responsible for product development, such as marketing managers, project managers, developers and general managers. Interviews were not limited to one type of employee or department because new product development is a multifunctional endeavour.
The interviews ran for approximately 60 minutes and were tape recorded to identify patterns quickly, verify viewpoints, find supporting quotations and avoid biased reporting. The respondents requested that the identity of their organisations remain anonymous. The research explored the new product development processes typically conducted within software firms to determine whether these processes deviate from those stipulated within Kotler’s (2000) NPD model, a common NPD model presented in a substantial number of undergraduate marketing courses in Australian universities. The aim is to determine whether any deviations from this model exist, why they exist, and whether they are valid and reasonable or just poor practice. The researchers asked a range of questions specifically linking Kotler’s eight-stage new product development model to the new product processes typically conducted within the software industry. Kotler’s model is comprised of the following stages: idea generation; screening; concept development and testing; marketing strategy; business analysis; product development; market testing; and commercialisation.

**Findings and discussion**

The first stage of this research was to identify aspects of the software industry that respondents felt were unique, and to isolate the impact those unique aspects had on NPD. A selection of the main issues identified in the interviews is summarised in Appendix One, through indicative quotations from individual interviewees. The second stage was to identify the processes by which these firms undertook NPD and compare these, specifically, to the Kotler model.

**Stage One: Implication of Software Industry Characteristics on Product Development**

The results support the assertion that inherent characteristics within the software industry distinguish it from manufacturing industries.
Software appears to have more in common with a service industry, which leads to the question of whether custom-built software is a service, rather than a product. The findings show that software development exhibits many similarities to three of the characteristics that Brentani (1990) states are unique to services: intangibility, inseparability and variability.

Software is largely intangible because the intention of the user is to purchase a solution or a particular outcome. The physical product is of no importance. As Brentani (1990) suggests, the implication of intangibility is that new products can be developed and modified with ease. This notion is supported by the continual upgrades and modifications witnessed within the software industry. Software development, like NSD, evolves as an ongoing process that alters as clients’ needs change. Although clients are not present during the production of the software solution, there is evidence of a very close interaction between the buyer and seller. The close contact during the development process enables the software to be customised to suit specific requirements, which is also indicative of services.

The findings indicated that the software development process is viewed differently from developer to developer. Programming is considered a craft, and the lack of industry standards means that people cannot make assumptions about the way in which systems function. The outcome of the software solution varies from developer to developer because the input required for production depends on the type of skill set employed. This variability evident within software and services raises the issue of standardised practices. It appears that the software industry is pushing for standards in order to minimise the level of variability. If software embodies many of the same characteristics that make services different to tangible products, it seems appropriate to classify it as a service.
Given the similarities between software and services, one would expect NSD models to be applicable to software development. The NSD model proposed by Scheuing and Johnson (1989) has followed the same generic process as NPD. The element that distinguishes the NSD model is the design of the delivery system. Designing the delivery system is, to an extent, relevant to custom-built software development. Essentially, the software firm is providing the client with a solution, not a tangible product. The client is purchasing knowledge and expertise; therefore, it is the provision of a service. The aim of the software firm is to design the way in which it will deliver this knowledge to the customer. Despite this, Scheuing and Johnson’s (1989) NSD model follows the same process as Kotler’s and many other NPD models. The respondents indicated that the generic NPD process is not suitable for firms that develop software. This suggests that the processes prescribed in both NPD and NSD models do not fit the development process typically conducted by software firms.

NPD and NSD models focus on testing the new idea prior to commercialisation. Within the manufacturing and service industries, the cost of building a working prototype and testing its validity and acceptance in the market is very low relative to the ongoing costs required to start full-scale production. As a result, the NPD processes within both sectors are devised to ensure that the product or service has been extensively tested, prior to launch, to determine whether the project will go ahead. As emphasised throughout the literature, this reduces the risk of wasting the large investment required to produce and maintain the product or service. Firms have the ability to postpone or cancel the idea without incurring substantial costs. The development of custom-built software does not appear to follow Kotler’s NPD model or the model proposed by Scheuing and Johnson (1989).
Software product ideas are generated and conceptualised with a specific user in mind. Once the customer has evaluated the new product idea and agreed to implement the system, construction of the product begins. The firm has entered into a contract with the buyer to supply the proposed system, therefore actual full-scale development commences at the construction stage. Unlike traditional manufacturing and service firms, companies that provide custom-built software do not develop the product, test it in the market and then decide whether to proceed with the idea. Firms that develop custom-made software systems do not need to test it in the market prior to launch because they are delivering a product that is customised to a client’s specific requirements; thus market acceptability is a given. Kotler’s NPD model also does not account for the customisation element evident within software development.

Software firms need an NPD process that enables the firm to delay commitment to a final design as late as possible. This ensures that any new information can be used to make rapid adjustments to the final product. Respondents emphasised the problems associated with employing a development process that is conducted sequentially. It could be the case that software firms which complete NPD stages sequentially risk developing obsolete products that fail to meet customer needs and exploit new technologies. It is vital that software development occurs as an iterative process, rather than a sequential one, to reduce time to market. This supports Takeuchi and Nonaka (1986) and Crawford’s (1997) view of performing NPD activities simultaneously. They suggest that NPD stages should overlap to deal with increasing time pressures, which is extensively supported by the findings. The benefits associated with being first to the market encourage NPD processes that shorten delivery times.
**Stage Two: Applicability of the Kotler NPD Model**

The respondents indicated that the idea generation, idea screening, concept development, marketing strategy and business analysis stages were practiced during the software development process. Despite the apparent relevance of the product development and market testing stages, the respondents indicated that these were not practiced within software development firms. Furthermore, the respondents suggested that commercialisation was not relevant to firms that develop custom-built software because it is not launched into the market and made available to all customers. Commercialisation, therefore, was difficult to identify as a discrete stage, given that the consumers had been using the prototype, and advances on it, all along.

A software product is developed with the intention of being delivered to the client. Respondents indicated that their software firms would not engage in construction of the product and then decide not to proceed with it. This is due to the high investment required to develop the initial product relative to other products. The software firm tests the new product concept early in the discussion with the client. Furthermore, if there are problems with the product after deployment, it is easy to make the necessary changes and rectify the problem once the system has been installed.

The respondents emphasised that software firms require a new product process that is characterised by flexibility and shorter NPD cycle times. Time pressures govern these firms, which must introduce new products ahead of the competition. The respondents indicated that a sequential approach to NPD is inefficient for firms operating within turbulent and unpredictable business environments. The findings support the proposition that certain characteristics of the software industry make it different from traditional manufacturing firms, and thus affect product development processes.
**Conclusions and Future Research**

Research studies have demonstrated that adherence to formal NPD models improves the likelihood of developing successful products and services. The key question is whether the respondents are justified in their rejection of these models for software development. There are certainly idiosyncrasies within the software industry that make it unique. The research findings and literature on the software industry have shown that certain elements within the industry affect product development. The rejection of Kotler’s traditional NPD model by those who participated in the study was not based on any specific theoretical grounds; rather, it was the respondents’ previous experience and technical skills in software development, rather than marketing, that influenced the process followed in NPD. As such, this research is not able to conclude that traditional NPD and NSD models would not be advantageous in the development of software. These models are not rejected by software firms as any deliberate strategy; rather, the software firms involved in this research had developed their NPD strategies iteratively, and therefore further research in firms that do employ a version of these models would be required to draw a distinction.

Despite this, the results of this research project highlight that software development is a unique endeavour. There are characteristics indicative of software development that require different NPD practices. The implication of this finding is that certain new product development practices and processes may be industry dependent. Formal NPD processes may need to be modified and adapted according to the nature of the industry, providing support for further industry-specific research. A more iterative model, as presented below in Figure One, might be more appropriate for this particular industry.
**Figure One: New Product Development Model in Software Industry**

**PHASE ONE: CONCEPTUAL**
- Idea Generation
- Idea Screening
- Concept Development
- Marketing Strategy
- Business Analysis

**PHASE TWO: PRACTICAL**
- Prototype Testing
- Construction
- Deployment

**Appendix One: Unique Aspects of the Software Industry as identified by Interviewees**

<table>
<thead>
<tr>
<th>Unique Aspect</th>
<th>Illustrative Quote</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1. Knowledge Based Endeavour</strong></td>
<td>“Physical Assets like plant and equipment within software are negligible … people can get hold of a CD of your software and they’ve got everything you’ve got.”</td>
</tr>
<tr>
<td><strong>2. High Labour Intensity</strong></td>
<td>“The software industry is largely labour intensive, so you don’t have large set-up costs. Anyone with a good idea can set up a software company.”</td>
</tr>
<tr>
<td><strong>3. The Importance of Entry Time</strong></td>
<td>“The difference between being first in the marketplace or last could only be a matter of weeks, but the impact on your company could be catastrophic. You have to be there at the right time and with the right product.”</td>
</tr>
<tr>
<td><strong>4. Lack of Industry Standards</strong></td>
<td>“We need to get standards about the way we do things in the software industry. In the construction industry you have 100 years of building regulations and standards; in software we rely on unique skills that are different from developer to developer. We need standards so that you hire a developer that will be able to provide some return on investment.”</td>
</tr>
<tr>
<td><strong>5. Ongoing Buyer-Seller Relationships</strong></td>
<td>“Clients keep changing their minds about what they want and they are under pressure from globalisation. So once you put a system in and build the core structure, nine months later you have to build another version of it. Our systems and new product processes have to be able to accommodate this.”</td>
</tr>
<tr>
<td><strong>6. Importance of Innovation</strong></td>
<td>“Innovation is vital, there is no question about it. Software is one of those things that will continually move and move. If someone gets the upper hand they can render products obsolete overnight. Software firms are constantly trying to build better products with greater functionalities.”</td>
</tr>
<tr>
<td><strong>7. Customisation of Standard Platform</strong></td>
<td>“Our core business is to sell platforms, solutions and upgrades and that gives us continuous revenue…but on top of that we have this whole opportunity area where our sales guys use customer relationships to focus on where new product activities should be heading.”</td>
</tr>
</tbody>
</table>

**References**

Rothwell, R. (1972). "Factors For Success in Industrial Innovations", from *Project SAPPHO-A Comparative Study of Success And Failure in Industrial Innovation*. Sussex: S.P.R.U.
{"Source-Url": "https://researchbank.swinburne.edu.au/file/906a534f-c8af-4c8d-b2cd-489f23d5dfc6/1/PDF%20(Published%20version).pdf", "len_cl100k_base": 4145, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 17329, "total-output-tokens": 5617, "length": "2e12", "weborganizer": {"__label__adult": 0.0005626678466796875, "__label__art_design": 0.0005483627319335938, "__label__crime_law": 0.0005879402160644531, "__label__education_jobs": 0.005161285400390625, "__label__entertainment": 8.594989776611328e-05, "__label__fashion_beauty": 0.00024306774139404297, "__label__finance_business": 0.008941650390625, "__label__food_dining": 0.0006265640258789062, "__label__games": 0.0007228851318359375, "__label__hardware": 0.0005965232849121094, "__label__health": 0.0006289482116699219, "__label__history": 0.0002620220184326172, "__label__home_hobbies": 0.00011688470840454102, "__label__industrial": 0.0005135536193847656, "__label__literature": 0.0005545616149902344, "__label__politics": 0.0004839897155761719, "__label__religion": 0.0003681182861328125, "__label__science_tech": 0.0035762786865234375, "__label__social_life": 0.00015747547149658203, "__label__software": 0.007236480712890625, "__label__software_dev": 0.966796875, "__label__sports_fitness": 0.0003604888916015625, "__label__transportation": 0.000690460205078125, "__label__travel": 0.0002689361572265625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25358, 0.01746]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25358, 0.07417]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25358, 0.93796]], "google_gemma-3-12b-it_contains_pii": [[0, 3380, false], [3380, 7126, null], [7126, 10875, null], [10875, 14773, null], [14773, 18495, null], [18495, 20766, null], [20766, 23238, null], [23238, 25358, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 3380, true], [3380, 7126, null], [7126, 10875, null], [10875, 14773, null], [14773, 18495, null], [18495, 20766, null], [20766, 23238, null], [23238, 25358, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25358, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25358, null]], "pdf_page_numbers": [[0, 3380, 1], [3380, 7126, 2], [7126, 10875, 3], [10875, 14773, 4], [14773, 18495, 5], [18495, 20766, 6], [20766, 23238, 7], [23238, 25358, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25358, 0.064]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
a87c60bad5242bb30c9f4566129e7093504d58ee
INFORMATION DISTRIBUTION ASPECTS OF DESIGN METHODOLOGY

by

D. L. Parnas
Computer Science Department
Carnegie-Mellon University
Pittsburgh, Pennsylvania

February, 1971

This work was supported by the Advanced Research Projects Agency of the Office of the Secretary of Defense (F44610-70-C-0107) and is monitored by the Air Force Office of Scientific Research. This document has been approved for public release and sale; its distribution is unlimited.

ACKNOWLEDGEMENT

I am grateful to A. Perlis, H. Wactlar, and G. Bell for their suggestions after an early reading of this paper. I am deeply grateful to NV Philips-Electrologica, Apeldoorn, the Netherlands, for having provided me with the opportunity to study the problems of systems development in practice and by means of a direct involvement rather than a remote study. Although the problems discussed in this paper are apparently shared by everyone in the industry, the steps taken at Philips to improve the situation have provided me with valuable insight. Thanks are due to countless personnel, both at Philips and at several other institutions, who have been patient during my probing.

ABSTRACT

The role of documentation in the design and implementation of complex systems is explored, resulting in suggestions in sharp contrast with current practice. The concept of system structure is studied by examining the meaning of the phrase "connections between modules". It is shown that several system design goals (each suggesting a partial time ordering of the decisions) may be inconsistent. Some properties of programmers are discussed. System documentation, which makes all information accessible to anyone working on the project, is discussed. The thesis that such information "broadcasting" is harmful, that it is helpful if most system information is hidden from most programmers, is supported by use of the above mentioned considerations as well as by examples. An information hiding technique of documentation is exhibited in the appendix.

IFIP CLASSIFICATION: 3
Language of Oral Presentation: English
Statement of Originality: In the opinion of the author the paper contains a number of conclusions which have not been discussed or published elsewhere. No paper similar in scope to this paper is being presented for publication elsewhere.

INTRODUCTION

Papers on design methodology assume (1) that the methods used in system design strongly affect the quality of the final product; and (2) that by selecting an appropriate methodology we can avoid many of the problems previously encountered in constructing large systems. Under the heading "Design Methodology" a number of separate topics can be distinguished:

1. The order in which design decisions are made [1, 2, 3, 6]
2. The characteristics of the final product (e.g., what constitutes "good structure" for a system) [4, 5, 6, 7]
3. Methods of detecting errors in design decisions shortly after they are made [1, 2, 3, 5, 8, 9]
4. Specification techniques [12, 13]
5. Tools for system designers [1, 2, 3, 10, 11]

This paper emphasizes another topic named "information distribution". Design and development are a series of decisions. Each decision results in information about the system which can be used in making later decisions. We want eventually to discuss the distribution of that information among those working on the system and to deal with its organization in documentation. To prepare for this discussion we deal first with (1) the concept of system structure, (2) constraints on the order of decisions, and (3) some observed characteristics of good programmers.

STRUCTURE DEFINED

The word "structure" is used to refer to a partial description of a system. A structure description shows the system divided into a set of modules, gives some characteristics of each module, and specifies some connections between the modules. Any given system admits many such descriptions. Since structure descriptions are not unique, our usage of "module" does not allow a precise definition parallel to that of "subroutine" in software or "card" in hardware. The definitions of those words delineate a class of objects, but not the definition of "module." Nevertheless, "module" is useful in the same manner that "unit" is in military or economic discussions. We shall continue to use "module" without a precise definition. It refers to portions of a system indicated in a description of that system. Its precise definition is not only system dependent but also dependent upon the particular description under discussion.

The term "connection" is usually accepted more readily. Many assume that the "connections" are control transfer points, passed parameters, and shared data for software, and wires or other physical connections for hardware. Such a definition of "connection" is a highly dangerous oversimplification which results in misleading structure descriptions. The connections between modules are the assumptions which the modules make about each other.
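This broader notion of "connection" can be illustrated with a small modern sketch (the language and all names here are ours, not the paper's). The client's assumption that the supplier's result is sorted is a connection between the two modules, even though it appears nowhere in any calling sequence or parameter list:

```python
import bisect

# One module: returns the user names known to a (toy) system.
def directory():
    # Implementation detail: names happen to come back sorted today.
    return sorted(["carol", "alice", "bob"])

# A client module whose correctness silently assumes sorted output:
# binary search via bisect is valid only on an ordered list.
def user_exists(name):
    names = directory()
    i = bisect.bisect_left(names, name)
    return i < len(names) and names[i] == name

# The assumption "directory() is sorted" is a connection between the
# modules; changing directory() to return unsorted names would break
# user_exists() without any change to the visible interface.
assert user_exists("bob")
assert not user_exists("dave")
```

A structure description listing only the call from `user_exists` to `directory` would miss exactly the assumption on which the client's correctness depends.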
In most systems we find that these connections are much more extensive than the calling sequences and control block formats usually shown in system structure descriptions. The meaning of the above remark can be exhibited by considering two situations in which the structure of a system is terribly important: (1) making of changes in a system, and (2) proving system correctness. (I feel no need to argue the necessity of proving programs correct, or to support the necessity of making changes. I wish to use those hypothetical situations to exhibit the meaning of "connection.")

Correctness proofs can become so complex that their own correctness is in question [e.g., 14, 15]. We would like to simplify the proofs by using the structure of the program, proving the correctness of each module separately. For each module we will have a set of hypotheses to prove and a description of the module. In our hypotheses we can distinguish the things we expect a module to accomplish from the things which we assume other modules will guarantee. Those statements are the connections between the module being examined and the rest of the system. The proof process will be facilitated only if the amount of information in the hypotheses is significantly less than the amount of information in the full description of the connected modules. In the extreme case, where one module's correctness is predicated upon the complete description of another module, the proof of the first module's correctness will be as complex as if the two were considered a single module.

We now consider making a change in the completed system. We ask, "What changes can be made to one module without involving change to other modules?" We may make only those changes which do not violate the assumptions made by other modules about the module being changed. In other words, a single module may be changed only as long as the "connections" still "fit."
Here, too, we have a strong argument for making the connections contain as little information as possible.

FACTORS INFLUENCING THE ORDER OF DECISION MAKING

Progress in a design is marked by decisions which eliminate some possibilities for system structure. The fact that those possibilities have been eliminated can be part of the rationale for subsequent decisions. If the information is used, the order of decision making (in time) affects the structure of the resulting product. Examples of interest can be found in [4]. We can identify three considerations, each suggesting a partial ordering on the decisions.

1. Obtaining 'good' external characteristics. All systems have characteristics which are not pleasing to the users. Usually they were not determined by explicit deliberations; they were the unnoticed implications of decisions about other aspects of system structure. To consistently avoid such errors we can make the decisions about external characteristics first. We use the resulting information to make the later decisions. The internal decisions would be either derived from or checked against the complete specifications of the external factors. This is the basis of the "top down" or "outside in" approach discussed in [1, 2, 3, 4].

2. Reducing the time interval between initiation and completion of the project. Competitive pressures may require the use of large groups to produce a system in a sharply limited period of time. Additional men speed up a project significantly only after the project has been divided into subprojects in such a way that separate groups can work with little interaction (i.e., spending significantly less time in inter-group decisions than in intra-group decisions). This consideration affects the order of decisions in that it encourages very early splitting of the system into modules which are then designed completely independently.
The desire to make the split early and "get on with it" encourages a splitting along familiar lines and in agreement with existing personnel classifications. Time pressures encourage groups to make the split before the externals are defined. Consequently we find some adverse effect on the useability of the product. Haste also makes poor internal structure more likely.

3. Obtaining an easily changed system. Systems are changed after construction either because their original characteristics proved insufficient or because another application was found. We have already noted that the difficulties in changing systems are related to the assumptions which each of the modules makes about its environment. Since each decision is usually made on the assumption that the previous decisions will hold, the most difficult decisions to change are usually the earliest. The last piece of code inserted may be changed easily, but a piece of code inserted several months earlier may have "wormed" itself into the program and become difficult to extract.

These considerations suggest that the early decisions should be those which are the least likely to change; i.e., those based on "universal" truths or reasoning which takes into account little about a particular environment. The remaining facts must be used eventually, but the possibility of change suggests using the most general information first. Since such external characteristics as job control language and file commands are very frequently changed, the "outside-in" approach may make the system harder to change. Further, those decisions which should be made early on this basis are not usually those which allow the project to be quickly subdivided into independent assignments. As a rule, decisions which do not use all the available information about a system (i.e., the general decisions) take more time.

In summary, each of the three considerations suggests a partial ordering of the decisions.
Those orderings are usually inconsistent in that it will be impossible to satisfy them simultaneously.

DOCUMENTATION SYSTEMS

For any complex system there must be documentation about the system for use by the human beings who must complete it. Programs and wiring diagrams do completely define the algorithm which they will execute, but this form of documentation is not usually appropriate for people. Consequently there are always papers which attempt to answer the questions most likely to be asked. There is usually no attempt to make the documentation complete (i.e., equivalent to the code) for software, thus certain questions must be answered by reference to the code.

When a system is strongly connected, this documentation must be read by persons not closely involved with the module being documented. Because each working group develops a unique module organization and a corresponding set of concepts and terms, the documents which they write are difficult for outsiders to read. The natural response is to require all documentation to be written with a standard organization and vocabulary [16]. A standard is made company-wide to allow anyone in the organization to find some piece of information without needing to learn the concepts and vocabulary peculiar to one system or module. Such approaches raise several questions:

1. Is it really desirable to have all information equally accessible to all in the company (or project)?
2. What is the effect of documentation standards on the resulting system?
3. What is the result of a non-standard system being described using a standard document organization?

Documentation standards tend to force system structure into a standard mold. A standard for document organization and vocabulary makes some assumptions about the structure of the system to be described. If those assumptions are violated, the document organization fits poorly and the vocabulary must be stretched or misused. Consider the following example.
In most operating systems there exists a module which handles all job control statements from the time they are read in until the job is completed. As a result, most documentation systems can insist that there be a section describing such a module. Now consider an organization such as that of the T.H.E. system in which there is no such module because most of the processing is handled in modules which are also used for other purposes. If we adhere to the documentation standard we will duplicate information and describe one module in the documentation of another. If there are to be standard documentation organizations, they must be designed to make the minimum number of assumptions about the system being documented. If so, they will be of little help in making the document readable to people who do not understand the structure of the system.

ON SOME PROPERTIES OF GOOD PROGRAMMERS

The following observation is essential to the remainder of this paper: "A good programmer makes use of the usable information given him!" The good programmer will try to use his machine well. He is actually programming for a "virtual machine" defined by the hardware and his knowledge of the other software on the machine. His training and his nature lead him to make full use of that extended machine.

Sometimes the uses are obvious. The programmer makes use of a subroutine from some other module, or a table of constants already present for some other piece of code. Sometimes these uses are so marginal as to be laughable, e.g., the use of a 3-instruction subroutine or the borrowing of a single constant. In the terms of our previous discussions, such extreme cases increase the connectivity of the structure without appreciably improving its performance. Sometimes the uses are less obvious. For example, a programmer may make use of his knowledge that a list is searched in a certain order to eliminate a check or an extra queue.
In the area of application programming we may find a programmer who introduces an erroneous value for n knowing that because of an error in the sine routine the erroneous value will cause his program to converge more rapidly.

Such uses of information have been so costly that we observe a strange reaction. The industry has started to encourage bad programming. Derogatory names such as "kludger", "hacker" and "bit twiddler" are used for the sort of fellow who writes terribly clever programs which cause trouble later on. They are subtly but effectively discouraged by being assigned to work on small independent projects such as application routines (the Siberia of the software world) or hardware diagnostic routines (the coal mines). In both situations the programmer has little opportunity to make use of information about other modules.

Those that remain (the non-bit-twiddlers) are usually poor programmers. While a few refrain from using information because they know it will cause trouble, most refrain because they are not clever enough to notice that the information can be used. Such people also miss opportunities to use facts which should be used. Poor programs result. Since even a poor programmer sometimes has a "flash of brilliance" (e.g., noticing that two bytes in a control block can be simultaneously set with one instruction because they are adjacent and in the same word) we still have no control of the structure.

We have found that a programmer can disastrously increase the connectivity of the system structure by using information he possesses about other modules. We wish to have the structure of the system determined by the designers explicitly before programming begins, rather than inadvertently by a programmer's use of information. Consequently, we discourage the bit twiddlers and pay a price in poor programming without obtaining complete control of the structure.
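A tiny hypothetical sketch (in Python, with invented names) of the sort of "clever" use of information described above: one routine exploits its knowledge that another module happens to keep its internal list sorted, a property the interface never promises.

```python
# Hypothetical sketch of a program exploiting another module's internals.

# Module A maintains a worklist. Its documented interface is add(item,
# deadline) and items(); the fact that the list is kept sorted by deadline
# is an internal implementation choice, not a promise.
_worklist = []

def add(item, deadline):
    _worklist.append((deadline, item))
    _worklist.sort()  # internal detail

def items():
    return [item for _, item in _worklist]

# A "bit twiddler" in module B skips his own search by assuming items()[0]
# is the earliest-deadline entry. The program works today, but if module A
# is reimplemented with an unsorted list plus a lookup routine, module B
# silently returns wrong answers: the structure has acquired a connection
# the designers never specified.
def most_urgent():
    return items()[0]
```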
THE USE OF DESIGNER CONTROLLED INFORMATION DISTRIBUTION

We can avoid many of the problems discussed here by rejecting the notion that design information should be accessible to everyone. Instead we should allow the designers, those who specify the structure, to control the distribution of design information as it is developed.

Our concerns about the inconsistent decision orderings were based on the assumption that information would be used shortly after the corresponding decision. The restrictions placed by the three considerations are considerably relaxed if we have the possibility of hiding some decisions from each group. For example, we have noted a conflict between the desire to produce an external specification early and the desire to produce a system for which the external interface is easily changed. We can avoid that conflict by designing the external interface, using it as a check on the remaining work, but hiding the details that we think likely to change from those who should not use them.

If we want the structure to be determined by the designers, they must be able to control it by controlling the distribution of the information. We should not expect a programmer to decide not to use a piece of information, rather he should not possess information that he should not use. The decision is part of the design, not the programming.

Reflection will show that such a policy expects a great deal from the designers. We currently release all the information about a module; to do so is considerably easier than (1) deciding which information should be released and (2) finding a way of expressing precisely the information needed by other modules. Preliminary experience has shown that making appropriate definitions is quite difficult. Acquiring skill in making those definitions is vital because we will be able to successfully build systems while restricting programmers' information only if we learn to provide them with precisely the information they need.
EXAMPLES

I believe it worthwhile to give some concrete examples of information which is now widely disseminated within a project and should instead be sharply restricted.

1. Control Block Formats

Every system contains small amounts of information in pre-formatted areas of storage called control blocks. These are used for passing information between modules and are considered to be the interfaces. For this reason formats are usually specified early in the project and distributed to all who are working on the project. The formats are changed many times during the project. Few programmers on any project need to know such formats. They need a means for referring to a specific item, but not more. They need not even know which information is grouped into one control block.

2. Memory Maps

It is common to begin a description of an operating system by (1) describing the main modules and (2) showing how the core storage is divided among those main modules. Soon there is a complete map of the memory showing how that resource is allocated. Reasonably sophisticated designers show the borders of allocated areas as symbolic rather than absolute addresses, but the order of memory assignment is specified. Only a small portion of this information derives from hardware decisions. There is no legitimate way to use the map information. It would be frightening if someone developed code that would not work if the map were changed. Such maps are almost invariably changed because something which was fixed becomes variable or vice versa. The information is only needed at assembly time. We could survive if it were input to the assembler and not known by anyone else. When there is a virtual memory or other mechanism for swapping built into the system, the distinction between resident and non-resident items should not be broadcast. If there are several kinds of core storage, the allocation of modules and data among those storage types should not be known to those who are writing the modules.
If partial preloading of certain programs is envisaged, the decision as to which modules will be preloaded should be hidden. Each of these decisions is worthy of attention, but few should know the result.

3. Calling Sequences

Calling sequences are the secret hobby of every system programmer. We begin to look at new hardware by inventing a calling sequence. Throughout the design and implementation, the calling sequence is simplified, generalized, made more efficient, etc. Each time we face a decision. Either modules all over the system are altered or the new sequence is added to a growing set of calling sequences. In the latter case generating a call to a routine requires determining which sequence it uses. Most routines can be written, and written well, without knowledge of the calling sequence if the programmer is provided with a programming tool which allows him to postpone decisions about register allocation for parameters, return addresses, and results. Such features can be provided in an assembler with macro facilities.

4. JCL Formats

One characteristic which should be easy to change is the syntax of the so called Job Control Language, the means by which the user describes his job's gross characteristics to the operating system. The design of a JCL implies assumptions about the way that the system will be used which may later prove to be false or too restrictive. There exist systems in which JCL format information has been used so much that reasonable changes are beyond the scope of the usual organization. Often changes require user provision of duplicate information and/or the maintenance of duplicate tables. (See, for example, [17].) Most of the people working on an operating system need very little knowledge about the JCL. The only people who need to know the format are those who are writing the syntax analyzer for the language.

5.
Location of I/O Device Addresses

It is widely recognized that device addresses should not be built into code but stored in tables associated with each job. However, it is usual that all programmers are given knowledge sufficient to allow them to find and use the table. For example, many modules will send messages to a user at his teletype. If later one wishes to intercept those messages and reinterpret or suppress them for a special class of users, the job is horrendous. Most programs did not need that information. Access to a module which would send messages for them is sufficient.

6. Character Codes

Some hardware information should not be released. I have seen one compiler in which the association made by the hardware between card characters and integers was so widely used that a second version of the compiler (for a new machine) contained a module which translated from the new character code to the old one and back again. The efficiency gained by using the character code information (e.g., by using arithmetic tests to determine if a given character is a delimiter) is often not worth the price paid. Where it is worthwhile, the knowledge can be closely restricted if the designers pay attention to the problem. Certainly the decision to use or not to use the information should not be left up to an individual programmer.

CONCLUSION

The inescapable conclusion is that manufacturers who wish to produce software in which the structure is under the control of the designers must develop a documentation system which enables designer control of the distribution of information. Further, they must find and/or train designers who are able to define or specify modules in a way which provides exactly the information that they want the programmers to use. Until we can completely staff a project with men who have the intellectual capacity and training to make that decision for themselves, some must make the decision for others.
An assembler which allows the insertion of some hidden information at "assembly time" will aid in maintaining efficiency.

I consider the internal restriction of information within development groups to be of far more importance than its restriction from users or competitors. Much of the information in a system document would only harm a competitor if he had it. (He might use it!)

It is worth repeating that the decision about which information to restrict is a design decision, not a management one. The management responsibility ends with providing the appropriate information distribution mechanism. The use of that mechanism remains a design function because it determines the structure of the product.

APPENDIX

A MODULE DOCUMENTED ON THE BASIS OF "NEED TO KNOW"

INTRODUCTION

Assume the system under construction to be a translator for string manipulation algorithms based upon Markov Algorithms. Such a package must contain a representation of the variable length string known as the register which constitutes the only memory in a hypothetical Markov Algorithm machine. Assume further that the decision has been made that the knowledge of this representation be confined to a single module in spite of the fact that almost all actions done by the system will involve changes in the register. The purpose of this decision is to make the representation easy to change.

The statements which follow provide all the documentation of such a module which should be available to its users. They are intended to provide all the information necessary to use the module, i.e., to manipulate the register, yet no information about the representation of the register in the machine. The method used is to define five procedures, to specify their initial values if they are functions, and to specify the type of their parameters where they have parameters. Further, a statement is made as to the effect of a call on the procedures on the values of the other functions in the package.
This is done by indicating the new value of any changed functions as a function of their old values and the values of parameters to the called procedure. A value before the change is shown enclosed in single quotes (e.g., 'LENGTH'). Values after the change are shown unquoted. The actions which take place in the event of errors are specified to be procedure calls. It is assumed that should such a call occur, (1) no values will have been changed, and (2) upon a return from the procedure called, the attempt to perform the routine specified will be repeated completely.

DEFINITIONS

INTEGER PROCEDURE: LENGTH
  possible values: an integer 0 ≤ LENGTH ≤ 1000
  parameters: none
  effect: no effect on values of other functions
  initial value: 0

INTEGER PROCEDURE: GETCHA(I)
  possible values: an integer 0 ≤ GETCHA ≤ 255
  parameters: I must be an integer
  effect: no changes to other functions in the module;
    if I ≤ 0 ∨ I > LENGTH then a procedure call to a user written routine RGERR is performed
    (the program cannot be assembled without such a routine)
  initial value: undefined

PROCEDURE: INSAFT(I, J)
  possible values: none
  parameters: I must be an integer; J must be an integer
  effect:
    if I < 0 ∨ I > 'LENGTH' ∨ J < 0 ∨ J > 255 then a subroutine call to a user written routine INSAER is performed (routine required)
    else LENGTH = 'LENGTH' + 1;
      if LENGTH ≥ 1000 a subroutine call to the user written routine LENER is performed.
      GETCHA(K) = if K ≤ I, 'GETCHA(K)'
                  if K = I+1, J
                  if K > I+1, 'GETCHA(K-1)'

PROCEDURE: DELETE(I, J)
  possible values: none
  parameters: I and J must be integers
  effect:
    if I ≤ 0 ∨ J < 1 ∨ I+J > 'LENGTH' + 1 then a procedure call to a user written routine DELERR is performed
    else LENGTH = 'LENGTH' − J
      GETCHA(K) = if K < I then 'GETCHA(K)'
                  if K ≥ I then 'GETCHA(K+J)'

PROCEDURE: ALTER(I, J)
  possible values: none
  parameters: I and J must be integers
  effect:
    if I ≤ 0 ∨ I > 'LENGTH' ∨ J < 0 ∨ J > 255 then a subroutine call to a user written routine ALTERERR is performed.
      GETCHA(K) = if K ≠ I then 'GETCHA(K)'
                  if K = I then J

DISCUSSION

It is possible to verify the completeness of these definitions by showing that a value is defined for each function for every possible sequence of calls. The possibility of infinite looping through repeated calls of error routines exists, but this would be an error in usage not in definition. One can demonstrate that a minimum of information is given out by the definitions by showing first its sufficiency for use (i.e., completeness) and then by showing that the widest conceivable variety of implementations can fit the definitions.

The usual form of documentation would be (1) much more wordy, and (2) more revealing of internal aspects. In fact, because natural language is used, the completeness can only be assured by exhibiting the internal structure. The mnemonic names used here carry no essential information. They could be replaced by 'x1', 'x2', etc. at no theoretical cost, but at the practical cost of being obscure. The definitions are obscure now to a reader unfamiliar with the register of a Markov machine. This can be alleviated by a supplement suggesting ways to use the functions (e.g., a teaching supplement) having no official status.
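As a check on the definitions, here is one hypothetical implementation that satisfies them (a Python sketch; the user-written error routines RGERR, INSAER, LENER, DELERR and ALTERERR are simplified to exceptions, the LENER bound is checked before insertion, and the "repeat the attempt on return" convention is omitted). Any other representation meeting the stated effects would serve equally well, which is exactly the point of documenting only the externally visible behavior.

```python
# One possible implementation of the register module. The internal
# representation (a Python list) is deliberately invisible to callers,
# who see only the five defined procedures. Positions are 1-indexed and
# character values lie in 0..255, as in the definitions.

_reg = []  # hidden representation of the Markov machine's register

def LENGTH():
    return len(_reg)

def GETCHA(i):
    if i <= 0 or i > LENGTH():
        raise IndexError("RGERR")  # stands in for the user-written routine
    return _reg[i - 1]

def INSAFT(i, j):
    # Insert character j after position i (i == 0 inserts at the front).
    if i < 0 or i > LENGTH() or j < 0 or j > 255:
        raise ValueError("INSAER")
    if LENGTH() + 1 >= 1000:
        raise OverflowError("LENER")  # simplification: checked pre-insert
    _reg.insert(i, j)  # positions <= i keep their values; position i+1 = j

def DELETE(i, j):
    # Delete j characters starting at position i.
    if i <= 0 or j < 1 or i + j > LENGTH() + 1:
        raise ValueError("DELERR")
    del _reg[i - 1:i - 1 + j]

def ALTER(i, j):
    if i <= 0 or i > LENGTH() or j < 0 or j > 255:
        raise ValueError("ALTERERR")
    _reg[i - 1] = j
```

Because no caller can observe more than the defined effects, the list could be replaced by, say, a fixed-size array with an explicit length counter without any change to client code.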
Marcin SOWA
Silesian University of Technology, Gliwice

SIMPLE C# CLASSES FOR FAST MULTIVARIATE POLYNOMIAL SYMBOLIC COMPUTATION – PART I: IMPLEMENTATION DETAILS

Summary. The paper deals with a lightweight C# implementation that allows symbolic computation to be performed on multivariate polynomials in expanded form. The classes called SPoly and SMono (which represent the expressions) are explained along with their limitations. Furthermore, a few remarks are made on the usefulness of methods working as += and -= operators. A further analysis of the implementation is given in parts II and III of the paper.

Keywords: symbolic computation, sparse multivariate polynomials, computation time, C# implementation

1. INTRODUCTION

In the practice of computational problems arising from the modeling tasks of engineering branches, it is often the case that it is convenient (and frequently more efficient in the long run) to obtain dependencies in symbolic form. The ability of a computer program to perform mathematical operations on formulae in a symbolic representation is widely known as symbolic computation. There are many programs available that allow these types of computations – they are referred to as CAS (Computer Algebra Systems). The best known are Maple and Mathematica. These programs are able to perform symbolic computations on a wide variety of expression types and provide various options concerning their presentation (e.g. in expanded or factored forms). Specialists most commonly reach for the well-known CAS, treating them as standalone programs to solve their problems. As for programmers' needs (when symbolic computation is required), CAS are also used – this usually involves a link being established between the main program and the Computer Algebra System.
It is often the case, however, that when dealing with engineering problems (as in the author's experience in computational electromagnetics [1]), symbolic computation is only required for some specific expression classes and operations. In these cases a simple implementation of the ability to perform symbolic computations is sufficient and can provide a few benefits. The presented research concerns such an implementation in C# for multivariate polynomials in expanded form. The author regularly uses the library in computational electromagnetics and in simpler tasks such as interpolation or studies in theoretical electrical engineering. It will be made available for free to anyone who needs the proposed limited (yet useful) functionality for symbolic computations on multivariate polynomials. More comprehensive CAS may also be built with the use of the proposed classes in the future.

Because the implementation is written in a programming language, many other benefits are gained, some of which are:
- no need to install and execute additional linking processes between a computer algebra system and a lower level programming language (as discussed, among others, in [2]), especially if only multivariate polynomials with positive integer exponents are concerned (in this case only a limited CAS functionality is required),
- a direct execution of symbolic computations and convenient access to the results,
- no need for additional objects that would store the results and allow them to be transferred through auxiliary methods,
- pre-compiled code generally outperforms code that is interpreted at run-time (which is the case for all CAS) in terms of computation speed (however, there are efforts to at least make basic, built-in operations in CAS much faster, as is the case with Maple's implementations in [3]).
The proposed basic implementation could also provide valuable insight into what could be improved in the numerical efficiency of multivariate polynomial computations in computer algebra systems. The paper also has general educational value for anyone who wants to write their own symbolic computation library. In this paper, numerical efficiency is understood by the author as the ability to perform symbolic computations in a reasonable period of time (judging by the problem complexity), such that the obtained expressions also do not require excessive amounts of memory for their storage. The sections of the paper are divided as follows:
- section 2 gives a brief explanation of the classes' functionality,
- in section 3 the essential classes, representing multivariate polynomials, are presented,
- section 4 contains details on the implemented operators of addition and subtraction,
- section 5 deals with analytical differentiation and definite integration of the expanded symbolic expressions,
- section 6 provides a summary of the work completed so far.
In part II of the paper an analysis of numerical efficiency is presented (for all the operations implemented in this part). Multivariate polynomial multiplication and exponentiation are discussed separately, in part III of the paper (along with an analysis of computation times).

2. SUPPORTED SYMBOLIC EXPRESSIONS AND FUNCTIONALITY

It is assumed that the symbolic expressions must satisfy the expanded formula for multivariate polynomials:

\[ \zeta(s) = \sum_{i=1}^{N} T_i(s) = \sum_{i=1}^{N} a_i \prod_{j=1}^{n} s_j^{k_{i,j}}, \quad a_i \in \mathbb{R}, \; k_{i,j} \in \mathbb{N}_0, \tag{1} \]

where \( s \) represents a vector containing all symbolic variables \( s_1, s_2, s_3, \ldots, s_n \), while \( T_i(s) \) represents the subsequent terms of the symbolic expression.
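To make formula (1) concrete, here is a hypothetical Python sketch of the same data model (not the paper's C# code): each term \( T_i(s) \) is stored as a pair of an exponent tuple \( (k_{i,1}, \ldots, k_{i,n}) \) and a real coefficient \( a_i \), and the whole polynomial can then be evaluated numerically.

```python
# Hypothetical Python sketch (not the paper's C# code): a polynomial of
# form (1) stored as a list of (exponents, coefficient) terms, where the
# exponent tuple holds k_{i,1}, ..., k_{i,n} for one term T_i(s).

def evaluate(poly, s):
    """Numerically evaluate zeta(s) = sum_i a_i * prod_j s_j^k_{i,j}."""
    total = 0.0
    for exps, a in poly:
        term = a
        for sj, kj in zip(s, exps):
            term *= sj ** kj
        total += term
    return total

# zeta = 2*s1^2*s2 + 3*s2^3  (n = 2 variables)
zeta = [((2, 1), 2.0), ((0, 3), 3.0)]
print(evaluate(zeta, (2.0, 1.0)))  # 2*4*1 + 3*1 = 11.0
```

The sparse list-of-terms layout mirrors the expanded form (1): only terms with nonzero coefficients are stored, which is what the SPoly/SMono classes described below also rely on.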
The product \( \prod_{j=1}^{n} s_j^{k_{i,j}} \) is later on referred to as the symbolic multiplier, while \( a_i \) is called the real-valued multiplier. At the current stage, efforts have been made so that the functionality of the implementation covers:
a) convenient generation of symbolic expressions by giving the multipliers \( a_i \) and exponents \( k_{i,j} \); alternatively a string, which is interpreted and converted into a set of objects,
b) addition, subtraction and multiplication operations concerning expressions of the form (1),
c) exponentiation of a polynomial by a positive integer,
d) the possibility of obtaining the derivative \( \frac{\partial \zeta}{\partial s_l} \) with respect to a selected symbolic variable \( s_l \),
e) the ability to analytically evaluate the definite integral:

\[ I_{\text{def}} = \int_{c(s)}^{d(s)} \zeta(s) \, ds_l, \tag{2} \]

for a selected interval from \( s_l = c(s) \) to \( s_l = d(s) \) (hence symbolic variable substitution must also be considered).

3. MAIN CLASSES

It was in the author's interest to make the implementation simple, while also keeping it potentially efficient. Therefore it requires only the standard C# libraries, without additional composite packages and classes. Polynomials of the form (1) are expressed by objects that are instances of the SPoly class. This class provides a root that contains methods allowing for the operations pointed out in the previous section; it holds references to only some of the monomials. The details of the subsequent monomials \( T_i(s) \) are contained in a unidirectional linked list of objects that are instances of a separate class called SMono. The dependencies between an SPoly object and the subordinate SMono object list are displayed in Figure 1.

![Fig.1.
Dependencies between the proposed symbolic math objects](image)

The start and slast references of an SPoly object represent the first and last objects of the linked list, respectively. start and at_start make it convenient to begin a search for common symbolic multipliers, while slast helps in monomial appending. The searcher reference is used to mark the monomial that was compared last; this is useful for polynomial addition and subtraction. Each SMono object contains information about a separate monomial, where the double variable a represents the real multiplier \( a_i \), while k is an array containing the exponents of each symbolic variable (i.e. the integers \( k_{i,1}, k_{i,2}, \ldots, k_{i,n} \) of the monomial with index \( i \)). In order to reduce the number of monomial comparisons when addition or subtraction is performed, polynomial ordering needs to be applied. Out of the many possible orderings used for multivariate polynomials [4], a type of lexicographical ordering is used in the implementation (where the exponent of \( s_n \) is the most significant and that of \( s_1 \) is the least significant). Therefore, when polynomials are added, the searcher reference makes it possible to skip terms that are surely placed earlier than the one being added (as they precede a previously added monomial). The simple monomial search method, which also indicates what should be done when the search is complete, is presented in Figure 2.

* whether the symbolic multiplier is the same or different, the rest of the monomials from the expression the added term belongs to do not need to be compared with what will be start after this comparison
** the compare.k function performs a comparison of the array values, starting from the last one; if the arrays are equal then 0 is returned; once an element of k is greater than the respective entry of start.k, -1 is returned; otherwise the function returns 1.

Fig.2.
The algorithm for the search of a common symbolic multiplier

4. ADDITION AND SUBTRACTION OF SYMBOLIC EXPRESSIONS IN C#

4.1. Implemented addition/subtraction operators

Subject to the assumptions given in section 2, the addition, subtraction and multiplication operators have been implemented. This section concerns the first two. For the implementation to be efficient, methods have been built so that one can perform += and -= operations more efficiently (allowing the polynomial to which another one is being added to be overwritten); these are referred to in the paper as (op)= operations. C# does not, however, allow overloading of the (op)= operators themselves, so instead methods called pluseq and mineq have been written. An idea is also introduced so that one can use alternative (even more efficient) methods if it is admissible to overwrite both objects that take part in the operation. These are later on referred to as the overtaking operators. The operators and methods that have been implemented are presented in Table 1.

<table> <thead> <tr> <th></th> <th>Regular</th> <th>Fast (op)=</th> <th>Overtaking</th> </tr> </thead> <tbody> <tr> <td><strong>Addition</strong></td> <td>c=a+b<br>a+=b</td> <td>a.pluseq(b)</td> <td>plus_combine(a,b)</td> </tr> <tr> <td><strong>Subtraction</strong></td> <td>c=a−b<br>a-=b</td> <td>a.mineq(b)</td> <td>minus_combine(a,b)</td> </tr> </tbody> </table>

Comments on the addition and subtraction operators and methods:
- the regular (op)= operators build a new object for a,
- the fast methods overwrite a; the results are the same as for the regular (op)= operators,
- the overtaking methods use a and b to build a+b (or a−b) without acquiring new memory; a is overwritten and b is useless after these operations (as its SMono objects are used in the result); the methods return the result, but it is also stored in a.

4.2.
Comparison of addition/subtraction operation performance

A comparison is performed in terms of computation speed for the symbolic math classes when using:
- the regular operators += and -=,
- the fast (op)= methods pluseq and mineq,
- the overtaking operations plus_combine and minus_combine.

The first procedure being tested is one where a continuous addition/subtraction of terms (with repeating symbolic multipliers) is performed. This procedure follows the pseudo-code:

    S = 0; i = 0; m = round(n/5);
    for j = 1 to n:
        w = (j+1) * a^i * b^(m-i) * c^i;
        S += w;   (or  S -= w;)
        i++;
        if i > m:
            i = 0;
        end
    end

The results (in terms of computation time) for selected addition and subtraction operators/methods are presented in Figure 3. The results for the regular (op)= computations are not displayed, since their duration was hundreds to thousands of times longer (such continuous addition/subtraction of terms requires the object to be rebuilt each time).

![Fig.3. Comparison of the computation times for various: a) addition methods, b) subtraction methods](image)

A different task for the operators is the addition/subtraction of polynomials consisting of more than one term. The results from the trials of the previous test are placed as subsequent entries of an array S, and then new trials are performed, where the neighboring entries are added together, e.g. for the second trial the operation S[1] (op)= S[2] is performed. The results are presented in Figure 4.
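The looped benchmark above can be mimicked with a hypothetical Python sketch. The list-based representation, the ordering (exponent of the last variable most significant) and the pluseq name mirror the paper's description, but none of this is the actual C# code:

```python
# Hypothetical Python sketch of the ordered-term representation: a
# polynomial is a sorted list of (exponents, coeff) pairs, ordered by the
# reversed exponent tuple so that s_n is most significant (Section 3).

def key(exps):
    return tuple(reversed(exps))  # s_n most significant, s_1 least

def pluseq(p, q):
    """Merge the sorted term list q into p in place (pluseq analogue)."""
    i = 0
    for exps, a in q:
        # skip terms of p that surely precede the added monomial
        while i < len(p) and key(p[i][0]) < key(exps):
            i += 1
        if i < len(p) and p[i][0] == exps:
            c = p[i][1] + a          # common symbolic multiplier: merge
            if c == 0:
                p.pop(i)
            else:
                p[i] = (exps, c)
        else:
            p.insert(i, (exps, a))   # new symbolic multiplier
    return p

# Benchmark-style loop from Section 4.2: terms w = (j+1) * a^i * b^(m-i) * c^i
n, m = 20, 4
S = []
i = 0
for j in range(1, n + 1):
    pluseq(S, [((i, m - i, i), float(j + 1))])
    i += 1
    if i > m:
        i = 0

print(len(S))  # only m+1 distinct symbolic multipliers remain
```

Because the generated terms repeat their symbolic multipliers every m+1 iterations, the accumulated polynomial never grows beyond m+1 terms, which is exactly why an overwriting pluseq beats rebuilding the whole object on every +=.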
Because the results for both the addition and subtraction operations are very close, the observations of the analysis are the same:
I) the pluseq and mineq methods have been thousands of times faster for frequent (looped) additions and about twice as fast as the regular (op)= operators for regular (single-expression) additions and subtractions,
II) the expression emerging as a result of the operation is the same in every case,
III) this resulting expression requires the same amount of memory in every operation; however, for the overtaking operators the terms of the partaking objects are reused, hence only terms with repeating symbolic multipliers are not taken over by the result; the overtaking operations can be useful when dealing with expressions that require large amounts of memory,
IV) for the looped-operation test, the overtaking methods did not display significant advantages (faster by about 4 to 5 percent), while for the second test the computation time was 4 to 7 times shorter than for the += and -= operators.

In conclusion, the fast operation methods pluseq and mineq should be used instead of += and -=, especially for frequent additions/subtractions. If two long expressions are added/subtracted (and not used later on), one can apply the plus_combine and minus_combine methods in order to save memory. With all of its advantages, the presented concept of overwriting operations unfortunately has a drawback, since it requires objects to be available for modification. This opposes the idea of object immutability. Recently, immutability has been strongly promoted by trends in computer architecture, i.e. allowing a several-times-increased computation speed through parallelization [5]. This issue has not yet been resolved in the discussed research.

5.
DIFFERENTIATION AND DEFINITE INTEGRATION

A method that returns the derivative with respect to a selected variable \( s_l \) can apply the well-known formula for multivariate polynomials:

\[ \frac{\partial \zeta(s)}{\partial s_l} = \sum_{i=1}^{N} \frac{\partial T_i(s)}{\partial s_l} = \sum_{\substack{i=1 \\ k_{i,l} \neq 0}}^{N} k_{i,l} a_i s_l^{k_{i,l}-1} \prod_{\substack{j=1 \\ j \neq l}}^{n} s_j^{k_{i,j}}. \tag{3} \]

According to the above, the derivative along \( s_l \) is simply computed by adding the sum's consecutive terms in which \( k_{i,l} \neq 0 \). One can notice that the ordering is retained, as each monomial's exponents change by the same degree; hence the terms can actually be appended using the slast reference. The computation of the definite integral is slightly more difficult (although it initially follows an equally obvious formula, as is shown later on). Its steps can be divided into, firstly, a symbolic evaluation of the indefinite integral and, secondly, substitutions taking into account the boundaries \( s_l = c(s) \) and \( s_l = d(s) \). The auxiliary polynomial \( I \) is computed according to the formula:

\[ I = \int \zeta(s) \, ds_l - C = \sum_{i=1}^{N} \frac{a_i}{k_{i,l} + 1} s_l^{k_{i,l}+1} \prod_{\substack{j=1 \\ j \neq l}}^{n} s_j^{k_{i,j}}, \tag{4} \]

where \( C \) is the integration constant, which does not need to be considered since it cancels out after the substitutions. Up to this moment the ordering is retained, as in the case of differentiation. However, this is no longer the case when substitutions are made. One can distinguish two types of \( s_l = \theta \) substitutions: the first is where \( \theta \) is a constant, while the more difficult case is where it is, in general, another polynomial.
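Formulas (3) and (4) translate almost directly into code. The following hypothetical Python sketch (again not the paper's C# implementation) stores terms as (exponent-tuple, coefficient) pairs; note that, as observed above, both operations preserve the term ordering.

```python
# Hypothetical Python sketch of formulas (3) and (4); terms are stored as
# (exponents, coeff) pairs and l indexes the differentiation/integration
# variable s_l (0-based here).

def diff(poly, l):
    """Derivative along s_l: terms with k_{i,l} = 0 vanish (formula (3))."""
    out = []
    for exps, a in poly:
        k = exps[l]
        if k != 0:
            e = list(exps)
            e[l] = k - 1
            out.append((tuple(e), a * k))
    return out

def antideriv(poly, l):
    """Indefinite integral along s_l with the constant C omitted (formula (4))."""
    out = []
    for exps, a in poly:
        e = list(exps)
        e[l] = exps[l] + 1
        out.append((tuple(e), a / (exps[l] + 1)))
    return out

# zeta = 3*s1^2*s2 + 5*s2  (variables s1, s2)
zeta = [((2, 1), 3.0), ((0, 1), 5.0)]
print(diff(zeta, 0))                # the 5*s2 term vanishes
print(diff(antideriv(zeta, 0), 0))  # recovers zeta
```

Since every surviving term has its \( s_l \) exponent shifted by the same amount, the sketch simply appends terms in their original order, matching the slast-based appending described above.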
The definite integral:

\[ I_{\text{def}} = \int_{c(s)}^{d(s)} \zeta(s) \, ds_l = \left. I \right|_{s_l = d(s)} - \left. I \right|_{s_l = c(s)}, \tag{5} \]

is obtained through an application of the following procedures:
1) depending on whether \( c \) and \( d \) are numerical values or polynomials, the appropriate method overload is executed:
a) if both values are numerical, then the following auxiliary formula is used:

\[ \int_{c}^{d} \zeta(s) \, ds_l = \sum_{i=1}^{N} \frac{a_i}{k_{i,l} + 1} \left( d^{k_{i,l}+1} - c^{k_{i,l}+1} \right) \prod_{\substack{j=1 \\ j \neq l}}^{n} s_j^{k_{i,j}}, \tag{6} \]

b) if at least one is a symbolic expression, then first the polynomial \( I \) is obtained by means of formula (4),
c) if only one of the values is numerical, then for that value the substitution is made for each monomial, which is then added to a resulting polynomial while maintaining proper ordering (this is actually done with methods that the plus_combine operation also executes),
d) for each polynomial substitution \( s_l = \theta \):
- an auxiliary array of SPoly objects (called \( K \) further on) is obtained and filled with the results of the exponentiations that appear subject to the substitution,
- for each monomial the substitution is made by applying the proper entry of \( K \) (note that repeated exponentiations do not need to be recomputed),
- the obtained polynomials are added with plus_combine;
2) the subtraction of equation (5) can now be made.

6. SUMMARY

A simple, lightweight C# implementation has been presented, which makes it possible to perform symbolic computations on multivariate polynomials. It is mainly based on the SPoly and SMono classes, whose instances represent general information on the polynomial and the details of each monomial term, respectively. An algorithm has been presented for searching for monomials that can be added together into one (this is used in every type of addition and subtraction method). Two additional types of methods have been introduced, allowing for faster +=/-= style operations.
The first type (represented by the pluseq and mineq methods) allowed for object overwriting, which resulted in a reduced computation time. The second type (referred to as the overtaking operators) leads to a further reduction of computation time and makes it possible to save memory when dealing with large expressions (the partaking objects are, however, useless after this operation, hence these methods are only suggested in critical cases). Simple procedures have also been implemented to deal with differentiation and definite integration. In part II of this paper the operations presented in the current part are subjected to tests and the results are discussed in terms of numerical efficiency. The author has paid particular attention to the implementation of polynomial multiplication, hence it is discussed separately in part III of this paper.

BIBLIOGRAPHY

Dr inż. Marcin Sowa
Silesian University of Technology
Faculty of Electrical Engineering
Institute of Electrical Engineering and Computer Science
ul. Akademicka 10B
44-100, Gliwice
e-mail: Marcin.Sowa@polsl.pl
Chapel: Base Language

Goals of this Talk
- Help you understand code in subsequent slide decks
- Give you the basic skills to program in Chapel today
- Provide a survey of Chapel's base language features
- Impart an appreciation for the base language design

Note: There is more in this slide deck than we will be able to cover, so consider it a reference and overview

"Hello World" in Chapel: Two Versions
- Fast prototyping
```chapel
writeln("Hello, world!");
```
- "Production-grade"
```chapel
module HelloWorld {
  def main() {
    writeln("Hello, world!");
  }
}
```

Characteristics of Chapel
- **Syntax**
  - Basics taken from C and Modula
  - Influences from several other languages
- **Semantics**
  - Imperative, block-structured execution model
  - Optional object-oriented programming
  - Type inference for convenience and generic programming
  - Static typing for performance and safety
- **Design points**
  - No pointers and limited aliases
  - No compiler-inserted array temporaries
  - Intentionally not an extension of an existing language

Chapel Influences
- ZPL, HPF: data parallelism, index sets, distributed arrays
- Cray MTA C/Fortran: task parallelism, synchronization
- CLU (see also Ruby, Python, C#): iterators
- Scala (see also ML, Matlab, Perl, Python, C#): type inference
- Java, C#: OOP, type safety
- C++: generic programming/templates (but with a different syntax)

Outline
• Introductory Notes
• Elementary Concepts
  • Lexical structure
  • Types, variables, and constants
  • Operators and Assignments
  • Compound Statements
  • Input and output
• Data Types and Control Flow
• Program Structure

Lexical Structure
• Comments
  /* standard C style multi-line */
  // standard C++ style single-line
• Identifiers:
  • Composed of A-Z, a-z, _, $, 0-9
  • Cannot start with 0-9
  • Case-sensitive

## Primitive Types

<table> <thead> <tr> <th>Type</th> <th>Description</th> <th>Default Value</th> <th>Currently-Supported Bit Widths</th> <th>Default Bit Width</th> </tr> </thead> <tbody> <tr> <td>bool</td> <td>logical
value</td> <td>false</td> <td>8, 16, 32, 64</td> <td>impl. dep.</td> </tr> <tr> <td>int</td> <td>signed integer</td> <td>0</td> <td>8, 16, 32, 64</td> <td>32</td> </tr> <tr> <td>uint</td> <td>unsigned integer</td> <td>0</td> <td>8, 16, 32, 64</td> <td>32</td> </tr> <tr> <td>real</td> <td>real floating point</td> <td>0.0</td> <td>32, 64</td> <td>64</td> </tr> <tr> <td>imag</td> <td>imaginary floating point</td> <td>0.0i</td> <td>32, 64</td> <td>64</td> </tr> <tr> <td>complex</td> <td>complex floating point</td> <td>0.0 + 0.0i</td> <td>64, 128</td> <td>128</td> </tr> <tr> <td>string</td> <td>character string</td> <td>""</td> <td>any multiple of 8</td> <td>N/A</td> </tr> </tbody> </table>

### Syntax
```
primitive-type:
  type-name [(bit-width)]
```

### Examples
```
int(64)   // 64-bit int
real(32)  // 32-bit real
uint      // 32-bit uint
```

Implicit Type Conversions (Coercions)

Notes:
- reals do not implicitly convert to ints as in C
- ints and uints don't interconvert as handily as in C
- C# has served as our guide in establishing these rules

Type Aliases and Casts
• **Basic Syntax**
```chapel
type-alias-declaration:
  type identifier = type-expr;
cast-expr:
  expr : type-expr
```
• **Semantics**
  • type aliases are simply symbolic names for types
  • casts are supported between any primitive types
• **Examples**
```chapel
type elementType = complex(64);
5:int(8)        // store value as int(8) rather than int(32)
"54":int        // convert string to an int(32)
249:elementType // convert int to complex(64)
```

Variables, Constants, and Parameters
• Basic syntax
```
declaration:
  var   identifier [: type] [= init-expr];
  const identifier [: type] [= init-expr];
  param identifier [: type] [= init-expr];
```
• Semantics
  • `var/const`: execution-time variable/constant
  • `param`: compile-time constant
  • No `init-expr` ⇒ initial value is the type's default
  • No `type` ⇒ type is taken from `init-expr`
• Examples
```
const pi: real = 3.14159;
var count: int;     // initialized to 0
param debug = true; //
inferred to be bool
```

Config Declarations

**Syntax**
```
config-declaration:
  config type-alias-declaration
  config declaration
```

**Semantics**
- Like normal, but supports command-line overrides
- Must be declared at module/file scope

**Examples**
```chapel
config param intSize = 32;
config type elementType = real(32);
config const epsilon = 0.01:elementType;
config var start = 1:int(intSize);
```
```bash
% chpl myProgram.chpl -sintSize=64 -selementType=real
% a.out --start=2 --epsilon=0.00001
```

<table> <thead> <tr> <th>Operator</th> <th>Description</th> <th>Associativity</th> <th>Overloadable</th> </tr> </thead> <tbody> <tr> <td>:</td> <td>cast</td> <td>left</td> <td>no</td> </tr> <tr> <td>**</td> <td>exponentiation</td> <td>right</td> <td>yes</td> </tr> <tr> <td>! ~</td> <td>logical and bitwise negation</td> <td>right</td> <td>yes</td> </tr> <tr> <td>* / %</td> <td>multiplication, division and modulus</td> <td>left</td> <td>yes</td> </tr> <tr> <td>unary + −</td> <td>positive identity and negation</td> <td>right</td> <td>yes</td> </tr> <tr> <td>+ −</td> <td>addition and subtraction</td> <td>left</td> <td>yes</td> </tr> <tr> <td>&lt;&lt; &gt;&gt;</td> <td>shift left and shift right</td> <td>left</td> <td>yes</td> </tr> <tr> <td>&lt;= &gt;= &lt; &gt;</td> <td>ordered comparison</td> <td>left</td> <td>yes</td> </tr> <tr> <td>== !=</td> <td>equality comparison</td> <td>left</td> <td>yes</td> </tr> <tr> <td>&amp;</td> <td>bitwise/logical and</td> <td>left</td> <td>yes</td> </tr> <tr> <td>^</td> <td>bitwise/logical xor</td> <td>left</td> <td>yes</td> </tr> <tr> <td>|</td> <td>bitwise/logical or</td> <td>left</td> <td>yes</td> </tr> <tr> <td>&amp;&amp;</td> <td>short-circuiting logical and</td> <td>left</td> <td>via isTrue</td> </tr> <tr> <td>||</td> <td>short-circuiting logical or</td> <td>left</td> <td>via isTrue</td> </tr> </tbody> </table>

## Assignments

<table> <thead> <tr> <th>Kind</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>=</td> <td>simple assignment</td>
</tr> <tr> <td>+= -= *= /= %= **= &amp;= |= ^= &amp;&amp;= ||=</td> <td>compound assignment</td> </tr> <tr> <td>&lt;=&gt;</td> <td>swap assignment</td> </tr> </tbody> </table>

- **Note:** assignments are only supported at the statement level

Compound Statements
- **Syntax**
```
compound-stmt:
  { stmt-list }
```
- **Semantics**
  - As in C, permits a series of statements to be used in place of a single statement
- **Example**
```
{
  writeln("Starting a compound statement");
  x += 1;
  writeln("Ending the compound statement");
}
```

Output
- `write(expr-list)`: writes the argument expressions
- `writeln(...)` variant: writes a linefeed after the arguments

Input
- `read(expr-list)`: reads values into the argument expressions
- `read(type-list)`: reads values of given types, returns as tuple
- `readln(...)` variant: same, but reads through next linefeed

Example:
```chapel
var first, last: string;
write("what is your name? ");
read(first);
last = read(string);
writeln("Hi ", first, " ", last);
```
what is your name? Chapel User
Hi Chapel User

I/O to files and strings also supported

Outline
- Introductory Notes
- Elementary Concepts
- Data Types and Control Flow
  - Tuples
  - Ranges
  - Arrays
  - For loops
  - Other control flow
- Program Structure

Tuples
• Syntax
```
heterogeneous-tuple-type:
  ( type, type-list )
homogenous-tuple-type:
  param-int-exp * type
tuple-expr:
  ( expr, expr-list )
```
• Purpose
  • supports lightweight grouping of values (e.g., when passing or returning procedure arguments)
• Examples
```
var coord: (int, int, int) = (1, 2, 3);
var coordCopy: 3*int = coord;
var (i1, i2, i3) = coord;
var triple: (int, string, real) = (7, "eight", 9.0);
```

Range Values
• **Syntax**
```
range-expr:
  [low] .. [high]
```
• **Semantics**
  • Regular sequence of integers
  • low <= high: low, low+1, low+2, ..., high
  • low > high: degenerate (an empty range)
  • low or high unspecified: unbounded in that direction
• **Examples**
```
1..6  // 1, 2, 3, 4, 5, 6
6..1  // empty
3..   // 3, 4, 5, 6, 7, ...
```
Range Operators **Syntax** ``` range-op-expr: range-expr by stride range-expr # count range-expr(range-expr) ``` **Semantics** - **by**: strides range; negative *stride* ⇒ start from *high* - **#**: selects initial *count* elements of range - **()** or **[]**: intersects the two ranges **Examples** ``` 1..6 by 2 // 1, 3, 5 1..6 by -1 // 6, 5, 4, ..., 1 1..6 #4 // 1, 2, 3, 4 1..6[3..] // 3, 4, 5, 6 1.. by 2 // 1, 3, 5, ... 1.. by 2 #3 // 1, 3, 5 1.. #3 by 2 // 1, 3 0..#n // 0, ..., n-1 ``` Array Types - Syntax \[ \text{array-type:} \\ \quad [ \text{index-set-expr} ] \text{ elt-type} \] - Semantics - Stores an element of \text{elt-type} for each index - May be initialized using tuple expressions - Examples ```chapel var A: [1..3] int = (5, 3, 9), // 3-element array of ints B: [1..3, 1..5] real, // 2D array of reals C: [1..3][1..5] real; // array of arrays of reals ``` *Much more on arrays in the data parallelism talk...* **For Loops** - **Syntax** ```chapel for-loop: for index-exp in iterable-exp { stmt-list } ``` - **Semantics** - Executes loop body serially, once per loop iteration - Declares new variables for identifiers in `index-exp` - type and const-ness determined by `iterable-exp` - `iterable-exp` could be a range, array, or iterator - **Examples** ```chapel var A: [1..3] string = (" DO", " RE", " MI"); for i in 1..3 { write(A(i)); } // DO RE MI for a in A { a += "LA"; } write(A); // DOLA RELA MILA ``` Zipper and Tensor Iteration • Syntax ``` zipper-for-loop: for index-expr in ( iterable-exprs ) { stmt-list } tensor-for-loop: for index-expr in [ iterable-exprs ] { stmt-list } ``` • Semantics • Zipper iteration is over all yielded indices pair-wise • Tensor iteration is over all pairs of yielded indices • Examples ``` for i in (1..2, 0..1) { ... } // (1,0), (2,1) for i in [1..2, 0..1] { ... 
} // (1,0), (1,1), (2,0), (2,1)
```

Other Control Flow Statements
- **Conditional statements**
```chapel
if cond { computeA(); } else { computeB(); }
```
- **While loops**
```chapel
while cond { compute(); }
```
- **Select statements**
```chapel
select key {
  when value1 { compute1(); }
  when value2 { compute2(); }
  otherwise   { compute3(); }
}
```
**Note:** *Chapel also has expression-level conditionals and for loops*

Control Flow: Braces vs. Keywords

Note: Most control flow supports keyword-based forms for single-statement versions
- Conditional statements
```chapel
if cond then computeA(); else computeB();
```
- While loops
```chapel
while cond do compute();
```
- Select statements
```chapel
select key {
  when value1 do compute1();
  when value2 do compute2();
  otherwise   do compute3();
}
```
- For loops
```chapel
for indices in iterable-expr do compute();
```

Outline
- Introductory Notes
- Elementary Concepts
- Data Types and Control Flow
- Program Structure
  - Procedures and iterators
  - Modules and main()
  - Records and classes
  - Generics
  - Other basic language features

Procedures, by example
- Example to compute the area of a circle
```chapel
def area(radius: real): real {
  return 3.14 * radius**2;
}
writeln(area(2.0)); // 12.56
```
- Example of argument default values, naming
```chapel
def writeCoord(x: real = 0.0, y: real = 0.0) {
  writeln((x,y));
}
writeCoord(2.0);        // (2.0, 0.0)
writeCoord(y=2.0);      // (0.0, 2.0)
writeCoord(y=2.0, 3.0); // (3.0, 2.0)
```
Argument and return types can be omitted.
Iterators - **Iterator**: a procedure that generates values/variables - Used to drive loops or populate data structures - Like a procedure, but yields values back to invocation site - Control flow logically continues from that point - **Example** ```chapel def fibonacci(n: int) { var current = 0, next = 1; for 1..n { yield current; current += next; current <=> next; } } for f in fibonacci(7) do writeln(f); ``` 0 1 1 2 3 5 8 Argument and Return Intents - Arguments can optionally be given intents - (blank): varies with type; follows principle of least surprise - most types: `const` - arrays, domains, sync vars: passed by reference - `const`: disallows modification of the formal - `in`: copies actual into formal at start; permits modifications - `out`: copies formal into actual at procedure return - `inout`: does both of the above - `param/type`: formal must be a param/type (evaluated at compile-time) - Returned values are `const` by default - `const`: cannot be modified (without assigning to a variable) - `var`: permits modification back at the callsite - `type`: returns a type (evaluated at compile-time) - `param`: returns a param value (evaluated at compile-time) Modules • **Syntax** ```chapel module-def: module identifier { code } module-use: use module-identifier; ``` • **Semantics** • all Chapel code is stored in modules • using a module makes its symbols visible in that scope • module-level statements are executed at program startup • useful for initializing a module • for convenience, a file with top-level code defines a module with the file’s name Semantics Chapel programs start by: - initializing all modules - executing main(), if it exists Any module may define a main() procedure If multiple modules define main(), the user must select one M1.chpl: ```chapel use M2; writeln("Init-ing M1"); def main() { writeln("Running M1"); } ``` M2.chpl: ```chapel module M2 { use M1; writeln("Init-ing M2"); def main() { writeln("Running M2"); } } ``` % chpl M1.chpl M2.chpl 
  --main-module M1
% ./a.out
Init-ing M2
Init-ing M1
Running M1

Revisiting "Hello World"

- Fast prototyping (hello.chpl):

```chapel
writeln("Hello, world!");
```

- "Production-grade":

```chapel
module HelloWorld {
  def main() {
    writeln("Hello, world!");
  }
}
```

- Module-level code is executed during module initialization
- main() is executed when the program begins running

Records and Classes

- Chapel's struct/object types
- Contain variable definitions (fields)
- Contain procedure & iterator definitions (methods)
- Records: value-based (e.g., assignment copies fields)
- Classes: reference-based (e.g., assignment aliases object)
- Record:Class :: C++ class:Java class
- Example

```chapel
record circle {
  var radius: real;
  def area() {
    return pi*radius**2;
  }
}

var c1, c2: circle;
c1 = new circle(radius=1.0);
c2 = c1;            // copies c1
c1.radius = 5.0;
writeln(c2.radius); // 1.0
// records are deleted by the compiler
```

Records and Classes

- Chapel's struct/object types
- Contain variable definitions (fields)
- Contain procedure & iterator definitions (methods)
- Records: value-based (e.g., assignment copies fields)
- Classes: reference-based (e.g., assignment aliases object)
- Record:Class :: C++ class:Java class
- Example

```chapel
class circle {
  var radius: real;
  def area() {
    return pi*radius**2;
  }
}

var c1, c2: circle;
c1 = new circle(radius=1.0);
c2 = c1;            // aliases c1's circle
c1.radius = 5.0;
writeln(c2.radius); // 5.0
delete c1;
```

Method Examples

Methods are procedures associated with types:

```chapel
def circle.circumference return 2 * pi * radius;

writeln(c1.area(), " ", c1.circumference);
```

Methods can be defined for any type:

```chapel
def int.square() return this**2;

writeln(5.square());
```

Generic Procedures

Generic procedures can be defined using type and param arguments:

```chapel
def foo(type t, x: t) { ... }
def bar(param bitWidth, x: int(bitWidth)) { ... }
```

Or by simply omitting an argument type (or type part):

```chapel
def goo(x, y) { ... }
def sort(A: []) { ...
}
```

Generic procedures are instantiated for each unique argument signature:

```chapel
foo(int, 3);        // creates foo(x: int)
foo(string, "hi");  // creates foo(x: string)
goo(4, 2.2);        // creates goo(x: int, y: real)
```

Generic objects can be defined using type and param fields:

```chapel
class Table {
  param size: int;
  var data: size*int;
}

class Matrix {
  type eltType;
  ...
}
```

Or by simply eliding a field type (or type part):

```chapel
record Triple {
  var x, y, z;
}
```

Generic objects are instantiated for each unique type signature:

```chapel
// instantiates Table, storing data as a 10-tuple
var myT: Table(10);

// instantiates Triple as x: int, y: int, z: real
var my3: Triple(int, int, real) = new Triple(1, 2, 3.0);
```

Other Base Language Features not covered today

- Enumerated types
- Unions
- Type select statements, argument type queries
- Parenthesis-less functions/methods
- Procedure dispatch constraints (where clauses)
- Compile-time features for meta-programming
  - type/param procedures
  - folded conditionals
  - unrolled for loops
  - user-defined compile-time warnings and errors

- Most features are in reasonably good shape
- Performance is currently lacking in some cases
- Some semantic checks are incomplete
  - e.g., constness-checking for members, arrays
- Error messages could use improvement at times
- OOP features are limited in certain respects
  - multiple inheritance
  - user constructors for generic classes, subclasses
- Memory for strings is currently leaked

Future Directions

- Fixed-length strings
- I/O improvements
  - Binary I/O
  - Parallel I/O
  - General serialization capability
- Exceptions
- Interfaces
- Namespace control
  - private fields/methods in classes and records
  - module symbol privacy, filtering, renaming
- Interoperability with other languages

- Introductory Notes
  - Characteristics
  - Influences
- Elementary Concepts
  - Lexical structure
  - Types, variables, and constants
  - Operators and assignments
  - Compound Statements
  - Input and output
- Data Types and Control Flow
  - Tuples
  - Ranges
  - Arrays
  - For loops
  - Other control flow
- Program Structure
  - Procedures and iterators
  - Modules and main()
  - Records and classes
  - Generics
  - Other basic language features

Questions?
Toward Open and Unified Link-Layer API

Tim Farnham, Alain Geflaut, Andreas Ibing, Petri Mähönen, Diego Melpignano, Janne Riihijärvi, and Mahesh Sooriyabandara

Abstract—This paper describes the motivation and first results from the work carried out in the European GOLLUM project toward developing an open, extendible, and unified API for accessing link-layer functionality and information. Key features of this API include a general querying mechanism based on database technologies and methods for setting up asynchronous notifications about changes in link conditions, in a technology-independent manner. Applications for such an interface are numerous and cover domains such as mobility and network cross-layer optimization.

Index Terms—API, abstraction layer, link-layer events

I. INTRODUCTION

In the present-day wireless world, application programmers have to be very mindful of the platform they are writing applications for. A program designed to work on a cellular phone, or on a future smart mobile phone, will probably not work without great modification on a PDA equipped with a Bluetooth connection or on a laptop using a wireless LAN. Direct portability to even more embedded devices would be almost unimaginable; in particular, the radio link layers of most automation and control systems are completely proprietary, with no support from operating systems for application programmers. In part this is because of the large number of different operating systems in use. While unification is progressing in this sector, great problems remain in another area, namely the interfaces used to access the wireless devices (or air interfaces). Even on the same or a compatible operating system, the methods used to access, say, a Bluetooth or GSM/WCDMA link and a wireless LAN differ. This difference becomes even greater when the many small, embedded radios are considered.
The situation becomes even more difficult if the application (or middleware) should somehow respond intelligently to events or changes in the wireless channel. The necessary mechanisms are usually simply not there, and even when they are available, they differ from one technology to the next. With this background the necessary development seems clear: a unified API of sufficient generality and extendibility, corresponding embedded middleware, and an embedded software reference implementation should be developed to unify access to, and information retrieval from, various wireless and wired technologies. This is precisely what the GOLLUM project aims to do.

II. IDEA OF UNIFIED LINK-LAYER API

The purpose of the GOLLUM project is to propose and develop a software Application Programmers' Interface (API) that will hide the heterogeneity of network standards behind a common set of functionality applicable to all types of networks. We call such an API a Unified Link Layer API (ULLA). We believe that such an API is a big step toward intelligent, radio-aware software that will be able to accommodate multiple radio standards in a seamless way. The ULLA will help to resolve the complexity and interoperability problems caused by the large number of different APIs and methods used for accessing communication interfaces, especially in the embedded domain. It will provide real and useful triggers and handles for smart, context-sensitive, and link/network-aware applications, enabling the development of "cognitive applications". As a concept this is a well-known paradigm and goal; the problem is that no really acceptable and useful reference API has been provided in the public domain. ULLA will also provide the abstraction and extendibility needed to accommodate the different underlying wireless interfaces and networking technologies that exist now and will emerge in the future.
Important emerging technologies are composite multi-mode radio and Software Defined Radio (SDR); the flexibility of these devices to operate in different modes and to reconfigure dynamically will be made available through ULLA. The ULLA provides an abstraction from specific link technologies to applications or other link users (where link users can include higher-layer protocols, middleware, or application software). It achieves this by regarding a link as a generic means of providing a communication service. Links are made available and configured through link providers to permit abstraction from specific platforms and technologies. The following high-level requirements capture the main functionalities of ULLA.

1) Notify of appropriate link changes and statistics

Only inform link users of events that they are interested in, with appropriate timeliness and granularity. For example, an application could be interested in periodic link statistics or in significant performance-change events. Examples of statistics are the average packet loss rate over a specific period of time or the average latency for packet transmissions. A significant event for one application could be a specific increase or decrease in bandwidth or error rate; to other applications latency variations may be more important, such as the disruption caused by handover events.

2) Process link events

Events from the link providers need to be processed in order to determine whether to forward them to link users, store them, and/or perform statistical operations on the event information. For example, link events could be generated frequently and many may not be of interest to the applications, yet statistics regarding the events may have been requested, such as averages or the lengths of repeated-event runs (for example, packet-loss bursts). The event information therefore needs to be stored and processed to form the notifications to the applications.
3) Provide link information and configure links

The commands to configure links need to be processed to determine which link provider implements the corresponding command, issuing a "not supported" response if necessary. It is also necessary to determine which link provider operations to invoke, for example, to connect or disconnect a specific link. It should therefore be possible for link providers to register with ULLA in order to specify which operations and attributes can be accessed. Applications may require detailed information prior to selecting and configuring specific links, and therefore the selection and retrieval of link-related information (attributes) should be made possible by the ULLA.

III. STATE-OF-THE-ART

There have been several past attempts to provide uniform and simplified access to link layers. Through the Linux Wireless Extensions (LWE) [8], Linux proposes a uniform interface to control and configure wireless network devices. The LWE is implemented as an extension of the socket interface and features the ability to deliver network events to user-level applications. LWE is, however, limited to 802.11-based network drivers and does not provide any way to be dynamically extended. The set of supported events is also constrained to simple things such as link up or down. The Windows NDIS (Network Driver Interface Specification) extends this idea to all network drivers by providing a generic framework and interface allowing applications to query and set Object Identifiers (OIDs) representing specific attributes of a network, such as connection status or bandwidth. Still, the NDIS interface does not cover link layers that are not implemented as NDIS network drivers (Bluetooth, modems). Moreover, the interface provided to applications is low-level, comparable to I/O controls allowing attributes to be queried and set.
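In contrast to such low-level, per-attribute polling, requirements 1) and 2) above call for processed, significance-filtered notifications. The following is a minimal sketch of that idea in Python; class and attribute names (`LinkEventProcessor`, `bandwidth_kbps`) are our invention for illustration, not part of any ULLA specification.

```python
# Sketch: aggregate raw link-layer events and notify subscribers only on
# significant changes, rather than forwarding every raw measurement.
from collections import deque

class LinkEventProcessor:
    def __init__(self, window=5, threshold=10.0):
        self.samples = deque(maxlen=window)  # recent raw samples
        self.threshold = threshold           # minimum change worth reporting
        self.last_reported = None
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def on_link_event(self, bandwidth_kbps):
        """Called by a link provider for every raw measurement."""
        self.samples.append(bandwidth_kbps)
        avg = sum(self.samples) / len(self.samples)
        # Forward only significant changes in the smoothed value.
        if self.last_reported is None or abs(avg - self.last_reported) >= self.threshold:
            self.last_reported = avg
            for cb in self.subscribers:
                cb(avg)

notifications = []
proc = LinkEventProcessor(window=3, threshold=50.0)
proc.subscribe(notifications.append)
for bw in [1000, 1000, 1000, 400, 400, 400]:  # a sharp bandwidth drop
    proc.on_link_event(bw)
print(len(notifications))  # prints 4
```

A steady link produces a single notification; the drop produces a short burst of notifications as the windowed average converges to the new level, which is exactly the kind of condensed information requirement 2) asks the ULLA to compute on behalf of applications.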
To provide a uniform access model to the modems embedded in mobile phones, manufacturers have extended the basic set of commands called "Hayes commands" or "AT commands", originally specified to allow applications to use PSTN lines to exchange information and reach remote information resources. With the rise of the GSM/GPRS system, the AT command specification has been enriched with the aim of allowing the control and management of the capabilities provided by the new devices. The AT interface now supports configuration as well as event notifications and has become the de facto standard for managing modems embedded in phones. However, due to the number of supported commands, the AT interface remains cumbersome to use and is limited to modem-like devices. Some higher-level APIs, such as the Windows Telephony API (TAPI), have been built over the AT command set in order to simplify access to modem-like devices. These interfaces are still dedicated to a single type of device, though.

Similarly, several research projects have tried to provide an interface enabling applications to retrieve information from or configure link layers in a way simpler than the existing interfaces. Odyssey [6] is a framework built on NetBSD that aims to enable application-aware adaptation to resource availability and modification. Applications collaborate with the Odyssey framework by communicating resource expectations expressed in terms of lower and upper bounds. The Odyssey framework is then responsible for monitoring the resources and for notifying the application as soon as a resource falls outside the requested bounds. Although Odyssey shares some of the goals defined for the ULLA (notifying applications of appropriate link changes), it suffers from a non-portable interface, since it is based on the Virtual File System interface of NetBSD. It also does not provide any function to control link layers.
Finally, the number of monitored resources available in Odyssey is limited (six), and extensibility to other resources is poor. CME [7] is another middleware architecture for network-aware adaptive applications in which application notifications are available. The proposed architecture takes, however, another approach, delegating control of the link layers to a centralized Connection Controller whose decisions are based on policies registered by applications. This differs from the proposed ULLA, since the ULLA is not meant to decide which link layer should be used and how. CME also does not address issues such as extensibility or the ability to control the link layers directly from the application level, and it provides only a very limited set of notifications for applications.

IV. EXAMPLE APPLICATION DOMAINS

In the following we discuss some application areas that would benefit from the existence of the ULLA API.

A. Connection Manager (Always Best Connected)

At present, due to limitations in the IP stack, most mobile devices can only handle a single active connection at a time (per link). A connection manager is typically understood as the agent responsible for deciding on the use of the link, but existing connection managers have to operate on a very limited amount of information (essentially driven by profiles and simple inferences from URIs, for example). The existence of ULLA would allow the development of much more intelligent connection management schemes based on information inferred from both the link layer and the end-to-end connection. This could even include the implementation of actual hand-off schemes for switching a connection from one interface to another, using either Mobile IP (see below for further discussion) or any of the various transport- or application-layer schemes. Also, using ULLA, the connection management implementation could be independent of particular link-layer technologies, and also partially transportable between operating systems.
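A connection manager of the kind described above could rank candidate links using the richer attributes ULLA exposes. The following Python sketch illustrates one possible scoring policy; the attribute names (`bandwidth_kbps`, `error_rate`, `cost`) and the policy itself are our assumptions, not part of the ULLA design.

```python
# Sketch of an "always best connected" selector over abstract link records.
def score(link):
    # Simple illustrative policy: prefer bandwidth, penalize error rate
    # and monetary cost. A real policy would also weigh user preferences.
    return link["bandwidth_kbps"] * (1.0 - link["error_rate"]) / link["cost"]

def best_link(links):
    """Return the name of the highest-scoring available link, or None."""
    available = [l for l in links if l["up"]]
    if not available:
        return None
    return max(available, key=score)["name"]

links = [
    {"name": "wlan0", "up": True,  "bandwidth_kbps": 11000, "error_rate": 0.05, "cost": 1.0},
    {"name": "gprs0", "up": True,  "bandwidth_kbps": 40,    "error_rate": 0.01, "cost": 5.0},
    {"name": "bt0",   "up": False, "bandwidth_kbps": 700,   "error_rate": 0.02, "cost": 1.0},
]
print(best_link(links))  # prints wlan0
```

The point of ULLA here is that the same selector can run unchanged over WLAN, GPRS, and Bluetooth links, because all three are presented through the same abstract attribute set.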
B. Unified Cross-Layer Optimization

In recent years, a variety of solutions for wireless cross-layer optimization (CLO) have been proposed. With CLO, multiple parameters at different layers of the protocol stack are jointly optimized to meet application requirements. However, these solutions are often developed in an ad hoc fashion, limiting their widespread use as standard mechanisms. The ability to control radio "knobs" [1] in an intelligent way therefore requires that interfaces be defined to expose controllable parameters. By providing a uniform way to access a wide variety of link-layer parameters in a technology-independent way, ULLA would be an important enabling technology for CLO.

We believe that ULLA could help optimize network operations across the protocol stack. At the network layer, ULLA could be used to optimize mobility solutions by providing early notifications when the link quality falls below a threshold, so that the handover process can be anticipated. This would help reduce handoff latency. In ad hoc networks, link-layer information could be used for routing optimization by selecting the network interface to use in multi-homed terminals. Transport protocols like TCP can be adversely affected by wireless link error and delay patterns, leading to throughput reduction and energy waste in a mobile terminal [1]. Error control at the link layer may interfere with end-to-end flow control at the transport layer, so a joint tuning of parameters seems desirable. Information provided by ULLA about handoffs, network disconnections, and packet losses could be used to differentiate congestion from wireless packet losses, and to improve timer management in TCP (see, for example, [4]) and other protocols sensitive to jitter and other changes in network characteristics. Adaptation to changing link characteristics can also be effective at the application layer, as shown in [1].
In multimedia streaming scenarios, source coding parameters can be varied dynamically based on link statistics, and PHY/MAC parameters can be controlled accordingly. In [2] a CLO example is presented that uses real-time transcoding in a wireless LAN environment. Delay-constrained applications (like wireless VoIP) may want to enforce robustness by adding FEC streams [3] when the packet error rate rises above an acceptable level. Applications could also use ULLA notifications to delay their network requests until the required network characteristics (bandwidth, latency, etc.) are fulfilled.

C. Context-Sensitive Applications

Considerable research effort has gone into creating frameworks for context-sensitive applications. In general, context sensitivity can be taken to be adaptation by the terminal or application to changes in environment, network conditions, location of the user (logical or physical), and so on. However, the actual implementation of context sensitivity has turned out to be extremely difficult, and it has even been argued [5] that "generic artificial intelligence" is necessary to make the concept work. We see ULLA as one enabling technology for building generic context management systems. It would allow receiving, through a unified interface, information about the user context that the network(s) can provide, something that is impossible with present-day technologies. The context ULLA could provide would, for example, include location information (either absolute or relative coordinates in the case of cellular systems or ultra-wideband links, or logical location using information about the detected networks in the case of WLANs), information about user mobility, and information about other devices surrounding the user.

V. PRELIMINARY ARCHITECTURE

On top of providing a uniform API, the architecture of the ULLA has been designed to fulfill the following requirements.
1) Extensibility: the proposed architecture should be able to easily integrate new link-layer technologies, possibly providing new features.
2) Platform independence: the proposed architecture should be able to be integrated on multiple software and hardware platforms.
3) Scalability: the proposed architecture should be light enough to be used on very limited platforms such as sensor devices.
4) Battery friendliness: the proposed architecture should not be a major source of battery drain.

The approach taken in the design of the ULLA is that it should provide an abstract view of the link layers to the ULLA clients. Using an abstract representation allows the ULLA to provide a uniform way to access the wide range of existing link layers independently of their implementation. In order to manipulate these abstractions, a specific query language is used. In the following, a ULLA query specifies a request made by an application to retrieve information about a link layer. A notification request specifies a condition that should trigger an asynchronous notification. Finally, a command specifies an action that should be executed to modify a link layer's state. Queries and notification requests both use the ULLA query language. The following paragraphs give an overview of the basic elements composing the ULLA architecture.

A. Detailed Description of the Architecture

Figure 1 describes the ULLA architecture, which is composed of four main components.

Figure 1. Basic ULLA architecture.

The ULLA Query Processing Engine (UQPE) is responsible for parsing the requests made by ULLA clients (queries, notification requests, and commands). The handling of a request depends on its type. If the request is a query, the UQPE parses it and uses the ULLA storage to return the requested information.
If the request is a notification request, the UQPE stores it in the ULLA storage so that it can be evaluated later. Finally, if the request is a command, it is forwarded to the corresponding ULLA Link Layer Adapter. To provide abstraction of the existing link layers, we use an object-oriented design. The ULLA storage is used to store class definitions (representing link definitions) as well as instances of these classes (representing discovered links). Link definitions are written in an Interface Definition Language (IDL) and expose a set of attributes that can be queried (in this sense, a class definition is similar to a database schema). A link definition also specifies a set of methods available to modify the state of a link layer (commands). Instances of class definitions are created when new link layers are discovered. The ULLA archive storage is an optional component of the architecture. Its purpose is to store historical information about the link layers in order to provide statistical values for ULLA clients. Access to the ULLA archive storage is realized through the UQPE.

Obviously, today's operating systems do not support the interface the ULLA expects. To keep the ULLA platform-independent and enable quick integration into existing systems, we introduce the notion of ULLA Link Layer Adapters (LLAs). An LLA is a proxy interface on an existing driver that implements an interface known to the ULLA (methods and queries), enabling the ULLA to forward queries and method calls to the driver through the LLA. Upon starting, an LLA registers with the ULLA the interface it implements. In addition, LLAs serve the update requests resulting from notification queries made by applications (see the paragraph on battery life). LLAs push modifications of attributes through events sent to the UQPE. LLAs are also used to report link discovery to the ULLA.
In such a case, a new instance of the class definition supported by the LLA is created in the ULLA storage. Because LLAs are tightly coupled with the link-layer implementation, they will probably have to be hand-written. In the future we expect to see ULLA-enabled drivers that integrate the functionality provided by the LLAs.

B. Extensibility

Easy extensibility is probably the most important requirement on the ULLA, particularly in a field where new standards appear at a rapid pace. The ULLA provides extensibility by supporting link-class inheritance. We envisage that a default ULLA_Link class will provide all common attributes and methods generic to all link layers. More specific attributes can be defined in daughter classes inherited from the ULLA_Link class. From an application point of view, it should always be possible to access a link layer using only the attributes defined by the ULLA_Link class. If an application wishes to use more advanced features of a link, it needs to obtain the inherited class describing that link.

C. Battery Life

As presented in the architecture description, the ULLA storage maintains a cached version of the attributes provided by known link layers (bandwidth, signal strength, etc.). The main issue is updating these values while still limiting the battery consumption due to polling or code execution. To reach this goal, the following strategy is used. All attributes maintained in the ULLA storage have an associated timestamp indicating the last time they were updated. When a query request is performed, the required validity of the requested attribute(s) is specified in the query and passed to the UQPE. For each requested attribute, the UQPE checks the attribute's validity against its timestamp. If the attribute is too old, a query is forwarded to the corresponding LLA to retrieve the current value of the attribute.
This "lazy update" strategy enables the ULLA to keep the number of LLA queries to the bare minimum required by applications. Dealing with notification requests is a little trickier. Notification requests specify a condition that should be met before an application is notified; such a condition can span several attributes of several link layers. When such a condition is received, the UQPE breaks it into a series of update requests sent to the involved LLAs. Each LLA update request specifies an attribute as well as a frequency or a threshold that should be used to send an event back to the UQPE. For example, the UQPE could request an LLA to send an update for the signal-strength attribute every 2 seconds, or whenever it changes by more than x dB. Internally, the LLA is free to implement this update request through polling or with the support of the underlying driver, which may provide some form of asynchronous notification. Note that the UQPE is not aware of the internal implementation of LLAs; it only receives events from the LLA, and these events trigger a re-evaluation of the pending notification requests stored in the ULLA storage.

Figure 2. Mapping of a notification request to LLA update requests.

**Example Scenario**

Potentially, a large number of applications could utilise information passed through ULLA to perform specific adaptations and realise performance enhancements. For example, a video application could use information about changing bandwidth passed through ULLA to adapt and optimise its performance. During start-up, the application could request notifications about bandwidth changes. A notification from ULLA to the application may also contain additional information about the availability of alternate transmission modes.
Based on this information, the application could adapt its operation by changing its coding schemes and playback buffers, instruct ULLA to select another mode of operation, or perform both adaptations to maintain user-perceived quality. The final adaptation may depend on the user preferences (on quality, economy, security, etc.), and the application may optimise for power consumption as well as user-perceived quality. **VI. CONCLUSIONS AND FUTURE WORK** In this paper we have presented the motivation for and basic architecture of a Unified Link-Layer API. We believe that such an API is an essential enabling technology toward software radio and more intelligent applications. Although several attempts to design such an API have been documented in the literature, none of the existing approaches really fulfils the requirements we have derived. In particular, simultaneously achieving extensibility and technology independence while retaining simplicity has proved to be a hard problem. The work in the project is now focusing on the specification of the API exposed to the applications in terms of the "high-level" commands. This will set out the generic framework for making queries, subscribing to and receiving notifications, and issuing commands to the link layers. Our present "best guess" is that the API offered to the application will be primarily text-based. Queries, commands and notification subscriptions would be performed via simple, fixed high-level function calls that take as one of their arguments a specification written in a specific query language that will be developed in the next phase of the project. The reason for embracing such a text-based approach is its easy extensibility. The exact structure of the query language is still under discussion, though we believe something like a well-defined subset of SQL will most likely be used as a basis. Parsers for such query languages with a very small footprint are already available.
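To make the idea of an SQL-subset query interface over link-layer attributes concrete, here is a toy sketch (Python with sqlite3; the `ulla_link` table, its columns, and the sample rows are hypothetical illustrations, not part of the specification):

```python
import sqlite3

# Hypothetical table mirroring the ULLA storage: one row per known link.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE ulla_link (name TEXT, bandwidth INTEGER, signal_strength INTEGER)")
con.executemany("INSERT INTO ulla_link VALUES (?, ?, ?)",
                [("wlan0", 54, -60), ("bt0", 1, -40)])

def query_links(predicate):
    """Evaluate an application query, written in an SQL subset, over the link table."""
    return [row[0] for row in
            con.execute("SELECT name FROM ulla_link WHERE " + predicate)]
```

An application would then express its needs declaratively, e.g. `query_links("bandwidth > 10")`, leaving the ULLA free to decide how and when the underlying attributes are refreshed.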
Our viewpoint is to treat the collection of link-layer information essentially as a database, with additional labelling of data with regard to staleness, update frequency, and other such quantities. Further steps to be taken in the near future are the possible refinement of the architecture and clarification of where some of the functionalities reside. Toward the end of the project, our aim is to have a fully working prototype implementation of ULLA for a variety of platforms, including a scaled-down implementation for sensor-type platforms. **ACKNOWLEDGMENT** The authors would like to acknowledge discussions with all the members of the GOLLUM consortium. **REFERENCES**
Managing Text as Data Gordana Pavlovic-Lazetic and Eugene Wong Department of Electrical Engineering and Computer Sciences and the Electronics Research Laboratory University of California, Berkeley, CA 94720 1. Introduction With all their advances, database management systems of the present generation are designed to handle only data of primitive types, namely, numbers and character strings. Several approaches to extending their capabilities to handle data with higher-order semantics exist. One is to add general abstract data type support, so that users can define such data types easily. In this approach, the DBMS makes no attempt to understand the semantics of user-defined data types, and evaluation of operators on such data is done in application programs. As a supplement rather than an alternative, one can also extend the query language and its processor so that certain common non-primitive data types are directly supported by the DBMS. Of these, text and geometric data are probably the two most prominent examples. This paper deals with the case of text. Direct embedding of complex data in a database management system has obvious advantages, the most important one being performance. To manage text as data, the first step is to handle words satisfactorily. Words are, after all, the natural atoms of text. Whereas representing texts as strings of characters captures none of their meaning, representing them as sequences of words is a reasonable first-order semantic representation. Our first step, then, is to introduce words as a data type. Important operations on words are lexical operators, not string operators. They deal with how words are related to each other and how they are used. For example, "went" is a verb in past tense with "go" as its root. "Verb", "past tense", and "go" are values returned by three distinct operators on the word "went". We refer to "words" together with a class of operators on words as the lexical data type.
The principal objective of this paper is to deal with issues that arise in implementing the lexical data type. The specific issues that we shall consider are the following: - efficient storage of words in a relational database - storage and retrieval of words - implementation of lexical operators - resolving ambiguous words represented by the same character strings. The principal application that we envisage for textual databases is automatic extraction of facts. We shall consider some simple examples of this using lexical operators. 2. Encoding Words A natural way of storing texts in a relational database is to represent text by a relation: ``` textname(seqno, word) ``` where "seqno" denotes the order of appearance and "word" stands for words, punctuation and special symbols such as "new paragraph". As character strings, words have greatly varying lengths. For storage in a fixed-length field, character strings are grossly inefficient. A solution to this problem is to encode words into a fixed-length representation. Great compression can be achieved. For example, a 4-byte integer suffices to represent a vocabulary of $2^{32} \approx 4 \times 10^9$ words. There is a second and equally compelling reason to encode. Very little of the lexical information is contained in the character-string representation of a word. Clearly, the fact that "went" has "go" as its root cannot be deduced from the string w-e-n-t alone. If the goal is to implement lexical operators, then words need to be represented in a form in which the values returned by the operators are explicit in the representation. Basically, the coded form of a word should be a composite of the values returned by the set of all admissible operators on the word. There is yet a third reason to encode, namely, removing ambiguity. The same character string often has several meanings. In effect, it represents several different words, or more precisely, different "lexical units".
For example, "well" has at least two unrelated meanings: "good and proper" and "a hole in the ground". For these reasons we believe that encoding words is a must in storing text in a database system, if its meaning is to be exploited. The question is: how can this encoding be done? For compression alone, some kind of automatic encoding can probably be devised. However, no automatic encoding using only the character strings as input can achieve the other two goals, since additional information must be supplied. To provide the lexical information, we shall use a dictionary. To resolve ambiguities, we shall use an expert system. The amount of lexical information that has to be supplied depends on the lexical operators to be supported. Thus, the first step is to define the lexical data type. 3. Lexical Data Type We adopt the following terminology: a lexical unit is the image of a word under encoding, a lexical data set is a set of lexical units together with certain default values, and a lexical data type is a pair \((X, L)\) where \(X\) is a lexical data set and \(L\) is the set of all supported operators. An element of \(X\) is of the form \((id, descr)\), where \(id\) is a four-byte integer that uniquely identifies the element (lexical unit), and \(descr\) is a two-byte descriptor that incorporates additional semantic information. Encoding is done using a dictionary that is represented as a relation as follows: \[ \text{dictionary}(\text{word, class, form, root, prefix, ending, feature, id, descr}) \] where "word" denotes the character string representing a word, "class" denotes the syntactic classification of the word (i.e., verb, noun, etc.), "form" denotes a specific form of the word class (e.g., feminine, infinitive, etc.), and "feature" denotes a semantic feature to be specified later. The meaning of "root", "prefix", and "ending" is clear.
The code \((id, descr)\) is a composite made up as follows: \[ \text{id} = \text{code}(\text{root}) \times 10^4 + \text{code}(\text{prefix}) \times 10^2 + \text{code}(\text{ending}) \] \[ \text{descr} = \text{code}(\text{form}) \times 10^2 + \text{code}(\text{feature}) \] Codes for prefixes, endings and semantic features are read from tables, and a root is encoded on the basis of interpolation of word density in a dictionary; starting codes for roots beginning with a specific letter are determined on the basis of the total number of codes available, proportionally to the number of pages occupied by that initial letter in a sample dictionary. The code of a word's form is a number that is attached, in a table containing an entry for every possible form of any word class, to the form of that word (e.g., 43 for the invariable form of anomalous verbs like "to have" or "to be", 41 for the first person singular of the present tense of those verbs, as "have" or "am", 46 for participles, etc.). Encoding is done as follows: given a word as a character string, we first search for the corresponding entry in the dictionary and extract the code \((id, descr)\). If there is more than one entry, then disambiguation is necessary. The set of operators \(L\) consists of four types of operators: lexical operators, such as finding the root, prefix, ending or semantic feature of a lexical unit, building specific lexical forms such as the plural for nouns or the past tense for verbs, and concatenating or deleting one lexical unit with/from another; syntactic operators, such as finding the word class of a given lexical unit, the tense of a given verb, the degree of a given adjective, or the kind, gender and case of a given pronoun; metric operators, such as the length of a lexical unit in characters; and truth operators, such as equality or ordering of lexical units based on the weights of roots, prefixes, endings, word forms and semantic features.
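With the root in the high digits of id and the prefix and ending below it (the reading consistent with the root procedure given later, which zeroes everything below \(10^4\)), packing and unpacking the code is plain integer arithmetic. A Python sketch with made-up component codes:

```python
def encode(root, prefix, ending, form, feature):
    """Pack component codes into the composite (id, descr) pair."""
    id_ = root * 10**4 + prefix * 10**2 + ending
    descr = form * 10**2 + feature
    return id_, descr

def root_of(id_):
    """Keep only the root portion of id, zeroing the prefix and ending digits."""
    return (id_ // 10**4) * 10**4
```

For example, `encode(1283, 7, 42, 46, 12)` yields the pair `(12830742, 4612)`, and `root_of` recovers `12830000`, the root code shifted into place with prefix and ending cleared.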
Examples of those operators are: - root(went) = go; - end(action) = ion; - tense(went) = past; - lexform(pl, datum) = data; - lexform(past, go) = went; - concat(act, ion) = action; - concat(trans, ion) = null. In what follows, we give a precise and formal specification of the lexical data type. 3.1. Lexical Data Set LEX (the lexical data set) is a union of the following sets of pairs of integers (id, descr): - encoded full lexical units, i.e., encoded lexical units from the dictionary, which are images of words, - sets of pairs (id, descr) having the codes of all the entries from the PREFIX, ENDING and SEM-FEATURE data relations in the corresponding portions of id and descr, and zeros elsewhere, and - "null". SYNT (the syntactic data set) is a union of: - WCL (the word-class set), and - the NFORM, VFORM, AFORM and PFORM sets (the sets of all the different forms corresponding to the noun, verb, adjective-adverb, and pronoun word classes, respectively), i.e., WCL = {reg_noun, reg_verb, reg_adjective, reg_adverb, irreg_noun, irreg_verb, irreg_adjective, irreg_adverb, anom_verb, pronoun, conjunction, prefix, preposition, null}; NFORM = {sing, pl, null}; VFORM = {pres_1st_sing, pres_2nd, pres_3rd_sing, past, participle, null}; AFORM = {positive, comparative, superlative, null}; PFORM = {pers_1st_sing, pers_1st_pl, pers_3rd_sing, pers_3rd_pl, poss_1st_sing, poss_2nd_sing, poss_3rd_sing, poss_pl, show_sing, show_pl, null}. Q is a set of numbers; TF is the truth-value set {T, F}. 3.2. Constants, Variables Constants: l1, l2, ... from LEX; s1, s2, ... from SYNT; q1, q2, ... from Q; t, p from TF. Variables: L1, L2, ... from LEX; S1, S2, ... from SYNT; Q1, Q2, ... from Q; T, P from TF. 3.3. Operators Lexical operators: LEX⁺ → LEX⁺ or LEX⁺ × SYNT → LEX⁺; syntactic operators: LEX⁺ × SYNT → SYNT; metric operators: LEX⁺ → Q; truth operators: LEX⁺ → TF. The operators on the lexical data type are: unary: lexical: root(L1) (in LEX); prefix(L1) (in LEX); end(L1) (in LEX); feat(L1) (in LEX); syntactic: w-class(L1) (in WCL); tense(L1) (in VFORM); number(L1) (in NFORM ∪ PFORM); degree(L1) (in AFORM); kind(L1) (in PFORM); gender(L1) (in NFORM ∪ PFORM); case(L1) (in PFORM); metric: length(L1) (in Q); binary: lexical: lexform(S1, L1) (in LEX); concat(L1, L2) (in LEX⁺); delete(L1, L2) (in LEX⁺); truth: equal(L1, L2) (in TF); less_eq(L1, L2) (in TF); gr_eq(L1, L2) (in TF). 3.4. Lexical and Logical Expressions A lexical expression is a sequence of constants and variables from the set LEX and the sets supporting it, intermixed with operators, leading to a LEX-type result.
Lexical predicates are of the form truth_op(expr1, expr2), where expr1, expr2 are any lexical expressions, and truth_op is any of the binary truth operators defined above. Logical expressions (and thus qualifications) are extended to accept lexical predicates as arguments of logical operators (not, and, or). 3.5. Procedures for Operator Evaluation Operators on lexical data are defined by procedures having encoded lexical units (i.e., pairs of integers) and values from the syntactic data set as their arguments. The following are some examples of those procedures, written in a C-like language: ``` root(L) int L[2]; { L[0] = (L[0] / 10000) * 10000; L[1] = 0; } lexform(form, L) char *form; int L[2]; { if (strcmp(form, "sing") == 0) singular(L); else if (strcmp(form, "pl") == 0) plural(L); else if (strcmp(form, "pres_1st_sing") == 0) pr1s(L); else if (strcmp(form, "pres_2nd") == 0) pr2s(L); else if (strcmp(form, "pres_3rd_sing") == 0) pr3s(L); else if (strcmp(form, "past") == 0) past(L); else if (strcmp(form, "participle") == 0) participle(L); else if (strcmp(form, "positive") == 0) pst(L); else if (strcmp(form, "comparative") == 0) compar(L); else if (strcmp(form, "superlative") == 0) superl(L); else if (strcmp(form, "pers_1st_sing") == 0) prs1s(L); else if (strcmp(form, "pers_m_1st_sing") == 0) prm1s(L); else if (strcmp(form, "pers_1st_pl") == 0) pr1p(L); else if (strcmp(form, "pers_f_3rd_sing") == 0) pr3sg(L); else if (strcmp(form, "pers_m_3rd_sing") == 0) prm3s(L); else if (strcmp(form, "pers_3rd_pl") == 0) pr3p(L); else if (strcmp(form, "poss_f_sing") == 0) psfs(L); else if (strcmp(form, "poss_m_sing") == 0) psms(L); else if (strcmp(form, "poss_n_sing") == 0) pns(L); else if (strcmp(form, "poss_pl") == 0) psp(L); else if (strcmp(form, "show_sing") == 0) ss(L); else if (strcmp(form, "show_pl") == 0) sps(L); else L[0] = L[1] = 0; /* null */ } ``` The procedure "singular" might be defined as follows: ``` singular(L) int L[2]; { if (L[1] / 1000 == 6 && L[1] / 100 == 16) L[0] = L[1] = 0; /* null */ else if (L[1] / 100 == 81 || L[1] / 100 == 151) { L[0] = L[0] - 1; L[1] = L[1] - 100; } } ``` and similarly for the other procedures. 4.
Text Representation Our goal is to take a text in its natural form and automatically convert it into a relation: text(seqno, lex) where seqno represents the sequential order and lex (lexical unit) is either the image of a word under encoding or a special symbol. The process of encoding (a) reduces a word to a fixed-length representation, (b) makes explicit the lexical properties required to support the desired operators, and (c) resolves any ambiguity that may be present in the character-string form. The automatic conversion of text is done using: a text scanner, a dictionary, and an expert system for resolving ambiguity. 4.1. Dictionary The structure of the dictionary has already been described in section 3. It contains all words except plurals of regular nouns, tenses of regular verbs, and comparatives and superlatives of regular adjectives and adverbs. Roots, prefixes and endings are determined by hand and their meaning is obvious; one rule about roots is that they are always words themselves. A semantic feature is a marker that expresses the semantics of a word or of a specific use of a word (e.g., ACTION for the word "work", LOCATION for the word "abroad", TIME for the word "then", QUALITY for the word "brilliance", both MEASURE and EMOTION for the word "content"). The set of semantic features we use is much like the one in [SiCh 82], extended with a hierarchical structure. For example, the semantic feature TIME has as its subordinate semantic features FUTURE, PRESENT and PAST. Our set contains about 50 semantic features. The dictionary encoding is done by an EQUEL program. For our experimental study, we have built a dictionary with 1400 entries of basic words. 4.2. Lexical Rules Since different forms of regular words are not present in the dictionary, lexical (i.e., morphological) rules for synthesizing or recognizing them are necessary in order for a text to be encoded.
An example of those rules is the following: - if a word from the text ends with "ies" and in the dictionary there is a noun equal to that word except for the ending being "y" instead of "ies", then the word is the noun from the dictionary, in the plural. Those rules are stored in a relation "lexrule" of the form: ``` word_ending | dict_entry_ending | word's class | dict_entry's class | descr | code offset ``` The word and dict-entry endings are the letter groups that should be deleted from the end of the word to be encoded and then added in their place, respectively, in order to obtain the dictionary entry corresponding to the word being encoded (e.g., the ending "ies" should be deleted from the end of the word "copies" and the ending "y" then added to "cop" in order to get the dictionary entry "copy"). The word's and dict-entry's classes are the word classes to which the word being encoded and the corresponding dictionary entry, respectively, belong (e.g., noun for both in the previous example). The descriptor is an explanation of the form found in the text (the code for "plural" in our case), and the code offset says how to calculate the code of the word being encoded from the code of the corresponding dictionary entry. The lexical-rule relation contains about 40 rules. 4.3. An Expert System for Resolving Ambiguity According to the classification of expert systems in [HaWL 83], our expert system is of the interpretation type. The components of the system are: (1) a blackboard used to record intermediate results, (2) a knowledge base containing facts from the dictionary and rules used for resolving ambiguity, (3) an interpreter that applies a rule from the knowledge base and posts changes to the blackboard, and (4) a scheduler that controls the order of rule processing according to whether the ambiguity is to be resolved syntactically or semantically.
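Applying the "ies"/"y" rule described earlier in this section is mechanical: strip the word ending, append the dictionary ending, and check the result against the dictionary. A Python sketch (the rule table and dictionary here are tiny made-up illustrations):

```python
# Each rule: (word_ending, dict_entry_ending, descriptor)
LEXRULES = [("ies", "y", "noun_plural"),
            ("s",   "",  "noun_plural")]

def apply_lexrules(word, dictionary):
    """Try each lexical rule in order; return (dictionary_entry, descriptor) on a match."""
    for word_end, dict_end, descr in LEXRULES:
        if word.endswith(word_end):
            candidate = word[:len(word) - len(word_end)] + dict_end
            if candidate in dictionary:
                return candidate, descr
    return None
```

So "copies" reduces to the dictionary entry "copy" with the descriptor for plural, exactly the example given in the text.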
In most cases, ambiguity is between word classes (e.g., noun and verb) and is resolved using context. For example, suppose that the phrase "a set of rules" is encountered. The word "rules" is either "verb - third person singular" or "noun - plural". In this case, the ambiguity is easily resolved by the rule: a "preposition-noun" combination is far more likely than a "preposition-verb" combination. As in MYCIN [DAVI 77], we use a probability model, and our rules have the form (antecedent, consequent, probability), where the antecedent specifies a set of conditions under which the rule is applicable, the consequent is the conclusion, and the probability gives a weight to the conclusion. For example, we might have: antecedent: if x is a noun or a verb and x follows a preposition; consequent: then x is a noun; probability: 0.9. The architecture of the expert system was chosen on the basis of the knowledge, data and solution space appropriate to our problem. Using the terminology found in [STEF 82], we find that we have a small solution space (few possible choices), unreliable data and knowledge (the context used for resolving the ambiguity of a word might be ambiguous as well, and the rules representing knowledge are not absolutely correct), and fixed (time-independent) data. For such an environment, [STEF 82] suggests an expert-system organization that applies exhaustive search and combines evidence from multiple sources with a probability model. Thus, our strategy is a MYCIN-like one [DAVI 77]. It is designed to make an exhaustive search through the set of rules applicable to a given situation, and stops short of exhaustion only when the ambiguity is resolved with certainty. A backward-chaining control strategy is used. The search is hypothesis-driven: from possible solutions to the related antecedent conditions and to their required data.
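The weighting of (antecedent, consequent, probability) rules can be sketched as follows (illustrative Python; the rules and weights are invented for the example, and real combination of evidence would be more elaborate):

```python
def disambiguate(candidates, context, rules):
    """Score each candidate word class by the weights of the rules that apply."""
    scores = {c: 0.0 for c in candidates}
    for antecedent, consequent, weight in rules:
        # A rule contributes its weight when its antecedent holds in this context.
        if consequent in scores and antecedent(context):
            scores[consequent] += weight
    return max(scores, key=scores.get)
```

With a rule "if the previous word is a preposition, then noun, weight 0.9", the ambiguous "rules" in "a set of rules" is resolved to a noun, as in the paper's example.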
Our expert system was built using EQUEL [INGR 81], which is QUEL (the query language for INGRES) coupled with the general-purpose programming language C [KeRi 78], rather than knowledge-representation languages [HaWL 83]. In our experimental system, we have 110 rules, 50 of which involve word class (e.g., noun vs. verb), 30 involve semantic features (e.g., time or place), and 30 are word-specific (e.g., the noun "drugs" or the adverb/noun "back"). Both rules and facts, as well as the dictionary, are stored as relations in INGRES. Figure 1 depicts the flow of control among the basic procedures. All procedures have read and write access to a blackboard, which is a C array of structures. 4.4. Text Encoding Texts are scanned first, and then encoded and stored on a sentence-by-sentence basis. The current word is matched against the dictionary entries, taking into account the lexrule relation. It is appended to the blackboard together with information about its position in the text and, if unambiguous, with its code and descriptor. If a word is ambiguous, then it is marked, indicating the kind of ambiguity encountered. The procedure for resolving ambiguities in a sentence is then called, which fires the expert-system procedures for every word on the blackboard marked as ambiguous. The contents of the blackboard are then written to an output file and, at the end, stored in a relation. In one experiment, Albert Einstein's biography [ENCY 79] was used as a text; it contained 4086 words. 5. Applications Applications of the system include: extraction of keywords and phrases (an information-retrieval application), stylistic homogeneity testing (a computational-linguistics application), and extracting precise information from texts. We shall describe the last one in greater detail. Extracting precise information from texts consists of asking a question about a fact from the text (e.g., when a person named "X" was born) and finding the answer (e.g., 1879).
Our approach to extracting facts from texts is to view texts as a virtual relational database corresponding to a specific schema. The schema defines, a priori, the universe of all queries that may be posed, and the answer to a query is found from one or more texts at execution time. Thus, except for the encoding at load time, the texts are not preprocessed. Query processing makes heavy use of the syntactic and semantic features of words that we have designed into the code. We have constructed an experimental system with a collection of biographies in the (virtual relational) database and the following schema of relations with attribute:domain pairs: birth(author:person, birth-d:date, birth-pl:place); degree(name:person, deg:degree, deg-d:date, deg-inst:institution, field:field_of_science); education(name:person, attend-inst:institution, field:field_of_science, period:(date, date)); emp_history(name:person, employer:institution, position:position, d_started:date, d_left:date); location(inst_name:institution, pl:place); research_interest(name:person, area:field_of_science, period:(date, date) U [w|feat(w)="PRF"] U (date-date)); publication(author:person, title:citation, d:date, publisher:institution). A priori, lexical information concerning some of the relations and domains may be supplied, for example: birth: root: "birth"; person: word class: proper phrase; semantic feature: HUM; place: word class: proper phrase; semantic feature: LOC. As we have explained, no stored relations correspond to the schema above. Instead, the collection of texts is stored as a relation cod_text(tno, sno, id, descr), where tno (text number) identifies a particular biography, sno identifies a sentence, and (id, descr) is the coded form of a lexical unit. Now the question "Where was Albert Einstein born?" can be expressed as a virtual query: range of e is birth retrieve (e.birth-pl) where e.author = "Albert Einstein".
Using the facts code("LOCATION") = 46, code("Albert Einstein") = -1607030, and code(root = "birth") = 12638, we can translate the virtual query into a real query: range of e is cod_text range of u is cod_text range of v is cod_text retrieve (e.id) where e.id <> 0 and feat(e.id, e.descr) = 46 and e.sno = u.sno and e.tno = u.tno and root(e.id, e.descr) = (126380000, 0) and v.tno = u.tno and v.sno = u.sno and v.id = -1607030, which yields the answer "Ulm, Germany". 6. Conclusion We have presented a way of handling texts in a relational database system so that: (a) storage efficiency is maintained, (b) ambiguity of words is resolved, and (c) lexical (word-based) information, both syntactic and semantic, is made explicit. These goals are achieved through encoding, which in turn uses a dictionary and an expert system for resolving ambiguity. Once a dictionary is built, any machine-readable text can be automatically encoded with no human intervention. Our long-term goal is to apply what we have done to the problem of extracting facts from texts. A simple and rather primitive version of such a system is given as an example. However, considerably more work will be required for a fact-extraction system of general utility, and we are in the midst of such a development. References: [DAVI 77] Davis, R., et al., Production rules as a representation for a knowledge-based consultation program, Artificial Intelligence, 9 (1977), pp. 15-45; [ENCY 79] The New Encyclopaedia Britannica, Macropaedia, 1979; [HaWL 83] Hayes-Roth, F., Waterman, D.A., Lenat, D.B. (Eds.), Building Expert Systems, Addison-Wesley Publishing Co., 1983; [INGR 81] Univ. of California, Berkeley, Memo No. UCB/ERL M81/81, Aug. 1981; [KeRi 78] Kernighan, B.W., Ritchie, D.M., The C Programming Language, Prentice-Hall Software Series, 1978; [SiCh 82] Simmons, R., and Chester, D., Relating sentences and semantic networks with procedural logic, CACM, vol. 25, no. 8, Aug. 1982, pp. 527-547; [STEF 82] Stefik, M., et al., The organization of expert systems, Artificial Intelligence, 18 (1982).
Classical Planning Chapter 10 Prof. Laura Zavala, laura.zavala@umbc.edu, ITE 373, 410-455-8775 Today’s class • What is planning? • Representation – PDDL • Approaches to planning – GPS / STRIPS – Situation calculus formalism – State space planning – Graph-based planning Planning problem • Find a **sequence of actions** that achieves a given **goal** when executed from a given **initial world state**. That is, given – a set of operator descriptions (defining the possible primitive actions by the agent), – an initial state description, and – a goal state description or predicate, compute a plan, which is – a sequence of operator instances, such that executing them in the initial state will change the world to a state satisfying the goal-state description. • Goals are usually specified as a conjunction of goals to be achieved Planning • Planning is finding and choosing a sequence (or a “program”) of actions to achieve goals. • Unlike a theorem prover, we are not asking whether the goal is true, but whether there is a sequence of actions to attain it - Move C to the table - Put B on top of C - Put A on top of B Planning vs. problem solving • Planning and problem solving methods can often solve the same sorts of problems Planning vs. problem solving - Problem solving deals with atomic representations of states - Planning is more powerful because of the representations and methods used Planning vs. 
problem solving - States, goals, and actions are decomposed into sets of sentences (usually in first-order logic or a subset of it) - Search often proceeds through *plan space* rather than *state space* (though there are also state-space planners) - Subgoals can be planned independently, reducing the complexity of the planning problem Typical assumptions • **Atomic time**: Each action is indivisible • **No concurrent actions** are allowed (though actions do not need to be ordered with respect to each other in the plan) • **Deterministic actions**: The results of actions are completely determined: there is no uncertainty in their effects • Agent is the **sole cause of change** in the world • Agent is **omniscient**: Has complete knowledge of the state of the world • **Database semantics**: closed-world assumption and unique-names assumption Representation – General planning algorithms require a way to represent states, actions and goals – STRIPS and PDDL are languages based on propositional or first-order logic PDDL: Planning Domain Definition Language - States are represented as a conjunction of **ground** literals - \( \text{at(Home)} \land \text{have(Milk)} \land \text{have(Bananas)} \ldots \) - Actions have - **Action Name and Parameter List** - **Precondition** - conjunction of positive literals - **Effect** - conjunction of positive or negative literals which describe how the situation changes when the operator is applied \[ \text{Action(Fly(p,from, to),} \\ \text{PRECOND: At(p,from) \land Plane(p) \land Airport(from) \land Airport(to)} \\ \text{EFFECT: } \neg \text{At(p,from)} \land \text{At(p,to)} \] Actions are specified in terms of what changes; everything that stays the same is left unmentioned Applicability of Actions • An action is *applicable* in any state that satisfies its precondition. • For a first-order action schema, applicability involves a substitution $\theta$ for the variables in the PRECOND. 
For example, the precondition \( \text{At}(p, \text{from}) \land \text{Plane}(p) \land \text{Airport}(\text{from}) \land \text{Airport}(\text{to}) \) is satisfied with $\theta = \{p/P1, \text{from}/\text{JFK}, \text{to}/\text{SFO}\}$; thus the action is applicable. Effect of Actions • Executing action a in state s results in state s’, where s’ is the same as s except – Any positive literal P in the effect of a is added to s’ – Any negative literal ¬P is removed from s’ \[ s: \text{At}(P1,JFK) \land \text{At}(P2,SFO) \] \[ \land \text{Plane}(P1) \land \text{Plane}(P2) \land \text{Airport}(JFK) \land \text{Airport}(SFO) \] \[ \text{EFFECT: } \neg \text{At}(p,\text{from}) \land \text{At}(p,\text{to}) : \] \[ s’: \text{At}(P1,SFO) \land \text{At}(P2,SFO) \] \[ \land \text{Plane}(P1) \land \text{Plane}(P2) \land \text{Airport}(JFK) \land \text{Airport}(SFO) \] • Assumption: every literal NOT mentioned in the effect remains unchanged Blocks world operators • Here are the classic basic operations for the blocks world: – stack(X,Y): put block X on block Y – unstack(X,Y): remove block X from block Y – pickup(X): pick up block X – putdown(X): put block X on the table • Each action will be represented by: – a list of preconditions – a list of new facts to be added (add-effects) – a list of facts to be removed (delete-effects) – optionally, a set of (simple) variable constraints • For example: preconditions(stack(X,Y), [holding(X), clear(Y)]). deletes(stack(X,Y), [holding(X), clear(Y)]). adds(stack(X,Y), [handempty, on(X,Y), clear(X)]). constraints(stack(X,Y), [X≠Y, Y≠table, X≠table]). The **blocks world** is a micro-world that consists of a table, a set of blocks and a robot hand. 
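The applicability test and the effect computation above (s’ = (s − delete) ∪ add, with unmentioned literals unchanged) are mechanical enough to sketch in code. The following Python encoding of the ground Fly instance, with a state as a set of literal strings, is our own illustration, not from the slides:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset  # positive literals that must hold in s
    add: frozenset      # positive effects
    delete: frozenset   # negated effects (the "delete list")

def applicable(state, action):
    # An action is applicable in any state satisfying its precondition.
    return action.precond <= state

def apply_action(state, action):
    # s' = (s - delete) | add; every literal not mentioned is unchanged.
    assert applicable(state, action)
    return (state - action.delete) | action.add

# Ground instance of Fly(p, from, to) under theta = {p/P1, from/JFK, to/SFO}
fly = Action("Fly(P1,JFK,SFO)",
             precond=frozenset({"At(P1,JFK)", "Plane(P1)",
                                "Airport(JFK)", "Airport(SFO)"}),
             add=frozenset({"At(P1,SFO)"}),
             delete=frozenset({"At(P1,JFK)"}))

s = frozenset({"At(P1,JFK)", "At(P2,SFO)", "Plane(P1)", "Plane(P2)",
               "Airport(JFK)", "Airport(SFO)"})
s2 = apply_action(s, fly)  # At(P1,JFK) is deleted, At(P1,SFO) is added
```

Note that only the two At literals for P1 change; the Plane, Airport, and At(P2, SFO) literals ride along untouched, exactly as the "unmentioned literals remain unchanged" assumption requires.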
Some domain constraints: - Only one block can be on another block - Any number of blocks can be on the table - The hand can only hold one block Typical representation: ``` ontable(a) ontable(c) on(b,a) handempty clear(b) clear(c) ``` operator(stack(X,Y), **Precond** [holding(X), clear(Y)], **Add** [handempty, on(X,Y), clear(X)], **Delete** [holding(X), clear(Y)], **Constr** [X≠Y, Y≠table, X≠table]). operator(unstack(X,Y), [on(X,Y), clear(X), handempty], [holding(X), clear(Y)], [handempty, clear(X), on(X,Y)], [X≠Y, Y≠table, X≠table]). operator(pickup(X), [ontable(X), clear(X), handempty], [holding(X)], [ontable(X), clear(X), handempty], [X≠table]). operator(putdown(X), [holding(X)], [ontable(X), handempty, clear(X)], [holding(X)], [X≠table]). Major approaches • GPS / STRIPS / PDDL • Progression and Regression • Situation calculus • Partial-order planning • Planning with constraints (SATplan, Graphplan) • Hierarchical decomposition (HTN planning) • Reactive planning General Problem Solver • The General Problem Solver (GPS) system was an early planner (Newell, Shaw, and Simon) • GPS generated actions that reduced the difference between some state and a goal state • GPS used Means-Ends Analysis – Compare what is given or known with what is desired and select a reasonable thing to do next – Use a table of differences to identify procedures to reduce types of differences • GPS was a state space planner: it operated in the domain of state space problems specified by an initial state, some goal states, and a set of operations Situation calculus planning • Intuition: Represent the planning problem using first-order logic – Situation calculus lets us reason about changes in the world – Use theorem proving to “prove” that a particular sequence of actions, when applied to the situation characterizing the world state, will lead to a desired result Situation calculus • **Initial state:** a logical sentence about (situation) $S_0$ \[ \text{At(Home, } S_0 \text{)} \land \neg 
\text{Have(Milk, } S_0 \text{)} \land \neg \text{Have(Bananas, } S_0 \text{)} \land \neg \text{Have(Drill, } S_0 \text{)} \] • **Goal state:** \[ (\exists s) \text{At(Home,} s \text{)} \land \text{Have(Milk,} s \text{)} \land \text{Have(Bananas,} s \text{)} \land \text{Have(Drill,} s \text{)} \] • **Operators** are descriptions of how the world changes as a result of the agent’s actions: \[ \forall (a,s) \text{Have(Milk, Result}(a,s)) \iff ((a=\text{Buy(Milk)} \land \text{At(Grocery,} s \text{)}) \lor (\text{Have(Milk,} s \text{)} \land a \neq \text{Drop(Milk)})) \] • Result$(a,s)$ names the situation resulting from executing action $a$ in situation $s$. • Action sequences are also useful: Result'(l,s) is the result of executing the list of actions (l) starting in $s$: \[ (\forall s) \text{Result'}([],s) = s \] \[ (\forall a,p,s) \text{Result'}([a|p],s) = \text{Result'}(p,\text{Result}(a,s)) \] A solution is a plan that when applied to the initial state yields a situation satisfying the goal query: \[ \text{At(Home, Result'(p,S_0))} \\ \land \text{Have(Milk, Result'(p,S_0))} \\ \land \text{Have(Bananas, Result'(p,S_0))} \\ \land \text{Have(Drill, Result'(p,S_0))} \] Thus we would expect a plan (i.e., variable assignment through unification) such as: \[p = \text{[Go(Grocery), Buy(Milk), Buy(Bananas), Go(HardwareStore), Buy(Drill), Go(Home)]}\] Situation calculus: Blocks world • A situation calculus rule for the blocks world: – Clear (X, Result(A,S)) ↔ [Clear (X, S) ∧ (¬(A=Stack(Y,X) ∨ A=Pickup(X)) ∨ (A=Stack(Y,X) ∧ ¬(holding(Y,S))) ∨ (A=Pickup(X) ∧ ¬(handempty(S) ∧ ontable(X,S) ∧ clear(X,S))))] ∨ [A=Stack(X,Y) ∧ holding(X,S) ∧ clear(Y,S)] ∨ [A=Unstack(Y,X) ∧ on(Y,X,S) ∧ clear(Y,S) ∧ handempty(S)] ∨ [A=Putdown(X) ∧ holding(X,S)] • English translation: A block is clear if (a) in the previous state it was clear and we didn’t pick it up or stack something on it successfully, or (b) we stacked it on something else successfully, or (c) something was on it that we unstacked 
successfully, or (d) we were holding it and we put it down. Situation calculus planning: Analysis • This is fine in theory, but remember that problem solving (search) is exponential in the worst case • Also, resolution theorem proving only finds a proof (plan), not necessarily a good plan • So we restrict the language and use a special-purpose algorithm (a planner) rather than a general theorem prover STRIPS planning - STRIPS maintains two additional data structures: - **State List** - all currently true predicates. - **Goal Stack** - a push-down stack of goals to be solved, with the current goal on top of the stack. - If the current goal is not satisfied by the present state, examine the add lists of operators, and push the operator and its precondition list on the stack. (Subgoals) - When a current goal is satisfied, POP it from the stack. - When an operator is on top of the stack, record the application of that operator in the plan sequence and use the operator’s add and delete lists to update the current state. Typical BW planning problem Initial state: - clear(a) - clear(b) - clear(c) - ontable(a) - ontable(b) - ontable(c) - handempty Goal: - on(b,c) - on(a,b) - ontable(c) A plan: - pickup(b) - stack(b,c) - pickup(a) - stack(a,b) Another BW planning problem Initial state: - clear(a) - clear(b) - clear(c) - ontable(a) - ontable(b) - ontable(c) - handempty Goal: - on(a,b) - on(b,c) - ontable(c) A plan: - pickup(a) - stack(a,b) - unstack(a,b) - putdown(a) - pickup(b) - stack(b,c) - pickup(a) - stack(a,b) Goal interaction - Simple planning algorithms assume that the goals to be achieved are independent - Each can be solved separately and then the solutions concatenated - This planning problem, called the “Sussman Anomaly,” is the classic example of the goal interaction problem: - Solving on(A,B) first (by doing unstack(C,A), stack(A,B)) will be undone when solving the second goal on(B,C) (by doing unstack(A,B), stack(B,C)). 
- Solving on(B,C) first will be undone when solving on(A,B) - Classic STRIPS could not handle this, although minor modifications can get it to do simple cases Achieve on(a,b) via stack(a,b) with preconds: [holding(a),clear(b)] | Achieve holding(a) via pickup(a) with preconds: [ontable(a),clear(a),handempty] | | Achieve clear(a) via unstack(_1584,a) with preconds: [on(_1584,a),clear(_1584),handempty] | | Applying unstack(c,a) | | Achieve handempty via putdown(_2691) with preconds: [holding(_2691)] | | Applying putdown(c) | | Applying pickup(a) | | Applying stack(a,b) | Achieve on(b,c) via stack(b,c) with preconds: [holding(b),clear(c)] | Achieve holding(b) via pickup(b) with preconds: [ontable(b),clear(b),handempty] | | Achieve clear(b) via unstack(_5625,b) with preconds: [on(_5625,b),clear(_5625),handempty] | | Applying unstack(a,b) | | Achieve handempty via putdown(_6648) with preconds: [holding(_6648)] | | Applying putdown(a) | | Applying pickup(b) | | Applying stack(b,c) | Achieve on(a,b) via stack(a,b) with preconds: [holding(a),clear(b)] | Achieve holding(a) via pickup(a) with preconds: [ontable(a),clear(a),handempty] | | Applying pickup(a) | | Applying stack(a,b) | From [clear(b),clear(c),ontable(a),ontable(b),on(c,a),handempty] To [on(a,b),on(b,c),ontable(c)] Do: - unstack(c,a) - putdown(c) - pickup(a) - stack(a,b) - unstack(a,b) - putdown(a) - pickup(b) - stack(b,c) - pickup(a) - stack(a,b) State-space planning • We initially have a space of situations (where you are, what you have, etc.) 
• The plan is a solution found by “searching” through the situations to get to the goal • A progression planner searches forward from the initial state to the goal state • A regression planner searches backward from the goal – This works if operators carry enough information to go both ways – Ideally this leads to reduced branching: the planner is only considering things that are relevant to the goal State-space planning - Progression planners - forward state-space search - consider the effect of all possible actions in a given state - Regression planners - backward state-space search - Determine what must have been true in the previous state in order to achieve the current state Progression algorithm - Formulation as a state-space search problem: - Initial state and goal test: obvious - Successor function: generate from applicable actions - Step cost = each action costs 1 - Any complete graph search algorithm is a complete planning algorithm. - E.g. A* - Inherently inefficient: - (1) irrelevant actions lead to a very broad search tree - (2) a good heuristic is required for efficient search Regression algorithm • How to determine predecessors? – What are the states from which applying a given action leads to the goal? Goal state = $\text{At}(C1, B) \land \text{At}(C2, B) \land \ldots \land \text{At}(C20, B)$ Relevant action for the first conjunct: $\text{Unload}(C1,p,B)$ Works only if the preconditions are satisfied. Previous state = $\text{In}(C1, p) \land \text{At}(p, B) \land \text{At}(C2, B) \land \ldots \land \text{At}(C20, B)$ The subgoal $\text{At}(C1,B)$ should not be present in this state. • Actions must not undo desired literals (consistent) • Main advantage: only relevant actions are considered. – Often a much lower branching factor than forward search. 
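The progression formulation above (initial state, applicable-action successor function, unit step cost, subset goal test) can be sketched as an uninformed breadth-first planner over ground blocks-world actions. The encoding below, with states as frozensets of literal strings, is our own illustration rather than anything from the slides:

```python
from collections import deque
from itertools import permutations

BLOCKS = ["a", "b", "c"]

def ground_actions():
    # Each ground action is (name, preconditions, add list, delete list),
    # instantiated from the operator table given earlier.
    acts = []
    for x in BLOCKS:
        acts.append((f"pickup({x})",
                     {f"ontable({x})", f"clear({x})", "handempty"},
                     {f"holding({x})"},
                     {f"ontable({x})", f"clear({x})", "handempty"}))
        acts.append((f"putdown({x})",
                     {f"holding({x})"},
                     {f"ontable({x})", f"clear({x})", "handempty"},
                     {f"holding({x})"}))
    for x, y in permutations(BLOCKS, 2):   # enforces the X != Y constraint
        acts.append((f"stack({x},{y})",
                     {f"holding({x})", f"clear({y})"},
                     {"handempty", f"on({x},{y})", f"clear({x})"},
                     {f"holding({x})", f"clear({y})"}))
        acts.append((f"unstack({x},{y})",
                     {f"on({x},{y})", f"clear({x})", "handempty"},
                     {f"holding({x})", f"clear({y})"},
                     {f"on({x},{y})", f"clear({x})", "handempty"}))
    return acts

def bfs_plan(init, goal):
    # Breadth-first progression search; the goal test is a subset check.
    start = frozenset(init)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, pre, add, dele in ground_actions():
            if pre <= state:                       # applicable?
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

init = {"clear(a)", "clear(b)", "clear(c)",
        "ontable(a)", "ontable(b)", "ontable(c)", "handempty"}
goal = {"on(b,c)", "on(a,b)", "ontable(c)"}
plan = bfs_plan(init, goal)
```

On the "typical BW planning problem" shown earlier this finds the four-step plan pickup(b), stack(b,c), pickup(a), stack(a,b). As the slide notes, such an uninformed forward search considers many irrelevant actions and scales poorly without a heuristic.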
Regression algorithm - General process for predecessor construction - Given a goal description G - Let A be an action that is relevant and consistent - The predecessors are as follows: - Any positive effects of A that appear in G are deleted. - Each precondition literal of A is added, unless it already appears. - Any standard search algorithm can then be used to perform the search. - Terminate when the predecessor is satisfied by the initial state. - In the first-order case, satisfaction might require a substitution. Heuristics for state-space search • Neither progression nor regression is very efficient without a good heuristic. – How many actions are needed to achieve the goal? – The exact solution is NP-hard, so we find a good estimate • Two approaches to finding an admissible heuristic: – The optimal solution to the relaxed problem. • Remove all preconditions from actions – The subgoal independence assumption: The cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving the subproblems independently. Planning heuristics • Just as with search, we need an **admissible** heuristic that we can apply to planning states – Estimate of the distance (number of actions) to the goal • Planning typically uses **relaxation** to create heuristics – Ignore all or selected preconditions – Ignore delete lists (movement towards the goal is never undone) – Use state abstraction (group together “similar” states and treat them as though they are identical) – e.g., ignore fluents – Assume subgoal independence (use max cost; or if subgoals actually are independent, can sum the costs) – Use pattern databases to store exact solution costs of recurring subproblems Forward search explores irrelevant actions
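The predecessor construction above can be written down directly: check that A is relevant (some positive effect appears in G) and consistent (no delete-list literal appears in G), then delete A's positive effects from G and add its preconditions. A sketch using our own set-based encoding:

```python
def regress(goal, precond, add, delete):
    """Regress `goal` through one ground action given as (precond, add,
    delete) sets; return None when the action is irrelevant or inconsistent."""
    if not (add & goal):      # relevant: must achieve part of the goal
        return None
    if delete & goal:         # consistent: must not undo the goal
        return None
    return (goal - add) | precond

# Regress the goal on(a,b) through stack(a,b):
pred = regress({"on(a,b)"},
               precond={"holding(a)", "clear(b)"},
               add={"handempty", "on(a,b)", "clear(a)"},
               delete={"holding(a)", "clear(b)"})
# pred is {"holding(a)", "clear(b)"}: exactly what must hold beforehand
```

Because only actions whose add list intersects the goal are considered, regression ignores the irrelevant actions that inflate the forward search tree, which is the lower-branching-factor advantage noted above.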
What Can Automatic Programming Learn from Theoretical Computer Science? Colin G. Johnson Computing Laboratory University of Kent at Canterbury Canterbury, Kent, CT2 7NF, England C.G.Johnson@ukc.ac.uk Abstract This paper considers two (seemingly) radically different perspectives on the construction of software: on one hand, search-based heuristics such as genetic programming; on the other, the theories of programming which underpin mathematical program analysis and formal methods. The main part of the paper surveys possible links between these perspectives. In particular the contrast between inductive and deductive approaches to software construction is studied, and various suggestions are made as to how randomized search heuristics can be combined with formal approaches to software construction without compromising the rigorous provability of the results. The aim of the ideas proposed is to improve the efficiency, effectiveness and safety of search-based automatic programming. 1 Introduction In recent years a number of systems have been developed which apply computational intelligence techniques to the automated creation of computer programs. Many of these have been based on biologically-inspired methods such as genetic algorithms: genetic programming [3, 11, 21] (and the many variants thereon) is the most widely used of these. By looking at further details of evolutionary biology various extensions of these ideas have been created, such as grammatical evolution [30], which exploits the idea of gene expression. Other systems have used tabu search as a way of exploring program-space [6], or have developed search techniques which are most closely tied into program structure [29]. These techniques have been shown to be successful on a wide variety of problems, though the success of the various techniques on different kinds of problems is variable. 
For the purposes of this paper we shall refer to such search-based, heuristic methods of creating programs as automated programming systems. Alongside this development there continues to be substantial work in many areas of theoretical computer science concerned with understanding programs, the way in which programs can be constructed, and how properties of programs can be verified. At first these two areas of study appear to be completely at odds with each other. On one hand is the sloppy, heuristic world of GP and its fellows, typically seen as unrigorous but potentially effective at solving the sort of ill-defined, fuzzy problems which are found in “the real world”. On the other hand the theoretically rigorous work is seen as very precise and exact; however this work is often seen as not being flexible enough to be applied to realistic problems. Seen from another perspective, though, the two approaches have much in common. Human programmers often have a view of code as a brittle object where the slightest erroneous change can lead to catastrophic error. In contrast to this, both heuristic-driven automated programming and many of the theoretical approaches to programming view code-stuff as a malleable material, able to be transformed and restructured by a wide variety of transformations without breaking it. This paper surveys some of the potential for cross-fertilization between these areas. The aim is to spark interest in this area through a broad survey of ideas and to suggest topics which might provide a bridge between the theory of programming and computational intelligence techniques. The basic question addressed is this: how can an understanding of the theoretical perspectives on programming make automatic programming more effective, efficient and safe? 2 Program Induction vs. Program Deduction One way to create a unified framework in which to view these diverse approaches is to consider the relationship between specifications and programs. 
One stance towards the creation of code is that programming is the process of transforming specifications of desired behaviour into programs which carry out that behaviour. By specification we mean any means by which we can express that behaviour. This could be a formal specification written in a specification language, but could also be some informal idea of what the program is intended to achieve (an interesting discussion of what “specification” might mean is given in [32]). For a program to implement a particular specification, everything asked for in the specification must be implemented in the program, and any constraints in the specification must be respected by the program. From a traditional “formal methods” perspective this process has been viewed as a deductive problem. A set of operations is defined which describe how to transform a specification statement into a concrete statement which realizes that specification step. This can be enriched by the inclusion of operations which transform some part of the specification into some intermediate state, less abstract than the full specification but not fully executable. An example of such a program deduction process is refinement [26]. In contrast to this, the automatic programming systems described above take an inductive stance towards program creation. A program induction system consists of two components. The first is a way of measuring the closeness of programs to specifications, and the second is a way of using that information to suggest ways in which programs might be brought closer to the specification. Another way to state this is that the algorithm is searching program space for a program which matches the specification [16, 17, 29]. However program induction has its own problems. 
Typically the measure used to decide whether programs are close to their specification consists simply of running the programs on test data and measuring the distance between the solutions produced by the program and the solutions acceptable to the specification, using some metric in solution-space. Whilst this has been empirically shown to provide a powerful way of directing programs towards appropriate parts of program-space, it gives no formal confidence that the programs induced will apply anywhere other than on the test data sets used. This dual perspective allows us to place inductive and deductive techniques side-by-side and study them for commonalities and ways in which we might combine the two perspectives. A summary of the two views is given in figure 1. 3 How Might Programming Language Theory Inform Automatic Programming? This section looks at a number of areas of programming language theory and suggests ways in which these might interact with automatic programming. 3.1 Static Analysis and Model Checking Static analysis [27] is an overarching term for a range of techniques which extract information out of programs without explicitly running them on particular data sets. A typical technique is that of abstract interpretation [8, 9], in which the data processed by the program, and the various operators within the programming language which change those data, are “abstracted” into sets of properties of interest. The analysis of the program consists of tracking whether these properties are guaranteed to hold at any particular point in the program. That is, the analysis makes a conservative approximation to whether a particular property holds or not. 
A large number of different kinds of information can be extracted from programs by this kind of analysis, for example: • constraint information about relationships between variables [10] • information about the extreme values which a variable could possibly take at each point in the execution of a program [8] • usage information about whether facts vital to the solution of a problem have been used • complexity information, e.g. the number of potential paths through a piece of code [23] • performance information [28, 35] Consider for example an integer variable within a program. In a normal run of a program a particular initial value is assigned to that variable and various operators act on the variable to change that value. In a static analysis of a program, a property of the variable is followed through the program. To illustrate this we shall consider two examples of the kind of information which can be tracked through a program using a static analysis. For each example we give four pieces of information: the condition which is being tracked through the program, the values which that condition can take, the initial value which it takes when a variable is defined without giving it a value, and some examples of update rules which show how actions in the program affect the abstract variable. The first example is whether a given integer variable is guaranteed to be \( \geq 0 \). **Condition:** An integer variable \( x \) is positive or zero. **Values:** True (T) or don’t know (D). **Initial value:** D. **Update rules:** Some examples: - If a positive constant is assigned to \( x \) then value becomes T. - If \( x \) becomes equal to something which is known to be positive-or-zero then value becomes T. - If an integer is added to \( x \) then value becomes \( D \). - If value is currently T and something which is known to be positive-or-zero is added to \( x \) then value remains at T. 
- If \( x \) becomes equal to the absolute value of any expression then value becomes T. - If any value is subtracted from \( x \) then value becomes \( D \). \[ \ldots \text{and so on} \ldots \] Here is another example: in this example we track the upper and lower bounds on the integer value. **Condition:** The upper and lower bounds an integer variable \( x \) can take. **Values:** A pair of integers \( x_{\min} \) and \( x_{\max} \), where we can be confident that \( x \in [x_{\min}, x_{\max}] \). **Initial value:** \([−\infty, +\infty]\). **Update rules:** Some examples: - If a constant value \( y \) is assigned to \( x \), then \( x_{\min} := y \) and \( x_{\max} := y \). - If a constant \( y \) is added to \( x \), then \( x_{\min} := x_{\min} + y \) and \( x_{\max} := x_{\max} + y \). - If the absolute value operator is applied to \( x \), then if \( x_{\min} < 0 \) we can make \( x_{\min} := 0 \). - If a variable which is known to be positive-or-zero is added to \( x \) then \( x_{\max} := \infty \). \[ \ldots \text{and so on} \ldots \] In developing such an analysis program we begin with simple, clearly true statements, and gradually “refine” them to produce more accurate statements. E.g. in the first example above we had the following statement: *If any value is subtracted from \( x \) then [the positive-or-zero] value becomes D.* This is a conservative approximation: consider the following code. ```c int x, y input a value in the range [0, 20] into x input a value in the range [0, 5] into y x := x + 10 x := x - y ``` Applying the rule above makes a true statement, as long as we remember that “don’t know” doesn’t mean “can’t know”. 
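The interval bookkeeping for this example is mechanical enough to execute. A toy sketch in Python (our own encoding; a real abstract interpreter would also handle branches, loops, and widening):

```python
def const(v):
    # A constant is the point interval [v, v].
    return (v, v)

def iadd(a, b):
    # [a_min + b_min, a_max + b_max]
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    # Subtraction: low minus the other's high, high minus the other's low.
    return (a[0] - b[1], a[1] - b[0])

def positive_or_zero(a):
    # T only when the lower bound proves it; otherwise "don't know".
    return "T" if a[0] >= 0 else "D"

x = (0, 20)              # input a value in the range [0, 20] into x
y = (0, 5)               # input a value in the range [0, 5] into y
x = iadd(x, const(10))   # x := x + 10  ->  [10, 30]
x = isub(x, y)           # x := x - y   ->  [10 - 5, 30 - 0] = [5, 30]
```

Since the final lower bound is 5 >= 0, the interval analysis can answer T for the positive-or-zero property here, matching the hand analysis of this fragment.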
However this update rule can be refined as follows: *If any value is subtracted from \( x \) then value becomes \( D \), unless \( x_{\min} − y_{\max} > 0 \), in which case value becomes \( T \).* Now note that by the end of line 4 we know that \( x \in [10, 30] \) and \( y \in [0, 5] \), so \( x_{\min} − y_{\max} > 0 \), therefore we can assign \( T \) as the positive-or-zero value for \( x \). By using such techniques we can discover whether particular properties hold, either throughout a program or at the end of the program. This is typically seen as a way of deducing information about a program which is in the debugging or testing stage. However in the context of automatic programming we can see these techniques as ways of measuring the fitness of programs [18, 19]. If we can specify certain desired properties of programs in terms of such properties then a static analysis can potentially give a guarantee that a program must satisfy that property regardless of input. This would seem to be particularly important in imposing safety constraints. In many situations it is important that a variable (whether a variable in a program or a derived quantity) is bounded within a certain range. For example a robot may be constrained so that no movement takes it outside its working area, or the temperature of some process must not exceed some critical value. The fitness function in these cases is not based around testing but based around checks to confirm whether these constraints are guaranteed to hold. Importantly there is the potential to use multi-criterion optimization to combine essential fitness constraints with desirable, data-driven features. Many problems can be partially specified by a set of formal statements about the variables and their relationships, whilst other aspects of the problem can only be expressed as data.
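The interval-tracking analysis and the refined positive-or-zero rule can be made concrete with a small sketch. The following Python code is an illustration only (the class and method names are invented; the paper does not prescribe an implementation); it replays the worked example with \( x \in [0, 20] \) and \( y \in [0, 5] \):

```python
# Toy abstract interpretation over intervals [lo, hi]. Each update rule
# from the text becomes a method that returns new, still-sound bounds.
import math

class Interval:
    def __init__(self, lo=-math.inf, hi=math.inf):
        # initial value: [-inf, +inf], i.e. we know nothing yet
        self.lo, self.hi = lo, hi

    def add_const(self, c):
        # rule: adding a constant c shifts both bounds by c
        return Interval(self.lo + c, self.hi + c)

    def add(self, other):
        # rule: adding a variable in [o.lo, o.hi] widens accordingly
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def sub(self, other):
        # rule: subtracting a variable: worst case subtracts other.hi
        # from the lower bound and other.lo from the upper bound
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def positive_or_zero(self):
        # refined rule: answer T only when the bounds guarantee x >= 0
        return 'T' if self.lo >= 0 else 'D'

# The worked example from the text:
x = Interval(0, 20)          # input a value in [0, 20] into x
y = Interval(0, 5)           # input a value in [0, 5] into y
x = x.add_const(10)          # x := x + 10  ->  x in [10, 30]
x = x.sub(y)                 # x := x - y   ->  x in [5, 30]
print(x.lo, x.hi, x.positive_or_zero())   # prints: 5 30 T
```

Because \( x_{\min} - y_{\max} = 10 - 5 > 0 \) held before the subtraction, the analysis can still answer T afterwards, exactly as the refined rule promises.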
Combining satisfaction of a number of statically checked constraints together with optimization of some data-defined features into a multicriterion fitness function could be a way of satisfying both of these simultaneously. Many of the above ideas could also potentially be implemented using the ideas of model-based systems and qualitative reasoning [22, 34]. These techniques are also concerned with tracking properties of variables through a program. A related topic is model checking [7]. One important check which model checking provides is a check as to whether a temporal logic formula holds over a given system, specified using e.g. a finite state machine or Petri net. Again this has the potential to be used as a fitness driver for the creation of such systems—the idea of evolving finite state machines has recently undergone a revival [5, 36] after being one of the earliest applications of evolution-like algorithms [14]. Interestingly the output of a model checking procedure which fails is a specific counterexample to the statement; some automated analysis of this could be carried out to determine which direction in the search space might prove profitable. Also it may be possible to make a graded sequence of logical formulas which guide evolution towards the solution; the final desired set of constraints could be relaxed into a set of less-strict constraints, and the strictness annealed with time until the final constraints are satisfied. 3.2 Specifications and Refinement A substantial area of theory which lies at the heart of formal methods is the idea of refining a specification. A refinement [13, 26] of a specification is the replacement of various abstract statements with more concrete statements. The set of allowable replacements is formalized in a set of rules called a refinement calculus.
As these rules are applied repeatedly, the abstract parts of the specification are eventually replaced by concrete, executable statements which eventually (provided the refinement is successful) provide an executable program. There are several points in this process where a search-based automatic programming approach might be used. Firstly it may be possible to use a search algorithm to decide which sequence of refinements to apply. Less obviously this may provide a way of combining data-driven and specification-driven aspects of a program. One problem with automated refinement of programs from a specification is that all of the information about the program must be contained in the specification. In many cases problems have a dual nature: we are able to provide a specification for some parts of the desired functionality, whereas the remaining functionality is provided by data. Indeed, as pointed out by Partridge [31, 33], some problems (or parts of problems) are defined by sets of data. One of the difficulties in this sort of problem is in assigning fitness values to partially-concrete programs. We cannot execute these in a conventional fashion as parts of the specification remain abstract. One promising approach to this is the executable specifications of Barnett et al. [4]. 3.3 Program Transformation and Mutation Testing One of the important features of automatic programming is the need to make meaningful transformations of programs. For example at the core of GP are recombination and (typically less importantly) mutation operators. In the theory of programming there is also much work about how programs can be transformed [25]. These two kinds of transformation have somewhat different aims.
In the theory of programming the transformations are typically applied with the aim of preserving the semantics of the program; the aim of the transformation is to make the program more specialized and therefore faster on a particular problem, or to overcome some hardware constraints by rearranging the structure of the program so that it compiles in such a way that it respects the underlying hardware structure (an example is given in [12]). In automatic programming the aim is to transform a program so that it makes a move through the space of program semantics; however that move is not an arbitrary move, we want to ensure that the move has certain properties. For example the desired outcome from a mutation operation is a “small move” to a program which is close in meaning to the original program, whilst a recombination operation is designed to produce a child program which includes some aspects of the semantics of the two parent programs. Can we make use of the understanding which the theory of program transformations gives us about how changes to code affect the meaning of programs to construct appropriate move operators? A related area of interest is partial evaluation of programs [20, 24]. The aim of partial evaluation is to transform a more general program into a more specialized one, typically with the aim of making the program more efficient. A toy example would be taking a program which takes two variables as input, fixing the value of one of the input variables, substituting an appropriate value throughout the program, then recompiling the program with the fixed value. This would mean that the program would run much more efficiently on the restricted input space; many of the calculations and branching decisions which would have to be taken (relatively slowly) at runtime will have already been taken at compile-time and so will execute relatively faster. 
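The toy example above can be sketched in code. The following Python fragment is illustrative only (the function names and the source-generation approach are invented; real partial evaluators transform program text rather than building strings): it fixes one input of a generic two-argument function and "recompiles" a specialized version with the loop unrolled away.

```python
# Partial evaluation sketch: specialize power(x, n) for a fixed n.
def power(x, n):
    # generic version: the loop test happens at runtime, n times
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    # "compile-time" work: since n is known, unroll the loop into
    # straight-line code with no runtime loop or branch on n
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)           # stands in for recompilation
    return env[f"power_{n}"]

power_5 = specialize_power(5)
print(power_5(2))            # prints 32, same as power(2, 5)
```

The specialized `power_5` computes `x * x * x * x * x` directly: the decision "how many times to multiply", which the generic version makes at runtime, has already been made when the specialized program was generated.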
This eliminates a traditional tradeoff between writing generic software which carries a lot of runtime baggage with it and the human cost of writing specialized code for each area of application. This has been applied e.g. to the specialization of Fast Fourier Transforms for a particular application area [15] and to the specialization of a ray tracing program with respect to various inputs, e.g. the scene, the lighting conditions, and the point of view [2]. Again there are many potential connections between automatic programming and partial evaluation. Can we use search algorithms to automatically specialize a given program so that it is appropriate for a class of problems implicitly defined by a set of training data? Can a set of suitably generic algorithms be combined together and specialized appropriately in a GP-like system? 4 Randomness and Reliability One criticism of heuristic methods is their dependency on randomized search as part of their discovery engine. However the use of randomization doesn’t necessarily rule out the production of end-products which rigorously satisfy a set of requirements. The above approaches ensure that the end results are still reliable despite the use of randomness in a number of ways. Some of them begin from an abstract but correct version of the problem, and ensure that the search process never makes a move outside this correct region of search space. Others use conservative approximations to check whether the programs generated by the system fall inside the requirements. An alternative approach would be to apply search algorithms simultaneously to the solutions themselves and to proofs of required properties about those solutions. 5 Conclusions We have surveyed several different approaches to the formal analysis of programs from the perspective of automatic programming. A number of ways in which these two seemingly diverse subjects can be combined are suggested. 
In particular many of these techniques provide new ways of defining fitness functions, particularly for safety-critical systems, and they can offer insights into how moves in a search algorithm change the meaning of programs. Acknowledgements Thanks to Andy King, Simon Thompson, John Clark, James Foster and Howard Bowman for conversations on these topics. References
22. Classical Component Systems – CORBA Prof. Dr. Uwe Aßmann Technische Universität Dresden Institut für Software- und Multimediatechnik http://st.inf.tu-dresden.de 15-1.0, May 12, 2015 1. Basics 2. Dynamic Call 3. Traded Call 4. Evaluation according to our criteria list 5. Appendices Obligatory Reading - ISC, 3.1-3.3 - Szyperski 2nd edition, Chap 13 - http://java.sun.com/javase/6/docs/technotes/guides/idl/ - http://java.sun.com/developer/technicalArticles/releases/corba/ Literature - R. Orfali, D. Harkey: Client/Server programming with Java and Corba. Wiley&Sons. Easy to read. - CORBA 3.1 specification: http://www.omg.org/spec/CORBA/3.1/ - Jens-Peter Redlich, CORBA 2.0 / Praktische Einführung für C++ und Java. Publisher: Addison-Wesley, 1996. ISBN: 3-8273-1060-1 22.1 Basic Mechanisms **CORBA: Common Object Request Broker Architecture®** - Founding year of the OMG (Object Management Group): 1989 - Goal: plug-and-play components everywhere - Corba 1.1 1991 (IDL, ORB, BOA) - ODMG-93 (Standard for OO-databases) - Corba 2.0 1995, later 2.2 and 2.4 - Corba 3.0 1999 - Corba is large - Object Request Broker – 2000 pages of specification - Object Services – 300 pages - Common Facilities – 150 pages Ingredients of CORBA Component Model - Components are classes and objects, i.e., similar to object-oriented software - In CORBA 3.0, the CCM has additionally been introduced - Components have more component secrets - Language interoperability by uniform interface description - Location transparency - Name transparency - Transparent network protocols - Standardization - CORBA Services - CORBA Facilities - Horizontal vs.
vertical Composition Techniques - Adaptation by stubs and skeletons - CORBA MOF for metamodelling OMA *(Object Management Architecture)* - A software bus, based on the Mediator (Broker) design pattern - Coupled by decorator-connectors Diagram: - Application Interfaces - Domain Interfaces - Common Facilities Object Request Broker Object Services The Top Class CORBA::Object - The class CORBA::Object defines a component model - The class must be inherited by all objects in the application - CORBA supports reflection and introspection: - `get_interface` delivers a reference to the entry in the interface repository - `get_implementation` a reference to the implementation - Reflection works by the interface repository (list_initial_references from the CORBA::ORB interface). Problem: Multiple Inheritance of CORBA Object - CORBA::Object includes code into a class - Many languages offer only single inheritance - Application super class must be a delegatee - Only some languages offer mixin inheritance (mixin layers), such as Scala, C# 4.0, Eiffel Basic Connections in CORBA - CORBA composes components with connections - Static method call with static stubs and skeletons - Local or remote is transparent (compare to EJB!)
- Polymorphic call - Local or remote - Event transmission - Callback (simplified Observer pattern) - Dynamic invocation (DII, request broking, interpreted call, symbolic call) - Searching services dynamically in the web (location transparency of a service) - Trading - Find services in a yellow pages service, based on properties - Important: CORBA is language-heterogeneous, i.e., offers these services for most of the main-stream languages 22.2 Dynamic Call Connector (with Object Request Broking) (Reified or interpreted call) **Dynamic Call Connector (Request Broking)** - CORBA *dynamic call* is a *reified call (interpreted call)*, i.e., a reflective call with a symbolic name and arguments - Without knowing that the service exists - Services can be dynamically exchanged, brought into the play a posteriori - Without recompilation of clients, nor regeneration of stubs - Binding of names to addresses is dynamic - Requires descriptions of semantics of service components - For identification of services - Metadata (descriptive data): catalogues of components (interface repository, implementation repository) - Property service (later) - and a mediator, that looks for services: the ORB Object Request Broker (ORB) - For a dynamic call, the ORB must be involved - The ORB is a mediator (design pattern) between client and server - Hides the environment from clients - Can talk to other ORBs, also on the web CORBA::ORB init object_to_string string_to_object BOA_init list_initial_services resolve_initial_references get_default_context create_environment .... 
ORB Activation Client object ORB_init CORBA BOA_init ORB list_initial_services Initializes the mediator Initializes the server BOA resolve_initial_references Delivers service names (as strings) Delivers object references to server objects from service names Requesting a Service via the ORB - Reflective calls - Building a call object (Request) - Adding arguments - Invoking - Polling, reading ```c++ // dynamic call create_list create_operation_list add_item add_value invoke poll_response send get_response delete .... ``` Protocol of Dynamic Call (DII) **ORBs** - **Java-based** - IBM WebSphere - IONA Orbix: In Java, ORBlets possible - BEA WebLogic - Visibroker (in Netscape) - Voyager (ObjectSpace) (with Mobile Agents) - free: JacORB, ILU, Jorba, DynaORB - **C-based** - ACE ORB TAO, University Washington (with trader) - Linux ORBIT (gnome) - Linux MICO - **Python-based** - fnorb - **http://www.omg.org** 22.3 Trader-Based Call The foundation of service-oriented architecture (SOA) Beyond Dynamic Call: Service Call with the Trader Service - A service call is a call, not based on naming but on semantic attributes, published properties - Requires a yellow page directory of services - Service-oriented architectures (SOA), requires matchmaking of services - The ORB resolves operations still based on naming (with the name service). 
The trader, however, resolves services without names, only based on properties and policies - The trader gets offers from servers, containing new services Diagram: - Trader - Service - Client - Export functionality - Import functionality - Interact Mediator pattern, mediator lets clients look up services Service Offers for Trader - Service offer (IOR with properties (metadata)) - Properties describe services - Are used by traders to match services to queries - *not* facet-based, one-dimensional - Dynamic property - A property can be queried dynamically by the trader of the service - The service-object can determine the value of a dynamic property anew - Matching with the standard constraint language - Boolean expressions about properties - Numeric and string comparisons Traders Provide Service Hopping - If a trader doesn’t find a service, it calls neighbor traders - Design pattern Chain of Responsibility - Graph of traders - Links to neighbors via TraderLink - TraderLink filters queries and manipulates them via policies Modification of Queries - Policies parameterize the behaviour of the traders and the TraderLinks - Filters, i.e., values, modifying the queries: - max_search_card: maximum cardinality for the ongoing searches - max_match_card: maximum cardinality for matchings - max_hop_count: maximum search depth in the trader graph (Diagram: modification of queries by policies) Interfaces Trading Service - Basic interfaces - Lookup (query) - Register (for export, retract, import of services) - Admin (info about services) - Link (construction of trader graph) - What does a lookup query look like? - `Lookup.Query(in ServicetypeName, in Constraint, in PolicySeq, in SpecifiedProperties, in howTo, out OfferSequence, offerIterator)` - Unfortunately, no faceted matchmaking possible!
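To make the matchmaking idea concrete, here is a toy sketch in Python (not CORBA code; the offer format, the property names, and the `query` function are invented for illustration, and the real OMG constraint language is far richer than a lambda): a trader holds service offers described by properties and answers a query given as a boolean constraint, honoring a `max_match_card`-style policy.

```python
# Toy trader: offers are property dictionaries, a query is a boolean
# constraint over properties plus a cardinality policy.
offers = [
    {"name": "PrinterA", "cost_per_page": 0.05, "color": True,  "dpi": 600},
    {"name": "PrinterB", "cost_per_page": 0.02, "color": False, "dpi": 300},
    {"name": "PrinterC", "cost_per_page": 0.04, "color": True,  "dpi": 1200},
]

def query(offers, constraint, max_match_card=10):
    # constraint plays the role of the trader's boolean expression
    # over properties; max_match_card mimics the policy of the same name
    matches = [o for o in offers if constraint(o)]
    return matches[:max_match_card]

# "Find a color printer cheaper than 0.05 per page" -- no service name
# is mentioned anywhere, only properties, which is the point of trading.
hits = query(offers, lambda o: o["color"] and o["cost_per_page"] < 0.05)
print([o["name"] for o in hits])   # prints: ['PrinterC']
```

Service hopping would correspond to re-issuing the same `query` against the offer lists of neighbor traders when the local list yields nothing, decrementing a hop budget like `max_hop_count` each time.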
CORBA Trader Types - **Lookup**: query trader - **Lookup, Register**: simple trader - **Lookup, Register, Admin**: standalone trader - **Lookup, Register, Admin, Link**: linked trader (social trader) - **Lookup, Register, Admin, Proxy**: proxy trader (substitute trader) - **Lookup, Register, Admin, Link, Proxy**: full-service trader Corba 3.0 - Provides the well-defined packaging for producing components - CORBA Component Model (CCM): similar to EJB - Message Service MOM: Objects have asynchronous buffered message queues - Language mappings avoid IDL - Generating IDL from language specific type definitions - C++2IDL, Java2IDL, … - XML integration (SOAP messages) - Scripting (CORBA script), a composition language 22.5 Evaluation of CORBA as composition system Component Model ► Mechanisms for secrets and transparency: very good ■ Interface and Implementation repository ■ Component language hidden (interoperability) ■ Life-time of service hidden ■ Identity of services hidden ■ Location hidden ► No parameterization ► Standardization: quite good! ■ Services, application services are available ■ On the other hand, some standards are FAT ■ Technical vs. application specific vs business components: ■ ..
but for business objects, the standards must be extended (vertical facilities) (that’s where the money is) **Composition Technique** - **Mechanisms for connection** - Mechanisms for adaptation - Stubs, skeletons, server adapters - Mechanisms for glueing: marshalling based on IDL - **Mechanisms for aspect separation** - Multiple interfaces per object - Facade classes/objects (design pattern facade) - **Nothing for extensions** - **Mechanisms for meta-modeling** - Interface Repositories with type codes - Implementation repositories - Dynamic call and traded call are reflective and introspective - **Scalability** - Connections cannot easily be exchanged (except static local and remote call) Composition Language - Weak: CORBA scripting provides the facility to write glue code, but only black-box composition CORBA - Connection - Extensibility - Aspect Separation - Scalability - Adaptation - Product quality - Software process - Metacomposition Prof. U. Aßmann, CBSE Appendix Basic Composition Technique of CORBA (Basic CORBA Connections) (self study) Static CORBA Call, Local or Remote - Advantage: methods of the participants are statically known - Indirect call by stub and skeletons, without involvement of an ORB - Supports distribution (exchange of local call in one address space to remote call is very easy) - Inherit from CORBA class - Write an IDL spec - No search for service objects, rather fast - Better type check, since the compiler knows the involved types - The call goes through the server object adapter (server decorator) - Basic (server) object adapter (BOA) - Portable (server) object adapter (POA) - This hides whether the server is transient or persistent The CORBA Outer Skeleton: Basic Object Adapter BOA - The BOA is a real adapter (no decorator) - The BOA hides the life time of the server object (activation: start, stop) - Persistency - The BOA is implemented in every ORB, for minimal service provision - The BOA maintains an implementation repository
(component registry) - It supports non-object-oriented code **CORBA::BOA** - create - get_id - dispose - set_exception - impl_is_ready - obj_is_ready - change_implementation - deactivate_impl - deactivate_obj Server Site Server / Object Implementation Basic Object Adapter (BOA) (Outer Skeleton) IDL-generated Skeleton Network Object Activation on the Server through a BOA Server → object1 ↓ create ↓ obj_is_ready ↓ impl_is_ready ↓ deactivate_obj ↓ deactivate_impl object2 ↓ get_id ↓ obj_is_ready ↓ deactivate_obj ↓ deactivate_impl CORBA::BOA ### Portable Object Adapter POA The POA is an evolution of the BOA in CORBA 3.0: - One per server, serving many objects - Nested POAs possible, with nested name spaces User policies for object management: - User-written instance managers for management of object instances <table> <thead> <tr> <th>Method</th> </tr> </thead> <tbody> <tr> <td>CORBA::POA</td> </tr> <tr> <td>create_POA</td> </tr> <tr> <td>find_POA</td> </tr> <tr> <td>create_reference</td> </tr> <tr> <td>dispose</td> </tr> <tr> <td>set_exception</td> </tr> <tr> <td>impl_is_ready</td> </tr> <tr> <td>obj_is_ready</td> </tr> <tr> <td>change_implementation</td> </tr> <tr> <td>activate_object</td> </tr> <tr> <td>deactivate_object</td> </tr> </tbody> </table> Object Adapters Support Different Server Life-Time Models - **Common server process (shared server)** - Several objects reside in one process on the server; the BOA initializes them as threads with common address space (common apartment) - deactivate_impl, impl_is_ready, obj_is_ready are mapped directly to thread functions - **Separate server process (unshared server)** - For every object an own process - **Server-per-request (session server)** - Every request generates a new process - Similar to Session EJB - **Persistent server** - Another application stores the objects (e.g., a data base). 
- The BOA passes on the queries - Similar to Entity Bean Callback Connectors with the Callback Service - The Callback pattern is a simplified Observer pattern - Registration and notification, but not status update - Callback function registration - Register a procedure variable, a closure (procedure variable with arguments), or a reference to an object at the subject, the server - Callback works for all languages, not only object-oriented ones ``` Client registerCallback() Client2 riseEvent() callCallback() Server (subject) return() signal() ``` Event Connections - Most flexible way of communication (also called messages) - Asynchronous communication - Works for every CORBA language - Receiver models - **Unicast**: one receiver - **Multicast**: many receivers - **Dynamically** varying receivers - **Push model**: PushConsumer/PushSupplier: object delivers event with push, event is shipped automatically - **Pull model**: PullSupplier/PullConsumer: object waits for event with pull - Synchronous or asynchronous - Untyped generic events, or typed by IDL - **Event channels** as intermediate buffers - Channels buffer, filter, and map pull to push - Advantage: - Asynchronous Working in the Web (with IIOP and dynamic Call) - Attachment of legacy systems interesting for user interfaces, network computing etc. - Disadvantage: Very general interface Appendix Dynamic Call Connector (with Object Request Broking) Code example (self study) // Wow, a complex protocol!!
```c++
CORBA::ORB_ptr orb;

int main(int argc, char* argv[]) {
  orb = CORBA::ORB_init(argc, argv, ORBID);
  // alternative description of service
  CosNaming::NamingContext_ptr naming =
      CosNaming::NamingContext::_narrow(
          orb->resolve_initial_references("NameService"));
  CORBA::Object_ptr obj;
  try {
    obj = naming->resolve(mk_name("dii_smpl"));
  } catch (const CORBA::Exception&) {
    cerr << "not registered" << endl;
    exit(1);
  }
  // construct arguments
  CORBA::Any val1; val1 <<= (CORBA::Short) 123;
  CORBA::Any val2; val2 <<= (CORBA::Short) 0;
  CORBA::Any val3; val3 <<= (CORBA::Short) 456;
  // Make request (short form)
  CORBA::Request_ptr rq = obj->_request("op");
  // Create argument list
  rq->arguments() = orb->create_list();
  rq->arguments()->add_value("arg1", val1, CORBA::ARG_IN);
  rq->arguments()->add_value("arg2", val2, CORBA::ARG_OUT);
  rq->arguments()->add_value("arg3", val3, CORBA::ARG_INOUT);
  // Start request (synchronously)
  cout << "start request" << endl;
  rq->invoke();
  // analyze result
  CORBA::Short rslt;
  if (*(rq->result()->value()) >>= rslt) {
    CORBA::Short _arg2, _arg3;
    *(rq->arguments()->item(1)->value()) >>= _arg2;
    *(rq->arguments()->item(2)->value()) >>= _arg3;
    cout << " arg2= " << _arg2 << " arg3= " << _arg3
         << " return= " << rslt << endl;
  } else {
    cout << "result has unexpected type" << endl;
  }
}
```

public class Client { public static void main(String[] args) { if (args.length != 2) { System.out.println("Usage: vbj Client <carrier-name> <aircraft-name>"); return; } String carrierName = args[0]; String aircraftName = args[1]; org.omg.CORBA.Object carrier = null; org.omg.CORBA.Object aircraft = null; org.omg.CORBA.ORB orb = null; try { orb = org.omg.CORBA.ORB.init(args, null); } catch (org.omg.CORBA.SystemException se) { System.err.println("ORB init failure " + se); System.exit(1); } { // scope try { carrier = orb.bind("IDL:Ship/AircraftCarrier:1.0", carrierName, null, null); } catch (org.omg.CORBA.SystemException se) { System.err.println("ORB init failure " + se); System.exit(1); } org.omg.CORBA.Request request =
carrier._request("launch"); request.add_in_arg().insert_string(aircraftName); request.set_return_type(orb.get_primitive_tc( org.omg.CORBA.TCKind.tk_objref)); request.invoke(); aircraft = request.result().value().extract_Object(); } { // scope org.omg.CORBA.Request request = aircraft._request("codeNumber"); request.set_return_type(orb.get_primitive_tc( org.omg.CORBA.TCKind.tk_string)); request.invoke(); String designation = request.result().value().extract_string(); System.out.println("Aircraft " + designation + " is coming your way"); } } } Server Implementation // Building Distributed Object Applications with CORBA // Infowave (Thailand) Co., Ltd. // http://www.waveman.com // Jan 1998 public class Server { public static void main(String[] args) { org.omg.CORBA.ORB orb = null; try { orb = org.omg.CORBA.ORB.init(args, null); } catch (org.omg.CORBA.SystemException se) { System.err.println("ORB init failure " + se); System.exit(1); } org.omg.CORBA.BOA boa = null; try { boa = orb.BOA_init(); } catch (org.omg.CORBA.SystemException se) { System.err.println("BOA init failure " + se); System.exit(1); } Ship.AircraftCarrier carrier = new AircraftCarrierImpl("Nimitz"); try { boa.obj_is_ready(carrier); } catch (org.omg.CORBA.SystemException se) { System.err.println("Object Ready failure " + se); System.exit(1); } System.out.println(carrier + " ready for launch !!!"); try { boa.impl_is_ready(); } catch (org.omg.CORBA.SystemException se) { System.err.println("Impl Ready failure " + se); System.exit(1); } } } Example: Time Server in Java - On one machine; 2 address spaces (processes) - Call provides current time - Contains - IDL - Server - Starts ORB - Initializes Service - Gives IOR to the output - Client - Takes IOR - Calls service

```idl
// TestTimeServer.idl
module TestTimeServer {
  interface ObjTimeServer {
    string getTime();
  };
};
```

// TestTimeServerImpl.java - Server Skeleton import CORBA.*; class ObjTestTimeServerImpl extends TestTimeServer.ObjTimeServer_Skeleton { // generated
from IDL // Variables // Constructor // Method (Service) Implementation public String getTime() throws CORBA.SystemException { return "Time: " + currentTime; } }; Server Implementation ```java // TimeServer_Server.java import CORBA.*; public class TimeServer_Server{ public static void main(String[] argv){ try { CORBA.ORB orb = CORBA.ORB.init(); ObjTestTimeServerImpl obj = new ObjTestTimeServerImpl(...); System.out.println(orb.object_to_string(obj)); } catch (CORBA.SystemException e){ System.err.println(e); } } }; ``` Client Implementation (Simpler Protocol) // TimeServer_Client.java import CORBA.*; public class TimeServer_Client{ public static void main(String[] argv){ try { CORBA.ORB orb= CORBA.ORB.init(); ... CORBA.Object obj = orb.string_to_object(argv[0]); ... TestTimeServer.ObjTimeServer timeServer = TestTimeServer.ObjTimeServer_var.narrow(obj); ... System.out.println(timeServer.getTime()); } catch (CORBA.SystemException e){ System.err.println(e); } } } Execution // starting server C:\> java TimeServer_Server IOR:00000000000122342435 ... // starting client C:\> java TimeServer_Client IOR:00000000000122342435 ... Time: 14:35:44 Appendix Corba Services (optional material) Literature Overview on Corba Services - Services provide functionality a programming language might not provide (e.g., Cobol, Fortran) - 16+ standardized service interfaces (i.e., a library) - Standardized, but status of implementation different depending on producer - Object services - Deal with features and management of objects - Collaboration services - Deal with collaboration, i.e., object contexts - Business services - Deal with business applications - The services serve for criterion M-3, standardization. They are very important to increase reuse. - Remember, they are available for every language, and on distributed systems!
**Object Services: Rather Simple**

- **Name service (directory service)**
  - Records server objects in a simple tree-like name space
  - (Is a simple component system itself)
- **Lifecycle service (allocation service)**
  - Not automatic; the semantics of deallocation are undefined
- **Property service (feature service for objects)**
- **Persistency service (storing objects in databases)**
- **Relationship service to build interoperable relations and graphs**
  - Supports the standard relations reference and containment
  - Divided into the standard roles contains, containedIn, references, referenced
- **Container service (collection service)**

Collaboration Services
- Communication services
  - Resemble connectors in architecture systems, but cannot be exchanged for one another
  - Event service
    - Push model: the components push events into the event channel
    - Pull model: the components wait at the channel and empty it
  - Callback service
- Parallelism
  - Concurrency service: locks
  - Object transaction service (OTS): flat transactions on object graphs
  - Nested transactions?
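The push and pull delivery styles of the event service can be made concrete with a toy in-process channel. This is an illustrative sketch in plain Java, not the CORBA event service interfaces; all class and method names here are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy event channel illustrating the push vs. pull consumer models.
public class ToyEventChannel {
    public interface PushConsumer { void push(String event); }

    private final List<PushConsumer> pushConsumers = new ArrayList<>();
    private final Queue<String> buffer = new ArrayDeque<>();

    public void connectPushConsumer(PushConsumer c) { pushConsumers.add(c); }

    // Supplier side: the push model delivers immediately; the pull model buffers.
    public void publish(String event) {
        for (PushConsumer c : pushConsumers) c.push(event); // push: channel calls consumer
        buffer.add(event);                                  // pull: consumer empties channel later
    }

    // Pull model: the consumer asks the channel; null if nothing is pending.
    public String tryPull() { return buffer.poll(); }

    public static void main(String[] args) {
        ToyEventChannel channel = new ToyEventChannel();
        List<String> received = new ArrayList<>();
        channel.connectPushConsumer(received::add);
        channel.publish("launch");
        System.out.println(received);           // delivered via the push model
        System.out.println(channel.tryPull());  // the same event via the pull model
    }
}
```

In the push model the channel drives the consumer; in the pull model the consumer drains the channel at its own pace, which is what the slides mean by "the components wait at the channel and empty it".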
Business Services
► Trader service
■ Yellow Pages; localization of services
► Query service
■ Search for objects with attributes using OQL or SQL (ODMG-93)
► Licensing service
■ For application providers (application servers)
■ License managers
► Security service
■ Uses SSL and other basic services

Example: CORBA Interoperable Object Reference (IOR)
- A unique key for an object
- Uniquely mapped per language (for all ORBs)
- Hides the object references of programming languages
- Consists of:
  - Type name (code), i.e., an index into the Interface Repository
  - Protocol and address information (e.g., TCP/IP, port number, host name); could support more than one protocol
  - Object key:
    - Opaque data readable only by the generating ORB (a pointer)
    - Object adapter (decorator) name (for the BOA)

IOR Example: [figure] a client holds an IOR for "TimeServer, Version 1.0" carrying IIOP address information (host iiop.my.net, port 1234) and an object key naming object adapter OA 2 and object 0x0003; the server at iiop.my.net:1234 hosts objects 0x0001 and 0x0002 behind OA 1 (the BOA) and object 0x0003 behind OA 2.

Object Services: Names
- A binding associates a name with an object in a name space (directory, scope, naming context)
- A name space is an associative array holding a set of bindings of names to values
- Name spaces are recursive, i.e., they can reference each other and build name graphs
- Others: Active Directory, LDAP
- The representation of a name is based on abstract syntax, not on the concrete syntax of an operating system or URL
- A name is a tuple (Identifier, Kind): the Identifier is the actual name; the Kind tells how the named thing is represented (e.g., c_source, object_code, executable, postscript, ...)
- Names are created through a library (design pattern: Abstract Factory)

Name Service CosNaming (CosNaming::NamingContext), main operations:
- bind(in Name n, in Object obj) // associate a name
- rebind(in Name n, in Object obj)
- bind_context / rebind_context
- mk_name(String s)
- Object resolve(in Name n)
- unbind(in Name n) // disassociate a name
- NamingContext new_context()
- NamingContext bind_new_context(in Name n)
- void destroy()
- void list(..)
```cpp
// CosNaming module (IDL)
module CosNaming {
    struct NameComponent {
        string id;
        string kind;
    };
    typedef sequence<NameComponent> Name;

    enum BindingType { nobject, ncontext };
    struct Binding {
        Name binding_name;
        BindingType binding_type;
    };
    typedef sequence<Binding> BindingList;

    interface BindingIterator;

    interface NamingContext {
        enum NotFoundReason { missing_node, not_context, not_object };
        exception NotFound { NotFoundReason why; Name rest_of_name; };
        exception CannotProceed { NamingContext cxt; Name rest_of_name; };
        exception InvalidName {};
        exception AlreadyBound {};
        exception NotEmpty {};

        void bind(in Name n, in Object obj)
            raises(NotFound, CannotProceed, InvalidName, AlreadyBound);
        void rebind(in Name n, in Object obj)
            raises(NotFound, CannotProceed, InvalidName);
        void bind_context(in Name n, in NamingContext nc)
            raises(NotFound, CannotProceed, InvalidName, AlreadyBound);
        void rebind_context(in Name n, in NamingContext nc)
            raises(NotFound, CannotProceed, InvalidName);
        Object resolve(in Name n)
            raises(NotFound, CannotProceed, InvalidName);
        void unbind(in Name n)
            raises(NotFound, CannotProceed, InvalidName);
        NamingContext new_context();
        NamingContext bind_new_context(in Name n)
            raises(NotFound, AlreadyBound, CannotProceed, InvalidName);
        void destroy()
            raises(NotEmpty);
        void list(in unsigned long how_many,
                  out BindingList bl, out BindingIterator bi);
    };

    interface BindingIterator {
        boolean next_one(out Binding b);
        boolean next_n(in unsigned long how_many, out BindingList bl);
        void destroy();
    };
};
```
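To make the CosNaming concepts tangible, here is a toy in-memory name space in Java. It mirrors the (id, kind) name components, bind/resolve/unbind, and recursive naming contexts described above, but it is an illustrative sketch, not the real CosNaming API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy name space illustrating CosNaming's ideas: names are (id, kind) tuples,
// a context is an associative array of bindings, and contexts can themselves
// be bound inside contexts (recursive name spaces).
public class ToyNamingContext {
    public record NameComponent(String id, String kind) {}

    private final Map<NameComponent, Object> bindings = new HashMap<>();

    public void bind(NameComponent n, Object obj) {
        if (bindings.containsKey(n)) throw new IllegalStateException("AlreadyBound");
        bindings.put(n, obj);
    }

    public void rebind(NameComponent n, Object obj) { bindings.put(n, obj); }

    public Object resolve(NameComponent n) {
        Object obj = bindings.get(n);
        if (obj == null) throw new IllegalStateException("NotFound");
        return obj;
    }

    public void unbind(NameComponent n) { bindings.remove(n); }

    // Create a sub-context and bind it under the given name (name graph).
    public ToyNamingContext bindNewContext(NameComponent n) {
        ToyNamingContext ctx = new ToyNamingContext();
        bind(n, ctx);
        return ctx;
    }
}
```

The Kind field ("object_code", "context", ...) travels with the name, so the same identifier can denote differently represented artifacts, exactly as in the slides.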
Use of Names
- [figure] A system-dependent name is turned into a CORBA Name (mk_name); binding associates the name with an object in a name space; resolving the name in the (possibly nested) name space yields the object again.

```java
// From: Redlich
import java.io.*;
import java.awt.*;
import IE.Iona.Orbix2.CORBA.SystemException; // OrbixWeb
import CosNaming.NamingContext;              // name service/context
import CosNaming.NamingContext.*;            // name service/exceptions
import Calc5.calc.complex;                   // type 'complex' from Calc5

class MyNaming extends CosNaming { ... }

public class client extends Frame {
    private Calc5.calc.Ref calc;
    private TextField inR, inI;
    private Button setB, addB, multB, divB, quitB, zeroB;

    public static void main(String argv[]) {
        try {
            NamingContext ctx = CosNaming.NamingContext._narrow(
                MyNaming.resolve_initial_references(MyNaming.NameService));
            Calc5.calc_factory cf = Calc5.calc_factory._narrow(
                ctx.resolve(MyNaming.mk_name("calcfac")));
            client f = new client(cf.create_new_calc());
            f.pack();
            f.show();
        } catch (Exception ex) {
            System.out.println("Calc-5/Init: " + ex.toString());
        }
    }
}
```

Object Services: Persistency
- Defines a Persistent Object Identifier (PID)
  - A PID references the value of a CORBA object (in contrast to the CORBA object itself)
- Interface: connect, disconnect, store, restore, delete
- Attachment to databases is possible (also ODMG compatible)

Object Services: Property Service
- Management of lists of features (properties) for objects
- Properties are strings
- Dynamically extensible
- The concept is well known from LISP property lists, associative arrays, and Java property classes
- Iterators for properties
- Interface: define_property, define_properties, get_property_value, get_properties, delete_property, ...

Collaboration Services: Transactions
► What a dream: the Web as a database with nested transactions. Scenarios:
■ Accounts as Web objects; transfers as transactions on the objects of several banks
■ Parallel work on web sites: how to keep them consistent?
► Standard 2-phase commit protocol:
■ begin_ta, rollback, commit
► Nested transactions
■ begin_subtransaction, rollback_subtransaction, commit_subtransaction

Appendix: CORBA Facilities (Standards for Application Domains)

Application-domain-specific interfaces

**Horizontal Facilities**
- **User interfaces**
  - Printing, scripting
  - Compound documents: since 1996 OpenDoc has been accepted as the standard format; its source code has been released by IBM
- **Information management**
  - Metadata (Meta Object Facility, MOF)
  - Tool interchange: a text- and stream-based exchange format for UML (XMI)
  - Common Warehouse Model (CWM): a MOF-based metaschema for database applications

Vertical Facilities (Domain-Specific Facilities)

The Domain Technology Committee (DTC) creates a Domain Task Force (DTF) for each application domain
► Business objects
► Finance/insurance
■ Currency facility
► Electronic commerce
► Manufacturing
■ Product Data Management enablers (PDM)
► Medicine (healthcare, CorbaMed)
■ Lexicon Query Service
■ Person Identifier Service (PIDS)
► Telecommunications
■ Audio/visual stream control object
■ Notification service
► Transportation

Since 2000, the OMG describes domain-specific vocabularies with UML profiles
- Probably, all CORBA facilities will end up in UML profiles

A UML profile is a UML dialect for an application-specific domain
- With new stereotypes and tagged values
- Corresponds to an extension of the UML metamodel
- Corresponds to a domain-specific language with its own vocabulary
- Every entry in a profile is a term

Example UML profiles:
- EDOC: Enterprise Distributed Object Computing
- Middleware profiles: CORBA, .NET, EJB
- Embedded and real-time systems:
  - MARTE profile on schedulability, performance, and time
  - Ravenscar profile
  - HIDOORS profile on real-time modelling, www.hidoors.org

Appendix: CORBA and the Web

CORBA and the Web
- HTML solves many of the CORBA problems
- HTTP is only for data transport
- HTTP cannot call methods, except through CGI gateway functionality (common gateway
interface)
- Behind the CGI interface is a general program, communicating with HTTP through untyped environment variables (a hack!)
- HTTP servers are simple ORBs; pages are objects
- The URI/URL name schema can be integrated into CORBA
- IIOP becomes a standard internet protocol
- Standard ports, URL mappings, and standard proxies for firewalls are available
- CORBA extends HTTP from data to code

**CORBA and Java**
- Java is an ideal partner for CORBA:
  - Bytecode is mobile, i.e.,
    - Applets: move calculations to clients (thin/thick client problem)
    - Can be used for migration of objects, ORBs, and agents
  - Since 1999, direct CORBA support in JDK 1.2
    - IDL-to-Java mapping, IDL compiler, Java-to-IDL compiler, name service, ORB
- CORBA gives Java a distributed, interoperable infrastructure
- Java imitates functionality of CORBA
  - Basic services: Remote Method Invocation (RMI), Java Native Interface (JNI)
  - Services: serialization, events
  - Application-specific services (facilities): reflection, properties of JavaBeans

**CORBA and the Web (ORBlets)**
- ORBs written in Java can be shipped as bytecode applets (ORBlets)
- Coupling of HTTP and IIOP: download an ORBlet with HTTP, then talk to this ORB to get in contact with the server
- Standard web services (see later) are slower than CORBA/ORBlets, because they incur interpretation overhead

What Have We Learned
- CORBA is big, but universal:
  - The CORBA interfaces are very flexible, work, and can be used in practice
  - ... but they are also complex and fat, and maybe too flexible
- If you have to connect to legacy systems, CORBA works
- CORBA has the advantage of an open standard
- To increase reuse and interoperability in practice, one has to learn many standards
- Trading and dynamic invocation are advanced communication mechanisms for the future
- CORBA was probably only the first step; web services might be taking over

The End
Software Components for Serious Game Development

Wim Westera¹, Wim van der Vegt¹, Kiavash Bahreini¹, Mihai Dascalu², and Giel van Lankveld¹
¹ Open University of the Netherlands, Heerlen, The Netherlands
² University Politehnica of Bucharest, Bucharest, Romania
wim.westera@ou.nl, wim.vandervegt@ou.nl, kiavash.bahreini@ou.nl, mihai.dascalu@cs.pub.ro, giel.vanlankveld@ou.nl

Abstract: The large upfront investments required for game development pose a severe barrier for the wider uptake of serious games in education and training. Also, there is a lack of well-established methods and tools that support game developers in preserving and enhancing the games’ pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system’s concept and its practical benefits. First, the Emotion Detection component uses the learners’ webcams for capturing their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners’ progress without bothering about the required statistics computations. Third, a set of language processing components accommodates the analysis of textual inputs of learners, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g.
for player data or game world data - across multiple software components. The presented components are exemplary for the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions. Keywords: serious games; applied games; game technology; game development; software components; RAGE; performance; emotion; natural language processing; learning analytics 1. Introduction The creation of games, be it leisure games or serious games, is a complex process of game design, programming, content production, and testing (Björk et al, 2006; De Gloria et al, 2014; Westera et al, 2008; Salen et al, 2004). To some extent the efforts, time and costs of game development can be reduced by reusing existing game objects that are available on various online portals such as the Unity Asset Store, Guru, Game Salads and the Unreal Marketplace. Unfortunately, most of the game objects offered come with three main drawbacks. First, most objects are media objects (e.g. terrains, audio, buildings, weapons, user-interface objects, and templates), which do reduce the efforts required for content creation, but still preserve the programming load. Second, the objects very rarely expose pedagogical affordances, while they do not discriminate between leisure games and serious games. Third, the scarce game software objects are bound to a particular game engine or platform and will not run on other platforms: a Unity3D software component will only run in the Unity3D engine. The RAGE project, which is a Horizon 2020 research and innovation project of the European Commission, addresses these issues by making available a set of serious game software components that can be used across a wide variety of game engines, game platforms, and programming languages that game developers have in use (Van der Vegt et al, 2016a). 
Up to forty reusable software components are foreseen, which cover a wide range of functionalities particularly tuned to the pedagogy of serious gaming. The components address player data analytics, emotion recognition, stealth assessment, personalisation, game balancing, procedural animations, language analysis and synthesis, interactive storytelling, social gamification, and many other functions. The European Commission supports the RAGE project as it envisions a flourishing serious games industry that both stimulates the creation of jobs in the creative industry sectors and helps to address a variety of societal challenges in education, health, social cohesion, and citizenship. Also, the approach of open software components complies with Europe’s Digital Single Market policy, which is among the top priorities of the European Commission: enhancing Europe's position as a world leader in the digital economy by supporting open standards, open data, and open access. Moreover, RAGE complies with the principles of fair competition as its portable open software components help to avoid game engine vendor lock-in, monopolies, and price-fixing. In this paper we present a first selection of software components and explain their design and purpose. These include components for emotion detection, performance statistics, natural language processing and data storage. Before going into the details of these game software components, we will first outline the overall architecture. 2. The RAGE Component-Based System Architecture The purpose of the RAGE component architecture is to allow for the easy integration of reusable software modules across a multitude of different game engines and platforms. The architecture combines a service-oriented architecture (SOA) using web-services for communication with remote agents via the http protocol (e.g. 
REST) and a client-side framework for components that are to be fully integrated into the game engine (Van der Vegt et al, 2016a; Van der Vegt et al, 2016b). Such a hybrid approach is needed to bypass the limitations of both SOA and client-side solutions. Generally, SOA offers several advantages such as decoupling from implementation details and a high degree of reusability of services. On the downside, SOA may reduce system performance due to frequent network calls and additional overheads, and it offers limited customisation and configuration of services by service consumers. Moreover, the player is supposed to be permanently online, which need not always be the case. The RAGE client architecture relies on a limited set of well-established software patterns and coding practices aimed at decoupling abstraction from its implementation. It avoids dependencies on external software frameworks and minimizes code that may hinder integration with game engine code. However, if client-side components require extensive processing, remote services may be a better option to avoid poor game performance on the local machine, e.g. reduced frame rate or reduced game responsiveness. Client components will primarily be developed in C# and JavaScript/TypeScript as to comply with most popular development platforms. Even so, proofs of concept specifically designed within RAGE are available in C++ and Java (Van der Vegt et al, 2016a). Figure 1 sketches the basic structure of the RAGE architecture (Van der Vegt et al, 2016a; Van der Vegt et al, 2016b). Figure 1. Outline of the RAGE component-based architecture and its diverse data communication routes. In figure 1, the game engine and the eventual game server are in pale grey. Dark rectangles represent RAGE software components. The RAGE data analytics server is a central server-side system that aggregates interaction data from the population of players to be inspected by teachers and educators. 
This server handles authentication and authorisation, and acts as a gateway proxy server to underlying components and services. It includes a component manager for the registration of web services. Client-side RAGE components are coordinated by a local component manager. RAGE components, either client-side or server-side, can directly interact with each other. Server-side components can be included in the game server or use a separate server. Third party web services can be called by both client-side and server-side components. 3. The Emotion Detection Component Emotion is a highly relevant factor in both game play and learning (Bower, 1981; Butz et al, 2015). Emotions can strongly influence cognitive processes as they relate to memory and action (Pekrun, 1992). It is widely accepted that emotion and cognition represent highly interactive and integrated processes (Izard, 2009), a perspective consistent with the high degree of connectivity among the brain’s neural structures and systems as established by neuroscience studies (Heraz et al, 2011). So far, however, the player’s emotions have been systematically neglected in the player models from serious games, due to difficulties and limitations of detection methods. Advances in machine learning and pattern recognition have now enabled a viable and affordable detection of emotions. The RAGE Emotion Detection component enables game developers to easily include the player’s emotions in the game play. This client-side component uses the player’s webcam video stream for the real-time recognition of the player’s emotions from facial expressions. The Emotion Detection component returns a value for one out of six basic emotions (happiness, sadness, surprise, fear, disgust, anger) suggested by Ekman et al (1978), as well as the neutral emotion. The returned emotional state values can be used in various ways to enhance the pedagogical quality of serious games. 
First, tracking the player’s moods during gameplay can be used to improve game balancing and personalisation, for instance by informing pedagogical support interventions in the game, such as hints, guidance, and feedback. Second, the unobtrusive tracking of player emotions enables the evaluation and analysis of user experiences during gameplay; issues and bottlenecks in the game could be identified and removed. Third, when emotions are part of the learning content (e.g. in job interview training, negotiation training, or social skills training), the Emotion Detection component can be used for assessing the players’ mastery of emotion expression and for providing feedback based on their performances. Figure 2 shows a screenshot of an applied game in communication training that provides direct feedback on the quality of expressed emotions (Bahreini et al, 2014). Figure 2. Emotion-based feedback in a communication dialogue training game (Bahreini et al, 2014). This RAGE component comes with several advantages. It provides a simple and low-cost solution, as it just uses a standard webcam. It offers high accuracy and reliability, comparable with those of expert human raters. In many respects the webcam-based emotion detection is superior to existing approaches such as physiological sensors and questionnaires, which are obtrusive, discontinuous, disturbing, or unreliable. A developed and tested version of this component has already been reported in a previous study (Bahreini et al, 2014). Technically, the component uses as input the webcam’s video stream with a frame rate of up to 30 frames per second. For each image received from the webcam, the component returns a probability table for the set of seven emotional states. No storage capacity for the video streams is required, as these are processed on the fly and not stored for retrieval or post-processing. This component is being developed as a client-side component in both C# and JavaScript/TypeScript.
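As a small illustration of consuming the component’s output, the per-frame probability table over the seven emotional states can be reduced to a dominant emotion. This is a hypothetical sketch in Java (the component itself targets C# and JavaScript/TypeScript, and its actual API may differ):

```java
import java.util.Map;

// Pick the most probable emotional state from a per-frame probability table.
public class DominantEmotion {
    public static String dominant(Map<String, Double> probabilities) {
        return probabilities.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("neutral"); // fall back to neutral on an empty table
    }

    public static void main(String[] args) {
        // One hypothetical frame: Ekman's six basic emotions plus neutral.
        Map<String, Double> frame = Map.of(
                "happiness", 0.62, "sadness", 0.03, "surprise", 0.10,
                "fear", 0.02, "disgust", 0.01, "anger", 0.02, "neutral", 0.20);
        System.out.println(dominant(frame)); // prints "happiness"
    }
}
```

A game could feed the dominant state into its balancing or feedback logic, e.g. offering a hint when frustration-related states dominate over several consecutive frames.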
A first experiment with this component involves its use in a job interview game. 4. The Performance Statistics Component Driven by the ever growing datasets in education and training, many educators and scientists consider learning analytics as a means of improving the quality of learning (Buckingham Shum et al, 2012). Serious games would be an excellent target of learning analytics as playing a serious game produces highly individualised data trails that reflect the player’s personal choices, behaviours, and performances. However, due to multiple constraints, it still is a complex case for teachers and game developers to use these trails in a productive way. First, the diversity and large amounts of logged data creates a challenge in terms of deciding which entries to select and which statistical methods to apply. Second, the correct application of statistical methods requires detailed knowledge about the procedures and their limitations. Third, the interpretation of statistical results may be deceptive due to hidden assumptions, constraints, and common pitfalls associated with statistics. The RAGE Performance Statistics component provides teachers and game developers with a utility that overcomes these problems. Basically, the component requires relevant player performance data as inputs and then automatically generates a set of statistical indicators about the player’s learning progress, supplemented with validity checks, key events, and guidelines for their interpretation. The performance statistics component processes two types of input metric that are most commonly used for indicating learning progress (Chance, 2009). The first metric reflects the time required to complete a task, while the second one is indicative of the performance on a task (in terms of errors made or points scored). Tasks are well-identified game activities with a clear start and ending, and possibly well-related to learning goals. 
Differences in progress may exist between learners as some learners may complete the task in one play-through, while others may need multiple trials. It is essential for tutors to have access to such data in order to be able to provide further assistance and feedback when needed. The Performance Statistics component works on the server-side, enabling it to aggregate player data at population level. Valuable insights are gained from comparisons at population level, as it provides benchmarks for individual students. After game data is sent to the server, the Performance Statistics component performs the statistical analyses and presents the results through a web interface in either tabular or graphical form, including an interpretational message. At individual level, each learner’s score is compared to the group means and standard deviations, while notifications are triggered in case of substantial deviations. Also, a self-comparison is provided for each learner in case of multiple trials on the same task. Learning progress can be identified by fitting a regression curve to the learner scores or the time spent. A second level of analysis is the group level: the performances of different groups, e.g. schools, teams, or classes, are compared. The groups’ progress over trials can be derived from individual player data. T-tests are used to identify differences between groups. In addition, the component supports a third level of analysis: by aggregating task performances across different trials learners can be mutually compared. Deviations from normality or from the average may signal problems. For instance, skewness of the distribution may indicate balancing issues with the task at hand. Similarly, large standard deviations across task performance may indicate unclear or inappropriate task definitions. 
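The individual-level check described above (comparing each learner’s score to the group mean and standard deviation, and triggering a notification on substantial deviations) can be sketched as follows. This is an illustrative Java sketch, not the component’s actual code, and the 2-standard-deviation threshold is an assumption for the example:

```java
// Flag learners whose score deviates substantially from the group,
// in the spirit of the Performance Statistics component's individual-level check.
public class DeviationCheck {
    public static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    public static double stdDev(double[] xs) {
        double m = mean(xs), s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return Math.sqrt(s / (xs.length - 1)); // sample standard deviation
    }

    // True when the score lies more than two standard deviations from the
    // group mean (the threshold is an illustrative choice, not the component's).
    public static boolean substantialDeviation(double score, double[] group) {
        double sd = stdDev(group);
        return sd > 0 && Math.abs(score - mean(group)) > 2 * sd;
    }
}
```

The same primitives extend naturally to the other levels of analysis: group comparisons reuse the means and standard deviations as t-test inputs, and per-trial means over time give the data points for fitting a learning-progress regression curve.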
The Performance Statistics component offers a practicable suite of statistical options that may help teachers gain insight into the learning performances of their learners and that provide timely triggers to detect uneven progress. No substantial prior knowledge of statistics is required from the teacher or the developer to use this component. The component is being developed in Java and JavaScript. A first implementation will be tested in a communications training game for Dutch vocational schools.

5. Language Technology Services

For a long time, computer-based learning tools were not capable of processing learners' textual inputs; instead, they had to rely on the unambiguous responses of multiple-choice items. Consequently, learners were required to recognise and select the correct answers from a limited set of options, rather than express their thoughts in natural language. This makes a big difference, as the active use of language is closely related to comprehension and in-depth cognitive processing. Obviously, language is a key component of education, because it represents the conduit for communicating and understanding information. In recent years, Natural Language Processing (NLP) techniques have become accurate, efficient, and reliable methods for analysing language and have gained broader usage. So far, however, their application in education has been scarce and indirect (e.g. simple word counts and plagiarism checking). Given the advances in language technologies, the RAGE project will make available a set of NLP services to be directly integrated in serious games. These services facilitate a variety of improvements in serious game design, all of which concern the direct analysis and interpretation of spoken or written traces of players while interacting with the game and/or with fellow players.
In addition, our language services facilitate a wide range of educational scenarios, covering, for example, the automated evaluation of summaries with regard to comprehension prediction, the assessment of CVs and cover letters in terms of adequacy and optimism, personalised recommendations for improving one's writing style, and many more. NLP techniques provide the ground for computational analyses focused on various aspects of language and are related to particular tasks or domains. The tools can measure a variety of linguistic features that are important for understanding texts and predicting learners' comprehension, including textual cohesion, textual complexity, and sentiment analysis, often referred to as opinion mining: identifying and extracting subjective information from texts. The services will be based on the ReaderBench framework (Dascalu, 2014; Dascalu et al., 2014), which introduces a generalised, multi-lingual, automated model applicable both to essay-like or story-like texts and to conversations in multi-user games, Computer Supported Collaborative Learning (CSCL), or Communities of Practice (cf. figure 3).

Figure 3. Global view of the ReaderBench NLP framework.

Basically, the framework allows for predicting and assessing comprehension, both for individual and for collaborative learning in teams. While comprehension prediction evaluates the textual complexity or difficulty of learning materials, comprehension assessment refers to the a posteriori processing of learner productions, e.g. reports, answers, or contributions to (online) conversations. The core of our language analysis is cohesion, which can be viewed as the sum of lexical, grammatical, and semantic relationships that link together textual segments. Cohesion is automatically computed in terms of semantic distances in lexicalised ontologies (e.g.
WordNet or other localised instances), Latent Semantic Analysis (LSA) vector spaces, and Latent Dirichlet Allocation (LDA) topic distributions (Dascalu, 2014). The two latter semantic models need to be trained on dedicated corpora reflecting language and domain specificities. Moreover, tailored natural language processing pipelines employ a variety of methods for tokenising, splitting, selection of dictionary-only words, stop-word elimination, stemming, part-of-speech tagging, dependency parsing, lemmatisation, and named-entity recognition (Dascalu, 2014). Four services that have been integrated in the ReaderBench framework are explained below. First, sentiment analysis (opinion mining) allows for tracking the mood in terms of multiple valences that can be extracted, e.g. positive words versus negative words (cf. figure 3). In RAGE, sentiment analysis will be used for assessing the mood conveyed by application letters that learners have to write in a job application training game. Second, automated essay grading refers to the assessment of learners' texts in reports and responses to open questions. The service helps to identify specific flaws in writing that can be improved (e.g., length of phrases, use of discourse connectors, lack of cohesion between adjacent phrases) and allows for providing personalised feedback to learners. Such feedback has been demonstrated to have a strong positive impact on writing quality (Crossley et al, in press). The analysis of textual complexity includes a multitude of factors such as classic readability formulas, surface metrics commonly used in other automatic essay grading systems, syntax indices, as well as semantics and discourse. Importantly, our model is extensible, covers multiple languages (e.g., English, French, Romanian) and is centered on cohesion, introducing innovative indices such as document cohesion flow (Crossley et al, in press) or Age of Exposure (Dascalu et al, 2016).
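At its simplest, the cohesion measure at the core of these indices can be illustrated with lexical overlap: the sketch below computes the cosine similarity between bag-of-words vectors of adjacent segments, standing in for the trained LSA/LDA semantic spaces described above (all names are mine, not ReaderBench's API):

```typescript
// Toy stand-in for ReaderBench's cohesion computation: cosine similarity
// over bag-of-words vectors instead of trained LSA/LDA semantic models.
function bagOfWords(text: string): Record<string, number> {
  const bag: Record<string, number> = {};
  for (const w of text.toLowerCase().match(/[a-z]+/g) || []) {
    bag[w] = (bag[w] || 0) + 1;
  }
  return bag;
}

function cosine(a: Record<string, number>, b: Record<string, number>): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (const w of Object.keys(a)) {
    dot += a[w] * (b[w] || 0);
    na += a[w] * a[w];
  }
  for (const w of Object.keys(b)) {
    nb += b[w] * b[w];
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// "Cohesion flow" across a text: similarity of each adjacent segment pair.
function cohesionFlow(segments: string[]): number[] {
  return segments
    .slice(1)
    .map((seg, i) => cosine(bagOfWords(segments[i]), bagOfWords(seg)));
}
```

A sudden drop in the resulting sequence marks a weakly cohesive transition, which is the kind of signal the document cohesion flow index builds on.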
Third, conversation analysis is applied to support tutors in the time-consuming process of analysing online conversations. Two complementary computational models for assessing participation and collaboration are applied. One model performs a longitudinal analysis of an ongoing conversation and accounts for collaboration from a social knowledge-building perspective. The second model takes a dialogical perspective, using the alternating interactions of speakers for a transversal analysis of the conversation. Fourth, the use of reading strategies by learners during their self-explanations is widely recognised as a determinant of reading comprehension (McNamara, 2004). This RAGE component automatically identifies the strategies employed, involving metacognition, causality, bridging, paraphrasing, and elaboration. The identified strategy profiles are reliable predictors of the individual comprehension level, and the accuracy of comprehension prediction can be raised significantly by integrating textual complexity indices. All components are exposed as REST web services that can be easily integrated in third-party applications that query the functionalities provided by the web server. The entire framework is implemented in Java and builds on multiple libraries, most notably Stanford CoreNLP, Apache Mahout, MALLET, and Gephi.

6. The Shared Data Storage Component

Since a game might include multiple RAGE components that potentially use shared datasets, a coordinated data management service is needed. This service is provided by the Shared Data Storage component. It offers a client-side technical utility that lets component developers and game developers define and store datasets without having to worry about compliance with the overall RAGE architecture or usage in different game engines (Van der Vegt et al, 2016).
Simple examples of shared data include the player ID, the associated player scores, or emotion values produced by the Emotion Detection component. The names and data types should be dictated by a centralised vocabulary. Various RAGE components define taxonomy-based data or data models that can be represented as tree structures (domain model, competency model, player model, etc.). The Shared Data Storage component offers a central facility to manage and store such information without the need for data duplication or additional coding, thus reducing testing and maintenance overheads while preserving portability to popular game development engines such as Unity3D and Xamarin. The component allows the model data to be stored in, and/or retrieved from, four different locations (cf. figure 4).

Figure 4. Persistence sources and destinations of the Shared Data Storage Component.

The supported locations are a) the Storage Service offered by the RAGE Proxy Server, b) the client machine's local filesystem, c) the game using the component, and d) transient data.
- Online storage offered by the RAGE Proxy Server: this service is available after authentication against the Proxy Service. To use it, the proxy must be up and running, and the player must be online.
- Storage on the client's local filesystem: the component can persist data in local storage when there is no online connection or when authentication against the proxy fails.
- Storage in the game: if the game is used as a storage location, the game is queried for values whenever needed. This data is read-only so as not to interfere with the game. Example data would be the time spent within the game, the current location, the username, or the user ID.
- Transient data: parts of the data structure can be marked as transient and thereby excluded from persistent storage.
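Two behaviours of this component can be sketched briefly: the fall-back from online storage to the local filesystem, and lossless round-trip serialisation in which type information is saved alongside the data. The interface, class, and type names below are made up for illustration; they are not the actual RAGE API:

```typescript
// Hypothetical bridge interface for one storage location.
interface IStorageBridge {
  save(key: string, data: string): boolean; // true on success
  load(key: string): string | null;
}

// Fall back to local storage when the online service is unreachable
// or authentication against the proxy fails.
class FallbackStorage {
  constructor(private online: IStorageBridge, private local: IStorageBridge) {}

  save(key: string, data: string): boolean {
    return this.online.save(key, data) || this.local.save(key, data);
  }
}

// Lossless round-trip serialisation: store a type tag next to each value
// so that restoring yields identical datatypes (e.g. the string "42"
// never silently becomes the number 42).
type Tagged = { t: "int" | "float" | "string" | "bool"; v: string };
type Value = number | string | boolean;

function serialize(data: Record<string, Value>): string {
  const tagged: Record<string, Tagged> = {};
  for (const k of Object.keys(data)) {
    const v = data[k];
    if (typeof v === "boolean") tagged[k] = { t: "bool", v: String(v) };
    else if (typeof v === "string") tagged[k] = { t: "string", v };
    else tagged[k] = { t: Number.isInteger(v) ? "int" : "float", v: String(v) };
  }
  return JSON.stringify(tagged);
}

function deserialize(json: string): Record<string, Value> {
  const tagged = JSON.parse(json) as Record<string, Tagged>;
  const out: Record<string, Value> = {};
  for (const k of Object.keys(tagged)) {
    const { t, v } = tagged[k];
    out[k] = t === "bool" ? v === "true" : t === "string" ? v : Number(v);
  }
  return out;
}
```

In a real deployment the two bridges would wrap the Proxy Server's Storage Service and a file on the client machine; the fall-back logic and the type tagging stay the same.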
For accessing the various storage locations, the Shared Data Storage component uses bridge interfaces that can be shared with other RAGE components. Three data exchange formats are currently supported: XML, JSON (Crockford, 2006), and .NET Binary. When binary serialisation is combined with the XML or JSON format, the binary data are converted into text by base64 encoding. The Shared Data Storage component spares developers a tedious serialisation process. Serialisation may look deceptively easy, but it can be complex for certain data types. JSON in particular is known for problems in inferring datatypes and may not be lossless when used out of the box. JSON serialisers, such as the popular Newtonsoft Json.NET library (Newtonsoft, 2016), often try to deduce the type from the data, so datatypes may change unexpectedly. RAGE's Shared Data Storage component, by contrast, offers lossless round-trip serialisation, meaning that structure, data, and datatypes are identical after restoring. To achieve this, type information is saved alongside the data. The component also allows the data to be restored from multiple sources, as the data and the internal data structure are serialised and persisted separately. Altogether, the Shared Data Storage component offers developers the benefit of integrating only one component, without having to duplicate code for lossless serialisation, persistence, or the authentication and communication with a storage server.

7. Discussion and Conclusion

This paper has presented four separate software components based on the RAGE applied gaming framework. The main purpose of the RAGE framework and its components is to support serious game developers in the process of game development. The components offer game developers easy access to advanced technologies that might otherwise be beyond reach.
The components are available as reusable code that can be easily integrated in various existing game engines. Multiple RAGE components can be added to the same game, enhancing the overall support for learning. More importantly, the use of RAGE components makes game development easier, faster, and more cost-effective. The RAGE project thereby aims to accommodate the serious game industry and to contribute to expanding the serious game market. The four components presented in this paper are a first selection out of the forty components that are anticipated. Possibly even more important than the components themselves are the technical infrastructure and component methodology that RAGE will make available, which will enable third parties to create and publish their own RAGE-compliant software technologies. The infrastructure includes tools for adding RAGE metadata, quality assurance checkpoints, and authoring widgets for component-related content, and it allows third parties to package, upload, and publish RAGE-compliant components to the software repository. Still, some limitations have to be taken into account. In particular, the portability of client-side components is subject to constraints (Van der Vegt et al, 2016a). First, for pragmatic rather than principled reasons, RAGE clients will only use code bases in C# (for desktop and mobile games) and JavaScript/TypeScript (for browser games), predominant examples of compiled and interpreted languages, respectively. Implementing all components in three, six, or even more programming languages would simply require too many resources. Second, and for the same reason, testing the integration of components in game engines will necessarily be limited to a few major engines.
It is virtually impossible to cover the wide variety of game engines and platforms that are available (Wikipedia lists 169 different engines), not to mention compatibility issues between different versions. Third, the inventory of RAGE components should not be conceived as a honey pot that allows for unlimited snitching: the integration of a large number of components in a single game may cause coordination or synchronisation problems, potentially leading to weak performance, malfunctioning, or ultimately system crashes. Although no fundamental problems have been identified so far, the implementation of many components can only be tested on a case-by-case basis. Fourth, although the integration of components in game engines should not be that complex, some components may require extensive preparation before running properly. For instance, in AI components or some of the language technology components, models may need to be trained and tuned to be practicable in specific domains or educational contexts. Notwithstanding these potential issues, RAGE's software components will work correctly in major game engines and on major platforms, and the project thereby contributes to seizing the potential of serious games for teaching, learning, and various other domains.

Acknowledgement

This research was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 644187, the RAGE project (www.rageproject.eu).

References
Extending Matrix's Functionality in a Secure and Sustainable Way

Author: Luuk Maas, luuk.maas@ru.nl, s1010080
First supervisor/assessor: Prof. dr. BPF (Bart) Jacobs, bart@cs.ru.nl
Second supervisor: H.F. van Stekelenburg, harm.vanstekelenburg@ru.nl

August 18, 2023

# Contents

1 Introduction
  1.1 Problem and Motivation
  1.2 Contributions
2 Preliminaries
  2.1 What is Matrix? How does it work?
    2.1.1 User/device identification
    2.1.2 Rooms
    2.1.3 How messages are exchanged
  2.2 How does Matrix handle key-management?
    2.2.1 Using multiple clients
    2.2.2 End-to-end message encryption and authentication
3 Altering Element's Source Code
  3.1 TypeScript
  3.2 Sustainability
4 Matrix Widgets
  4.1 Sustainability
  4.2 Security
  4.3 Conclusion
5 Matrix Bots
  5.1 Maubot Manager Setup
  5.2 Installing and Using a Plugin
  5.3 Developing a Prototype
  5.4 Sustainability
  5.5 Security
    5.5.1 Other Security Considerations
  5.6 Conclusion

Chapter 1
Introduction

For a long time now, instant messaging and social media have played a large role in our lives. Almost all of the major platforms (e.g. WhatsApp, Facebook, Instagram) provide centralized communication, storing your (personal) data in their massive data centres. In 2014, a company called Amdocs started developing Matrix [13], a standard that aims at secure, decentralized, real-time communication. It has been published and ready for use in production since June 2019 [4]. To learn more about how the Matrix protocol works, see the preliminaries in the next chapter.

1.1 Problem and Motivation

Up to the moment of writing this paper, the Matrix protocol has mainly been used to implement instant messaging applications, of which the most well-known example is Element [2]. What if you want something more than "just" instant messaging? In November 2021, the development of a new (Dutch) community network named PubHubs started [10]. PubHubs is an open-source project based on public values. It is open and transparent and protects the data of the network's participants. Obviously, a project like PubHubs needs more than the basic features that are offered by clients such as Element. It would therefore be essential to extend Matrix's (or its clients'\(^1\)) functionality, preferably in such a way that new features can be added without too much effort. Consequently, the research question that will be answered in this paper is: "what is the best way to extend Matrix's functionality in a secure and sustainable way?". Sustainability in this case refers to the desire that, when a new feature or bug fix is needed, it can be realised quickly and efficiently using the method that we find in this research.
We also focus on maintainability and the ease of deployment. Security primarily refers to maintaining end-to-end encryption, one of Matrix's key features.

\(^1\)A client is the program that allows a user to send messages through Matrix.

1.2 Contributions

In this paper the following will be discussed:
- We examine the different possibilities for adding functionality to Matrix (clients). These include altering the source code of a client like Element (chapter 3), Matrix Widgets (chapter 4), and Matrix bots (chapter 5). For each of these methods we analyse the advantages and disadvantages with respect to sustainability and security.
- A prototype in the form of a shared calendar has been developed using the method we deemed "best" from the previous point, which turned out to be Matrix bots.

Chapter 2
Preliminaries

2.1 What is Matrix? How does it work?

Matrix calls itself an "open standard for interoperable, decentralized, real-time communication over IP" [4]. This sentence contains the key concepts of Matrix:
- It is an open standard: Matrix has published the protocol in detail in its so-called Matrix Specification [5].
- Interoperable means that it is possible to communicate with other communication systems.
- Matrix is decentralized: there is no single point of failure, nor a single point of storage. Anyone can run a homeserver, which is basically a server that forwards messages.
- Matrix is designed to operate in real time, so it is well suited for applications such as instant messaging.

In order to use Matrix, you need two things:
1. A client, which is used to connect to a homeserver. The user can choose this freely. As mentioned before, a commonly used client is Element [2].
2. A homeserver, which forwards messages to other users. These other users can be using the same homeserver, but also different ones. The Matrix.org Foundation\(^1\) has built Synapse, which can be used to run a homeserver [11].
It is currently the most widely installed implementation for homeservers, according to The Matrix Foundation\(^2\). If you do not want to run your own homeserver, you can use Matrix's homeserver.

\(^1\)The non-profit organization that guards the Matrix standard on behalf of the Matrix community.
\(^2\)https://matrix.org/docs/guides/installing-synapse

2.1.1 User/device identification

Users send messages using a client (e.g. Element). In order to do so, every user needs a user account and a device. A user is identified by a username/domain combination, like so: \texttt{@username:domain}. The domain part refers to the homeserver that the user is using. See figure 2.1. Every time a user logs in on a new client, it registers as a new device. For every new device, a set of keys is generated. These keys consist of long-term keys and one-time keys. More on key management can be found in section 2.2.

![Figure 2.1: Schema of clients connected to federated homeservers, from the Matrix documentation](https://matrix.org/docs/matrix-concepts/elements-of-matrix/#homeserver)

2.1.2 Rooms

Everything in Matrix happens in so-called rooms. Rooms are distributed, meaning that they do not live on a single homeserver. A room is identified by a (unique) room identifier and a domain, like this: \texttt{!opaque_id:domain}. However, a room may optionally have one or more aliases to increase its findability. A room alias looks like this: \texttt{#room_alias:domain}. Note that the 'domain' part (in both the room ID and the room alias) simply refers to the homeserver that the room was created from. As mentioned earlier, a room does not live on a single homeserver; the domain part is there for namespacing (to avoid clashes of room identifiers between different homeservers) [5].

2.1.3 How messages are exchanged

Assume Alice and Bob each have their own homeserver and have established a room together. Now Alice wants to send a message to Bob.
First, Alice's message gets sent to Alice's homeserver. Next, Alice's homeserver stores and forwards ("federates") this message to Bob's homeserver, which stores the message too. Lastly, Bob can retrieve the message from his homeserver. If Alice now logs on to a new device, she can retrieve the message she sent earlier from her homeserver (if the homeserver approves this request). Before messages get federated from one homeserver to other homeservers, they are first hashed and signed (using the homeserver's private key). Upon receiving a message, a homeserver first verifies the hash and signature before processing it. The encryption of messages is discussed in section 2.2.2.

2.2 How does Matrix handle key-management?

As mentioned in the previous section, Matrix maintains both long-term and one-time keys for devices. Each time a user logs in to a client for the first time, that client is registered as a new "device". For this device, both long-term keys and one-time keys are generated:

**Long-term device keys**
- An Ed25519 fingerprint key pair. This key pair is used for signatures and key verification.
- A Curve25519 identity key pair. This key pair is used for shared secret key establishment.

**One-time device keys**
The one-time key pairs are generated using the aforementioned Curve25519 key pair, and they are used for shared secret key establishment as well.

All public keys are published to the homeserver using the client-server API. The private keys are stored within the client.

2.2.1 Using multiple clients

When a user registers with a new client (device), it will not immediately be able to read previously sent messages. In order to do that, the user (through the client) has to request the keys from their other device(s) through an m.room_key_request event. If the "old" client receives and accepts this request, the keys can be exported to the new client.
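The hash-and-sign step that protects federated events (section 2.1.3) can be illustrated with Node's built-in Ed25519 support. This is a simplified sketch: real Matrix federation signs a canonical-JSON form of the event and publishes keys with key identifiers, both of which are omitted here:

```typescript
import * as crypto from "crypto";

// A homeserver signs outgoing events with its private Ed25519 key;
// the receiving homeserver verifies the signature before processing.
const { publicKey, privateKey } = crypto.generateKeyPairSync("ed25519");

function signEvent(event: string): Buffer {
  // For Ed25519, Node requires the algorithm argument to be null.
  return crypto.sign(null, Buffer.from(event), privateKey);
}

function verifyEvent(event: string, signature: Buffer): boolean {
  return crypto.verify(null, Buffer.from(event), publicKey, signature);
}
```

Any modification of the event in transit invalidates the signature, which is why a receiving homeserver can safely discard tampered messages.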
2.2.2 End-to-end message encryption and authentication

Whenever two users want to communicate securely (end-to-end encrypted) using the Matrix protocol, they first verify each other's public fingerprint key using SAS (Short Authentication String) verification. These fingerprint keys are used to sign the identity keys, which can be verified by both parties. Upon success, they establish authenticated encryption/decryption as shown in figure 2.2: each party generates a one-time key pair and requests the other party's public one-time and identity keys; both compute a shared secret \(S\) using triple Diffie-Hellman; from \(S\) they compute an AES key/IV and an HMAC key; finally, a Megolm session is constructed, encrypted using the AES key/IV, and exchanged.

Figure 2.2: Matrix End-to-End Encryption Protocol

Chapter 3
Altering Element's Source Code

The first attempt to extend Matrix's functionality was to alter the source code of the widely used client Element. Element, like Matrix, is an open-source project [1]. This makes it possible to alter the source code and optionally submit a pull request to have the features added to the main version ('branch') of Element. Element maintains three repositories for three different platforms: the web, iOS, and Android. Given that the web version has the most forks and is the easiest to set up and test with, we chose the web version as our playground.

3.1 TypeScript

Element for web is largely programmed in a programming language called TypeScript. It is built on top of JavaScript, but with stricter syntax [12]. It compiles to native JavaScript, so it runs anywhere JavaScript runs, like on the web.
Although I had not worked with TypeScript before this research, it is fairly straightforward to understand and work with. Even though the language itself can be understood quite easily, the total Element (web) project is quite extensive and takes some time to fully comprehend. The project is broken down into three components: the Element project itself, the Matrix React SDK\(^1\), and the Matrix JS SDK\(^2\). Altering the source code would at the very least require mastering all three of these components, and likely also making changes to multiple components. For the rather short duration of this research it is infeasible to master the code completely, let alone extend it.

\(^1\)https://github.com/matrix-org/matrix-react-sdk
\(^2\)https://github.com/matrix-org/matrix-js-sdk

3.2 Sustainability

The fact that it appears to be infeasible to come up with a prototype for an extension within the duration of this research points to another problem. As stated in the introduction and research question, we are looking for a sustainable solution. This means that we do not want the solution to just fit our example (a shared calendar); we rather want a general method that allows for the development of a multitude of extensions/plugins. From that point of view, altering the source code does not seem like a great idea, since every little change to the code (be it a bug fix or a new feature) requires a pull request that needs to be reviewed and deployed. Another option would be to maintain a fork of the project, which is not a sustainable option either. We will therefore move away from this idea and explore other options.

Chapter 4
Matrix Widgets

Matrix Widgets are another option that appears promising for extending Matrix's functionality [6]. In essence, a Matrix widget is nothing more than an iframe embedded in a Matrix room. Widgets can be room-based and user-based.
Room-based widgets are attached to a single room and available to all users in that room. User-based widgets are linked to a single user and available to that user in all rooms. An example of a user-based widget is a sticker pack. Examples of room-based widgets are a Spotify widget or a Google Calendar widget. Below is an example from [6] of how to create a room-based widget. Note the options "type": "m.widget" and "url": "...", which are the key options for creating a widget.

```json
{
  "content": {
    "creatorUserId": "@rxl881:matrix.org",
    "data": { "title": "Bridges Dashboard", ... },
    "id": "grafana_@rxl881:matrix.org_1514573757015",
    "name": "Grafana",
    "type": "m.grafana",
    "url": "https://matrix.org/grafana/whatever",
    "waitForIframeLoad": true
  },
  "room_id": "!foo:bar",
  "sender": "@rxl881:matrix.org",
  "state_key": "grafana_@rxl881:matrix.org_1514573757015",
  "type": "m.widget"
}
```

In principle, widgets seem promising. However, there is a major flaw that prevents widgets from being a realistic option for extending Matrix (client) functionality right now. The problem is that the Matrix Widget API is currently nothing but a proposal. There is a live version of the API, but it is not part of the Matrix specification. That implies that, as of this moment, a widget - or more specifically, the iframe - cannot communicate (securely) with the Matrix room or the users that are in that room. Although the Matrix Foundation does not give an explicit reason as to why widgets are not part of the specification, the long list of security considerations (see section 4.2) might be part of it. For static applications - like weather widgets, static dashboards, or showing a YouTube video - the absence of an API between the application and the Matrix room is not an issue. However, for our example of a shared calendar, we would need a connection between the application (calendar) and the Matrix room.
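For the shared-calendar scenario, the content of the corresponding widget state event could look like the sketch below, mirroring the shape of the Grafana example above. The widget type, URL, and helper name are hypothetical:

```typescript
// Hypothetical builder for the content of an m.widget state event that
// would embed a shared calendar; the field shape mirrors the example above.
function calendarWidgetContent(creatorUserId: string, url: string) {
  const id = "calendar_" + creatorUserId;
  return {
    creatorUserId,
    data: { title: "Shared Calendar" },
    id,
    name: "Calendar",
    type: "m.custom", // assumed type for a non-standard widget
    url,
    waitForIframeLoad: true,
  };
}
```

A client would send this object as the content of a state event with type "m.widget" and the generated id as the state key, just as in the example above.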
Given the potential of these Matrix Widgets I do expect that in the future this API will be included in the Matrix specification, after the security considerations have been addressed. Despite the fact that it is impossible right now to develop a prototype using this method, I will elaborate further on this topic for future reference (in case it becomes relevant). 4.1 Sustainability On the assumption that the API would work, widgets would be a very suitable option for our example: a shared calendar. The calendar itself - including the data(base) - could live on the server that hosts the widget and the users in the room could communicate with it by making (shared) appointments, for instance. The convenient part is that almost no knowledge of Matrix or its clients is required to implement a plugin like this. It mainly boils down to developing a web application which communicates through an API with the room that the widget is in. This also has the upside that the widget can easily be changed and maintained, if needed. Since we do not alter any source code of Matrix or Element, updates like bug fixes or added features are visible immediately. 4.2 Security The widget API documentation [6] describes some security issues. For instance, a room administrator could add a widget to a room and retrieve all members’ IP addresses by monitoring requests made to the widget. This is not desired for obvious privacy reasons. Potentially even more worrying is the following note from the security considerations: “The presence of a $matrix_user_id does NOT mean the entity making the request has control of that account, there is no verification step. [...] Widgets may incorrectly assume this and hence do awful things like store important info per-account and think that’s secure in any way.” In other words: there is no such thing as authentication between the Matrix room and the widget. 
For the shared calendar that would imply that a user can create, update, or delete appointments of a different user. This is undoubtedly a security issue that should be fixed by changing the implementation of the widgets and adding some form of authentication. Another issue is the fact that interaction between the Matrix client and the widget - and vice versa - appears to be unencrypted. This is apparent from the following example request (from the documentation), which allows a widget to post a message in a room: ``` { api: "fromWidget", action: "send_event", widgetId: $WIDGET_ID, event: { "msgtype": "m.text", "body": "Hi" } } ``` It makes sense that there is no end-to-end encryption, since the widget is not a "user" of the room. It therefore has no key pair, and even if it could generate a key pair, it could not publish its public key to a homeserver, nor retrieve other public keys. ### 4.3 Conclusion Although widgets have a lot of benefits, there are simply too many drawbacks for now to consider them a realistic option for extending Matrix (client) functionality. For now, we will move on to a different method, but it does seem worthwhile to revisit this method in the future, when the API has been implemented properly and the security flaws have been dealt with. Chapter 5 Matrix Bots Besides altering Element's source code or implementing widgets, there is also the possibility of adding so-called bots to a Matrix room. Technically, a bot is nothing more than a user in a Matrix room that returns certain messages based on input messages - which you could see as "commands" - with optional arguments. As a simple example, a bot could function as a dice roller that, upon being called, returns a random number between 1 and 6. Another example is a weather bot that returns the weather from an external API based on a given location. A bot can be built from scratch, but there also exists a very convenient bot manager called Maubot [9]. 
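When built from scratch, the core of such a bot is simply a function that maps an incoming room message to an optional reply. A minimal, Maubot-independent Python sketch of the dice example (the `!roll` syntax follows the text; the function and its name are purely illustrative):

```python
import random
from typing import Optional


def handle_message(body: str) -> Optional[str]:
    """Map an incoming room message to the bot's reply (None = stay silent)."""
    if not body.startswith("!"):
        return None  # ordinary chat message, not a command
    cmd, *args = body[1:].split()
    if cmd == "roll":
        # The dice example: return a random number between 1 and 6.
        return str(random.randint(1, 6))
    return "Unknown command: " + cmd
```

A real bot would run this logic against the event stream of a Matrix user account; a manager such as Maubot takes care of exactly that plumbing.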
Maubot runs as a separate application which can be set up on any server. For convenience I would suggest running it on the same server as the Matrix homeserver (where the room was created), which is what I have done during the development of the shared calendar prototype. Maubot comes with some official and third-party plugins [9], but it is designed to allow developing your own custom plugins (written in Python), which is exactly what we are looking for. 5.1 Maubot Manager Setup As mentioned, the Maubot manager can be run on any server, since a bot is basically a normal Matrix user. The Maubot manager communicates with the homeserver using the Client-Server API [5], which allows for convenient security features (see 5.5). This also means that the Maubot manager is not dependent on the homeserver implementation: it works with any homeserver. In principle anyone could set up a Maubot instance, whether a random user or the room administrator. On the other hand, the room administrator can simply restrict this by (not) allowing a user - that is, a bot - to join their room. 5.2 Installing and Using a Plugin After setting up the Maubot manager (for instance using their Docker image [7]), installing and using a plugin is quite straightforward: 1. The first step is to create a Plugin by uploading a .mbp file to the server using the Maubot manager. As mentioned earlier, you could download an existing plugin and upload it, or develop and build your own. 2. Next, a Client has to be created in the manager. This Client is an object that is linked to a user from a certain homeserver. 3. Lastly, we create a so-called Instance. An Instance assigns a Plugin to a Client. Once the corresponding user has been added to a room, you can use the bot by calling the command, preceded by an exclamation mark. For instance, the earlier mentioned dice bot can be used by sending !roll. 
5.3 Developing a Prototype Creating a custom bot is very straightforward, and a simple bot can be developed in a matter of minutes. Depending on the complexity of the bot, this may obviously take longer. A Maubot bot consists of two components: a maubot.yaml file and some Python modules, which you point to in the yaml file. The yaml file also indicates whether or not your plugin requires a database. In our case we need a database to store the calendar items (appointments). For my testing environment I set up my own homeserver using Synapse [11]. I then downloaded Element for macOS and set it up with my homeserver. Lastly, I also installed the Maubot manager using the Docker image. Step 1 of developing the calendarbot - as we will call the plugin from now on - is creating maubot.yaml. This looks like the following:

```yaml
maubot: 0.1.0
id: pubhubs.calendarbot
version: 1.0.0
modules:
- calendarbot
main_class: CalendarBot
database: true
```

The above configuration tells Maubot to find the `calendarbot` Python module and look for the `CalendarBot` class inside of it. It also tells Maubot that our plugin requires a database. Now let us take a look at the `CalendarBot` class\(^1\) in appendix A. For the (very) basic prototype that I have developed, I created two commands: 1. `!create_appointment TITLE DATETIME [PARTICIPANT]` This command can be called by any user and will create a database entry with the specified details. The last argument (PARTICIPANT) is optional and may be used to "share" an appointment with another user. The command also automatically stores the `user_id` of the user who created the appointment and the `room_id` of the room the appointment was created in. 2. `!get_appointments` As the name suggests, this command retrieves all appointments of the user who calls it. It only returns the appointments that were created in the room that the command is called from. 
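The actual `Database` class lives in the thesis repository [3]; the hypothetical sketch below only illustrates the two operations the commands rely on, using a plain sqlite3 connection instead of the handle Maubot injects. All names here are illustrative, not the real implementation.

```python
import sqlite3
from typing import Optional


class Database:
    """Hypothetical minimal version of the Database helper the bot delegates to."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS appointments ("
            "  user_id TEXT, room_id TEXT, title TEXT,"
            "  datetime TEXT, participant TEXT)"
        )

    def create_appointment(self, user_id: str, room_id: str, title: str,
                           datetime: str, participant: Optional[str]) -> None:
        # The sender and room are stored alongside the appointment details.
        self.conn.execute(
            "INSERT INTO appointments VALUES (?, ?, ?, ?, ?)",
            (user_id, room_id, title, datetime, participant),
        )

    def get_appointments(self, user_id: str, room_id: str) -> str:
        # Filter on both sender and room, so only this user's appointments
        # from this room are returned.
        rows = self.conn.execute(
            "SELECT title, datetime FROM appointments"
            " WHERE user_id = ? AND room_id = ?",
            (user_id, room_id),
        ).fetchall()
        return "\n".join(f"{title} at {dt}" for title, dt in rows)
```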
This way, all appointments can be stored in the same database table, while keeping the appointments separated between rooms. ### 5.4 Sustainability Now that we have implemented a (simple) prototype for a shared calendar, let us review the sustainability of Matrix bots. As we have seen, Matrix bots are relatively simple to implement once the Maubot manager is set up. Besides very rapid development, the plugins are also very easy to maintain. After a change, the new version of the plugin can simply be uploaded to the Maubot manager. The only thing required for the changes to take effect after that is a reload of the `Instance`. ### 5.5 Security From a technical perspective, bots do not add anything to Matrix or the client. They are simply users that listen for certain messages and respond accordingly. Therefore, the security of bots is the same as when two users exchange "normal" messages. Bots are compatible with end-to-end encryption, as long as the Maubot manager runs in an environment that has some dependencies installed [8]. These mainly include Python bindings for the Olm C library. \(^1\)Note: this class depends on the `Database` class from `calendarbot.database`. It is too extensive to include here, but can be found on my Github repository that I made for this thesis [3]. Because of the end-to-end encryption between users/clients, the communication between a user and the bot (which is technically also a user) is authenticated and confidential (see section 2.2.2). The calendar entries are stored in plain text, however, like all regular messages in regular clients such as Element. To retrieve calendar entries, the sender ID and room ID are checked so that only the correct entries are returned. 5.5.1 Other Security Considerations Since multiple bots (Plugins) can live on the same Maubot manager instance, it is important to realise that they all use the same database. 
This means that caution is required when installing multiple Plugins: checking the source code for malicious database queries is probably a good idea. I would even suggest not installing multiple Plugins that use a database on the same Maubot manager, to prevent malicious database queries. 5.6 Conclusion Overall, bots have a lot of advantages, especially from a security point of view, since they have the same security features as a normal Matrix user. Moreover, the relatively little development time needed to implement a plugin using the above method is a big plus. The simple prototype we developed proves this point. Chapter 6 Conclusions During the course of this thesis, we have looked at three different methods for extending the functionality of Matrix and its clients. The initial attempt - altering the source code of Element - quickly turned out to be infeasible within the time frame of this research. Moreover, altering source code is such a complicated job that it is unsuitable for implementing various new plugins. Next, we explored Matrix widgets. Although widgets definitely have some promising features - such as good maintainability and quick development - they still lack a good and secure API that makes it possible for a widget to communicate with the room. For the time being, they are not quite suitable for developing sustainable plugins. However, it does seem worthwhile revisiting this method in the future. Lastly, we investigated Matrix bots. The simplicity of bots makes them a surprisingly good option for extending Matrix client functionality. Due to the fact that bots are technically regular Matrix users, they benefit from the same security features. For instance, the communication from the user to the bot (another user) is end-to-end encrypted. Additionally, we have seen that bots are incredibly easy to implement and maintain. 
When considering both security and sustainability, we conclude that developing bots is the most advisable method to add functionality to Matrix. 6.1 Future work As you may have noticed, this research did not focus on user-friendliness at all; at the beginning of this research we decided to move that out of scope. Bots are undoubtedly not a great choice from a usability point of view. That is why I would suggest that future research focus on how to improve the usability of bots, for example by making the "commands" selectable from a list in the client. Bibliography [6] Matrix widget API v2. https://docs.google.com/document/d/1uPF7XWy_dXTKVKV7jZQ2KmsI19wn9-kFRgQ1tFQF7wQ. Appendix A CalendarBot Code Below you can find the CalendarBot Python class.

```python
from maubot import Plugin, MessageEvent
from maubot.handlers import command

# The Database helper lives in calendarbot/database.py (see the footnote in
# section 5.5 and the thesis repository [3]).
from .database import Database


class CalendarBot(Plugin):
    db: Database

    async def start(self) -> None:
        await super().start()
        self.db = Database(self.database)

    @command.new("create_appointment")
    @command.argument("title", required=True)
    @command.argument("datetime", required=True)
    @command.argument("participant", required=False)
    async def create_appointment(self, event: MessageEvent, title: str,
                                 datetime: str, participant: str) -> None:
        self.db.create_appointment(event.sender, event.room_id,
                                   title, datetime, participant)
        await event.reply("Successfully saved appointment '" + title + "'")

    @command.new("get_appointments")
    async def get_appointments(self, event: MessageEvent) -> None:
        appointments = self.db.get_appointments(event.sender, event.room_id)
        await event.reply("Appointments:\n" + appointments)
```
EXTREME PROGRAMMING VS SCRUM: A COMPARISON OF AGILE MODELS Asma Akhtar, Birra Bakhtawar, Samia Akhtar Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan asmaakhtarjanjua@gmail.com, birra.bakhtawar@gmail.com, samiaakhtar9898@gmail.com ABSTRACT For the past couple of years, agile software methods have been quite popular among researchers. Agile models are known as lightweight in contrast with conventional software development methodologies, due to their casual, versatile, and adaptable style. Agile frameworks have been heartily accepted by the software community in view of their focus on timely software delivery, product quality, and user satisfaction. Multiple agile frameworks are available to choose from to fulfill the requirements and needs of different software projects. Of these models, Extreme Programming and Scrum are the most recognizable and most widely utilized frameworks. This research contributes by investigating these two frameworks thoroughly. This paper conducts a comprehensive comparison between Scrum and Extreme Programming to track down their commonalities and dissimilarities, and to investigate the attributes that complement each other. Keywords: Extreme Programming, XP, Scrum, XP vs Scrum 1. INTRODUCTION Software development methodologies are utilized for creating all kinds of projects, whether small and easy or big and complex. These methodologies minimize the risk of project failure. They are very helpful in developing software in sectors such as academia and industry [1]. Several different software development methods exist, such as waterfall, spiral, and agile [2][3][4][5]. Due to the benefits and attributes presented by agile methodologies, agile frameworks have taken the place of traditional models in the software community [6][7][8][9]. These attributes tackle the necessities of present-day software development. 
The scholars, analysts and software practitioners blended the best practices and features to overcome the downfalls of traditional models [10]. To produce a good quality product, agile frameworks deliver the software considerably fast [11]. Agile frameworks are iterative and incremental in nature; hence users' requirements are fulfilled by constantly delivering fragmentary and partially done software [12][13][14]. Any change in the client's requirements is also easily handled during any phase of development. These models have proven effective for handling the changing business environment [15][16][17][18]. Agile methodologies have been admired for quite some time now, as they are simple, adaptable, and perfectly suitable for today's software development requirements. They have proved to be quite resourceful in various fields like business and the education system [18]. Feature driven development [19], Adaptive software development, Dynamic systems development method, Lean software development, Scrum and Extreme programming [2] are examples of agile models. These agile methods also include a variety of techniques, such as test-driven development [20]. The most utilized agile frameworks are Scrum and Extreme Programming, particularly for limited-scope projects. Extreme programming has six phases in its development process, during which twelve best practices of software engineering are used. Extreme programming utilizes these practices to achieve better quality software [21]. Changes in requirements can be easily managed due to its adaptable, lightweight and iterative nature, even in the last stages of development [22][13]. Great interest has also been observed in customizing XP, as it is simple and flexible [23]. An example of such customization is Scaled Extreme Programming [24]. XP also assists the developers to consistently recognize and work on the software artifacts with higher priorities [15]. 
Scrum is an iterative project management model which provides a straightforward 'inspect and adapt' system [25]. In this model the software is delivered in increments known as Sprints. The Scrum model is very useful for projects that are complex, especially the ones with requirements and essential technology that are still immature [26], and out of all the agile models, Scrum is the most frequently used [27][5][28][29]. It is a very popular model, adopted all around the world. Scrum was also found to be widely used at Microsoft [30]. Like XP, Scrum can also be customized [10]. Many hybrid models of Scrum have been developed within the agile family [6]. These two agile frameworks, XP and Scrum, have quite a few similarities and differences. This study is directed at investigating and analyzing them thoroughly. This examination gives a deep understanding of these two approaches that will be quite useful for developers and analysts. A systematic research process is performed in this paper for the comparative study of XP and Scrum. To perform a successful systematic research process, an appropriate research method is required which can assist in accomplishing all the research objectives. Multiple studies have been conducted which provide guidelines for the systematic research process [10][23][31]. A systematic search strategy was formed based on these studies and adopted in this paper. Generally, a systematic research process consists of 3 fundamental steps: plan, conduct and document. The steps included in our research methodology are as follows: research questions are defined, keywords are stated to form the query string, the research space is specified to collect data, inclusion and exclusion criteria for articles and studies are defined, literature is extracted based on these criteria, the quality of the studies is assessed, the required data is synthesized, and lastly the results are documented [32]. 
The remainder of the paper is divided into the following sections: Section 2 explains Scrum, Section 3 describes Extreme programming extensively, and Section 4 presents a detailed comparison of both models. 2. THEORETICAL FRAMEWORK In 1995 Jeff Sutherland and Ken Schwaber came up with an iteration-based software development framework. Later, it came to be known as Scrum when Ken Schwaber and Mike Beedle released a book called "Agile Software Development with Scrum" in 2001. Scrum tackles the loopholes of previous frameworks effectively, and each of its releases reflects changing customer demands. The overall process is completely transparent, well inspected, and easily adaptable. This strong check-and-balance system assures quick elimination of anomalies and timely results [33]. The product is released in phases called Sprints. The duration of a sprint is 30 days or less. This model consists of 3 roles, 4 ceremonies and 3 artifacts. 2.1 Scrum Phases There are three phases: Pregame, Game and Postgame [34]. Pregame: Pregame defines the tentative vision based on customer expectations and market demand. This vision is continuously modified throughout the later process. The main purpose of this initial phase is to create a Product Backlog, which holds a list of functional and non-functional requirements [33]. Other data which needs to be reported are the time and cost estimation, plus the number of releases and the expected delivery date. For all these things, you need people, tools and funds [35]. So, the backlog contains this information as well: it has the names of the members, the required development tools and the minimum finances needed to carry out each step of the plan. Fig. 1 shows the framework of Scrum. Game: The Game happens in Sprints. A Sprint is a one-to-four-week process in which you create, wrap, check and modify the product in question [34]. Postgame: In this phase, the final product is released, and the integration test is performed. 
Of course, this is after ascertaining that all the established requirements have been met. The user manual and training materials are also prepared to facilitate the user. 2.2 Sprint Cycle The product backlog sets the guidelines for the software [34]. The Scrum Master guides the team in its development phase and before the actual delivery of the product. The following steps are taken in the overall process. Sprint Planning and Daily Scrum: First, the Scrum Master and the product owner decide on the requirements. The priority tasks are filtered and guidelines for understanding the user needs are established. Next, the mission is handed to the team members after the requirements have been clearly communicated [36]. The self-driven, self-organizing and highly trained professionals, despite having zero involvement from the outside, are expected to answer the following questions on an almost daily basis [37]: - What is the progress so far? - Are there any deviations from the established sprint goals? - Are there any nuisances in the process? Of course, the goal is to keep a tight check on the overall performance without disrupting anybody's personal space, for quick and efficient results. 3. LITERATURE REVIEW Each sprint is developed and tested as per the guidelines and the established priorities, as communicated by the Scrum Master. The sprints are then reviewed by the Scrum Master and the product owner. A detailed inspection is carried out to check whether the product is in line with the customer requirements. Next, a meeting is held where thoughts and opinions between the two parties are shared in detail. Based on this discussion, the next course of action is decided. The team members might have to completely rethink and redesign the sprint if the owner deems it necessary. For a 1-month sprint, the maximum duration of a sprint review is 4 hours [38]. Having established the basic plan, the next meeting is all about the improvements that could be made. 
The two parties check for any loopholes and decide on the remedial measures. The Scrum team is the team established for product development. It consists of three roles: the product owner, the Scrum Master, and the team members [37]. The product owner is the master of all trades. He creates the project schedules, manages finances, negotiates with the stakeholders, and communicates their needs to the team. He has a fairly good idea of the market's strengths and weaknesses and understands his team [39]. Therefore, it is imperative that he has a strong grip on both the management and engineering fields. He stays in touch with the changing market dynamics and keeps his team informed about all the rising opportunities that can be beneficial for the project [40]. The Scrum Master is the team's master. He establishes some ground rules and makes sure everybody follows them [41]. He also keeps the overall working environment in check, protects the team from any outside intervention, and takes care of the interests of his members. He holds daily scrum meetings where work-related issues are discussed and practical solutions are offered [42]. The team generally has 3 to 9 members. Although the team size depends on the area of operations, most studies suggest that a team size of 7 +/- 2 has a higher success rate [38]. Each member is highly qualified and trained for the task assigned [43]. While each member has the freedom to choose his own task, the team must ensure smooth execution of the process from the initial phase to its final practical implementation in the market [44]. 3.2. Research Model In our research, we have developed the following model for a better understanding of the concepts used in the research. *Figure 1: Scrum Framework* 3.3. Variables Identification **Extreme Programming:** Extreme programming, an agile model, was invented by Kent Beck in 1996. 
He then presented his work on Extreme programming in a more sophisticated and advanced form in a book known as "Extreme Programming Explained". It is a quite simple, uncomplicated, and more adaptable methodology of development, with the capacity to oversee unclear, ambiguous, or quickly varying requirements [26]. This model is well suited for teams of small or medium size [45]. Extreme Programming is a set of values, principles and practices which are implemented in an orderly fashion [46]. Practices that proved useful in creating high-quality software were taken to the extreme, hence the name Extreme programming. This agile model greatly emphasizes user satisfaction. To tackle defects and errors at early stages, frequent reports and partially done software releases are necessary. A lower number of defects brings down the development cost and expenses, producing a high-quality product at low cost. An overview of Extreme programming is given in Fig. 2. **XP Phases:** There are six phases in the Extreme Programming development process: Exploration, Planning, Iteration to release phase, Production, Maintenance and Death phase. **Exploration Phase:** This stage is the first step of the Extreme Programming life cycle and manages the product's architectural development and requirements. This stage describes the client's requirements, the design and architecture, and the tools and software used. To schedule the release, a meeting is organized between the client and the developer. Story cards are used by clients to compose user stories that present software requirements. A user story card consists of the story's priority, a brief name, and one to two paragraphs of non-technical text [47]. The user stories are supposed to be comprehensive and very detailed, to assist the developer in better grasping and comprehending the system requirements and to further help with estimations. The time that is needed to execute a story is known as the time estimate [48]. 
In case a story takes more time to execute, the customer can break that story down into smaller fragments of stories. During an architectural spike, metaphors are formed for the purpose of modeling the architecture [49]. A metaphor does not fulfill all the criteria of a whole architecture; rather, it is just a structure containing objects and their interfaces. The exploration stage lasts from several weeks to several months. **Planning Phase:** The planning stage begins right after the exploration stage has completed, and this second phase fundamentally looks for the answers to two inquiries: Which parts that constitute business value can be formed within the deadline? What would be the strategy for the upcoming iteration? The planning stage requires merely one or two days to finish if the exploration stage went well. Using the user stories, tasks are drawn up and composed on task cards during planning. In Extreme programming, planning is known as the "Planning game", which is carried out in two parts, i.e., iteration planning and release planning [50]. **Release Planning:** The main target of release planning is to discover the attributes required by the system and the delivery plan of those attributes. Both the client and the developer take part in the meeting held for release planning. Release planning is further subdivided into three stages: "exploration phase, commitment phase and steering phase" [47]. Story cards are written by the clients to find out the much-needed system attributes [51]. All these required attributes are then arranged by their significance, and a smaller group of story cards is chosen for the latest release. This is a continuous procedure that can be modified by adding, erasing, combining, or breaking up stories [52]. **Iteration Planning:** Every iteration usually begins with iteration planning. At this stage the developers devise a plan of the tasks which implement the required attributes of the latest release. 
During this phase the developers choose the activities that are to be carried out and estimate the expense, time, and effort needed for each chosen activity [53]. These activities might be shared between the programmers to balance the workload. **Iteration to Release Phase:** This stage includes the fundamental development tasks: designing, coding, testing, and integration. Every iteration may take 1-4 weeks. In the very first iteration, the stories that shape the architecture of the product are chosen. A pair of developers executes the activities chosen for the current iteration: the developers choose an activity and then create a basic design before coding it [54]. Functional testing is the next step after coding is complete. If the code does not satisfy the requirements, it is refactored. Multiple iterations might be required before development ends. To monitor the development process and to discuss any problems that may arise, standup meetings are held. After the last iteration, the code is ready for the next phase [55]. **Production Phase:** Extreme Programming produces small releases of the software, as it is an incremental process. The continuous release system in this model lets the software be developed in iterations [56]. A release cycle may contain many iterations, and each iteration may last one to four weeks. This phase focuses on delivering the software in small releases [57]. During the production phase, developers slow the pace of system evolution to avoid risks. **Maintenance Phase:** Maintenance is a fundamental process for any software system. In Extreme Programming, the software is updated and modified continuously over some time span. At this stage, new functionality is developed while the previous version is still in use [58]. Alterations that might hinder production or cause issues with it are rejected. **Death Phase:** The death phase is the last stage of Extreme Programming. 
There are two ways to arrive at the death phase. In the first scenario, the software is released when it contains all the required functionality, the client is satisfied, and no stories remain. A record is kept in the shape of a short document of 5-10 pages in case it is needed in the future. In the second scenario, “entropic death” of the software occurs, and it is wise to cease development [59]. ### 4. DATA ANALYSIS #### 4.1 XP Practices Extreme Programming has 12 practices. These practices make Extreme Programming unique compared to other software frameworks. During development, these twelve practices are applied in keeping with the principles of Extreme Programming [60]. **Planning Game:** For planning, the requirements of the system are gathered on story cards. During the planning game, the roles and size of the team, the schedule, and the overall plan are laid out [61]. This practice is carried out in two parts: release planning and iteration planning. **Small Releases:** Each delivery implements a small set of requirements that carry business value [47]. These small releases make the product accessible to the client for analysis and inspection. **Metaphor:** This is the structural and architectural plan of the system that depicts how the system should function. It is a vital aid for the developers' understanding of the system [62]. **Simple Design:** This practice is an effective way to develop the fundamental required functionality of the system while steering clear of less important detail. It centers on the currently required attributes. **Continuous Testing:** Continuous testing gives speedy input and response. Unit testing and acceptance testing are utilized persistently by Extreme Programming. **Refactoring:** Refactoring is restructuring the system without altering its behavior [61]. Developers use this activity consistently to improve code quality. 
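The Continuous Testing and Refactoring practices above can be sketched together: a unit test is written first, a quick implementation makes it pass, and a later refactoring restructures the code while the unchanged test guards its behavior. The function names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of continuous testing plus refactoring
# (function names are invented for this example).

# The unit test is written first and acts as a safety net.
def check_total_price(fn):
    assert fn([1.5, 2.5, 6.0]) == 10.0
    assert fn([]) == 0.0

# First pass: a quick implementation written just to make the test pass.
def total_price_v1(prices):
    total = 0.0
    for p in prices:
        total += p
    return total

# Refactoring: restructure without altering behavior; the same unit
# test must still pass, which is what makes frequent refactoring safe.
def total_price(prices):
    return float(sum(prices))

check_total_price(total_price_v1)  # passes before refactoring
check_total_price(total_price)     # still passes after refactoring
```

Because the test encodes the expected behavior rather than the implementation, it stays valid across the rewrite — this is the mechanism by which XP's constant refactoring avoids introducing defects.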
**Pair Programming:** In Extreme Programming, two software engineers code at the same machine. Pair programming is utilized to build the best-quality software at a cheaper cost [63], since many defects are caught and rectified within a short span of time by the second programmer. **Collective Ownership:** The concept of collective code ownership is that any developer can access any piece of code whenever they wish to modify it [64]. Granting this right of review to multiple programmers boosts the software's quality. **Continuous Integration:** By the time each task is finished, system integration is performed along with testing, which lowers the chance of integration issues and further enhances quality [65]. **40-Hour Week:** Extreme Programming defines a standard that programmers are not allowed to work more than 40 hours a week. Extra working hours are discouraged by this agile model, since exhausted and fatigued developers are likely to make more errors. **On-Site Customer:** A representative of the client is included in the Extreme Programming team and stays on site throughout the process. This representative is a specialized professional who has the authority to choose the required attributes of the system, respond to questions, and lead the development process. Being on site helps reduce communication issues. **Coding Standards:** There are a few coding standards that need to be followed in this agile model [66]. Because ownership of the code is collective and any developer can access and modify it, coding standards need to be applied. #### 4.4 Comparison Extreme Programming and Scrum are very popular agile frameworks. To view these models from different angles, a thorough comparison is made based on various factors and attributes. For this purpose, [67] was considered and consulted. 
This comparison is shown in Table 1. <table> <thead> <tr> <th>Extreme Programming</th> <th>Scrum</th> </tr> </thead> <tbody> <tr> <td>Iterative and incremental</td> <td>Also iterative and incremental</td> </tr> <tr> <td>Small project size</td> <td>All project sizes</td> </tr> <tr> <td>One team of 2 to 10 members</td> <td>More than one team, each with fewer than 10 members</td> </tr> <tr> <td>Team activities: planning game, pair programming, collective code ownership, etc.</td> <td>No team activities</td> </tr> <tr> <td>Sprint duration is 1 to 3 weeks</td> <td>Sprint duration is 4 weeks</td> </tr> <tr> <td>Stakeholders are involved throughout the process</td> <td>Not specified</td> </tr> <tr> <td>Communication is oral, through standup meetings</td> <td>Also oral, through the Scrum meeting</td> </tr> <tr> <td>No project management</td> <td>Project management is present</td> </tr> <tr> <td>Co-located teams</td> <td>Not specified</td> </tr> <tr> <td>Focuses more on engineering factors</td> <td>Focuses on management and productivity</td> </tr> <tr> <td>Quick response to change</td> <td>Same as Extreme Programming</td> </tr> <tr> <td>User stories and on-site customer practices are used for requirements gathering</td> <td>Not specified</td> </tr> <tr> <td>Differences between requirements are not specified</td> <td>Not specified</td> </tr> <tr> <td>Little documentation; no upfront design document</td> <td>Same as Extreme Programming</td> </tr> <tr> <td>Development order is specified by the customer</td> <td>Development order is specified by Scrum</td> </tr> <tr> <td>Adaptive development style</td> <td>Also adaptive</td> </tr> <tr> <td>The whole team has access to the code</td> <td>Not specified</td> </tr> <tr> <td>Alteration during iterations is allowed</td> <td>Alteration during iterations is not allowed</td> </tr> <tr> <td>Feedback can be given in minutes to months</td> <td>Feedback can be delivered in over a month</td> </tr> <tr> <td>Unit, integration, and acceptance testing are performed</td> <td>Not specified</td> </tr> <tr> <td>No structured review meetings</td> <td>Same as Extreme Programming</td> </tr> <tr> <td>Functional and acceptance testing are performed for validation</td> <td>Not specified</td> </tr> <tr> <td>A test-first approach is used for QA</td> <td>Not specified</td> </tr> <tr> <td>Coding standards are properly defined</td> <td>Coding standards are not defined</td> </tr> <tr> <td>Software configuration practices are not defined</td> <td>Software configuration practices are not defined either</td> </tr> <tr> <td>No support for distributed projects</td> <td>Not defined</td> </tr> <tr> <td>No process management</td> <td>No process management</td> </tr> </tbody> </table> ### 5. DISCUSSION ON RESULTS #### 5.1 XP Values When Extreme Programming practices are applied, five values are emphasized: simplicity, respect, courage, communication, and feedback. *Simplicity:* This agile model keeps things simple: a simple design, simple code, and a simple plan. 
Simple solutions are preferred, and no additional functionality is added unless the customer specifies otherwise [68]. The plain and minimal iterations of Extreme Programming help avoid the risk of the project losing focus. *Communication:* Among the team members, Extreme Programming relies on continuous and active communication instead of documentation [69]. All team members and the customers continuously communicate on site to find more appropriate and budget-friendly solutions to the problem. *Feedback:* Extreme Programming uses feedback that spans different time scales. Rapid feedback about the developing software is provided by unit testing and integration testing, which are performed daily. Feedback and communication keep the project on the right path. A distinguishing feature of Extreme Programming, the presence of a customer on site, aids in getting fast feedback about the developing software. *Courage:* Courage is required for Extreme Programming practices. It is sometimes necessary to rewrite design or code that was completed with substantial effort. It also refers to making decisions that have never been made for the system before. *Respect:* Another major value of Extreme Programming, introduced in [47], is respect. Respect for other members and self-respect are important and make it possible to execute Extreme Programming. Respect for the work compels the developers to produce high-quality results [70]. #### 5.2 XP Roles Extreme Programming defines seven roles for the team members, along with the qualities and responsibilities they must exhibit within the team. **Programmer:** The most important role in the Extreme Programming team is that of the programmer. The main activity performed by the programmer in Extreme Programming is coding. All these tasks are performed by the programmer; hence there is no designer, architect, or analyst in this agile model's team. 
**Customer:** Another major member of the Extreme Programming team, who plays a dynamic role throughout the development process, is the customer. He writes stories, derives functional tests, and verifies those tests. **Coach:** A person acting as a coach should have management skills. His decision-making and communication skills help the team members stay on the right path. **Tracker:** The tracker's duty is to collect metrics about the project, such as the load factor and functional test scores. Every two to three days, the tracker gathers data from all the developers and records the time consumed on a task and the time still required to complete it. It is the tracker's duty to validate that the iteration and commitment schedules are realistic and can be met. **Tester:** The tester's responsibility is to conduct functional tests, aid the customers in writing them, and verify them [48]. The tester has relatively little to do, since unit testing is performed by the developers in Extreme Programming. **Consultant:** The Extreme Programming team does not have a specialist; however, when the team needs technical guidance from an expert, a consultant can be hired for a specific period. Two or more developers discuss the solution of the problem with the consultant in a meeting. **Big Boss:** The big boss is a coordinator responsible for building the team and providing the required resources, equipment, and tools for the project. The big boss needs to show determination while supporting the team's decisions [49]. ### 6. CONCLUSION AND RECOMMENDATIONS Extreme Programming and Scrum are well-known agile frameworks that are broadly utilized for small-scale projects. These frameworks use the best practices of the agile industry for software development. This research gives a detailed description of the various stages, practices, and roles of these models. To provide a better understanding of these frameworks, a comparison is also carried out [53]. 
This detailed comparison can be useful for researchers interested in such agile frameworks. The comparison reveals that these models have many similarities and differences. A few of their dissimilar attributes complement each other, which invites experimentation with combining Scrum and Extreme Programming for high-quality software development. ### REFERENCES [30] B. Murphy, C. Bird, T. Zimmermann, L. Williams, N. Nagappan, and A. Begel, “Have agile techniques been the silver bullet for software development at Microsoft?,” in *2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement*, 2013, pp. 75-84. [58] N. N. Alnazer, M. A. Alnuaimi, and H. M. Alzoubi, “Analysing the appropriate
A Platform for Teaching Applied Distributed Software Development

Fagerholm, Fabian. Institute of Electrical and Electronics Engineers, 2013.
http://hdl.handle.net/10138/40085
https://doi.org/10.1109/CTGDSD.2013.6635237

Downloaded from Helda, the University of Helsinki institutional repository. This is an electronic reprint of the original article; it may differ from the original in pagination and typographic detail. Please cite the original version.

A Platform for Teaching Applied Distributed Software Development: The Ongoing Journey of the Helsinki Software Factory

Fabian Fagerholm*, Nilay Oza†, Jürgen Münch‡
Department of Computer Science, University of Helsinki
P.O. Box 68 (Gustaf Hällström katu 2b), FI-00014, Finland
*fabian.fagerholm@helsinki.fi, †nilay.owa@cs.helsinki.fi, ‡juergen.muench@cs.helsinki.fi

Abstract—Teaching distributed software development (DSD) in project courses where student teams are geographically distributed promises several benefits. One main benefit is that, in contrast to traditional classroom courses, students can experience the effects of distribution and the mechanisms for coping with it themselves, and therefore understand their relevance for software development. They can thus learn to take more care of distribution challenges and risks when they start developing software in industry. However, providing a sustainable environment for such project courses is difficult. A development environment is needed that can connect different distributed teams, and an ongoing routine for conducting such courses needs to be established. This article sketches a picture of the Software Factory, a platform that supports teaching distributed student projects and that has now been operational for more than three years. We describe the basic steps of conducting Software Factory projects and portray experiences from past factory projects. In addition, we provide a short overview of related approaches and future activities. 
Index Terms—Global software development, distributed software development, education, Software Factory. I. INTRODUCTION The Software Factory is an experimental laboratory that provides an environment for research and education in software engineering, and that was established by the Department of Computer Science at the University of Helsinki [1]. Since the first project in 2010, the Software Factory has been used as a platform for teaching software engineering in close collaboration with industry. The goal is to provide students with a realistic environment in which to integrate previous knowledge of computer science and software engineering with experiential insights about conducting real software projects. Close customer involvement, intensive teamwork, and the use of modern software development tools and processes add realism and working life relevance for the students. The Software Factory’s particular educational focus is teaching global software engineering. Students benefit by learning particular skills that are relevant to globally distributed software development. The Software Factory concept can and has been replicated in other locations, forming a growing network for research and education. Although the Software Factory concept started initially with collocated projects, the Software Factory aims to conduct mainly distributed projects. Today, several companies and universities have established the Software Factory concept on their premises and several distributed projects have been performed (e.g., a pilot project among Finnish universities, a project with Spain-based Indra Sistemas and Technical University of Madrid on intelligent power grids, and a large-scale open source collaboration project with Stanford University, Facebook, and other academic partners worldwide). II. EXPERIENTIAL AND PROJECT-BASED LEARNING IN SOFTWARE ENGINEERING Teaching DSD can be done in many different ways. 
Classical classroom teaching is usually limited to transferring knowledge about methods or techniques and conducting exercises. Typically, the exercise examples are unrealistically small, and it is difficult to show the complexity of distributed projects in such settings. In the context of real software development projects, students can experience the effects of such methods and techniques (e.g., the risk of making wrong assumptions without appropriately documented code) and see their practical relevance. Studies have shown that such experiences can also lead to performance improvements on the individual as well as the team level [2]. Involving students as subjects in experiments (e.g., [3]) or using simulators for teaching are other alternatives that allow students to experience or explore the effects of software development techniques. In the area of DSD, several efforts have been made to provide realistic project environments for distributed student projects: The Siemens Global Studio Project [4] was one of the first projects that aimed at learning from the collaborative development of student teams in a distributed project environment. In contrast to the Software Factory, the focus of the Global Studio Project was on conducting the project itself, whereas the Software Factory focuses on the development environment and the respective processes needed to operate the environment. The Software Factory has shorter project cycles and faster iterations than the Global Studio Project. The DOSE course [5] embeds a distributed project in an overall course on teaching distributed development. Compared to the Software Factory, DOSE puts less emphasis on the laboratory setting but has a stronger focus on teaching specific techniques, such as API design. Several other approaches and frameworks for teaching DSD via student project settings have been reported in the literature. 
Damian, for instance, describes a framework that uses Scrum practices to teach distributed development [6]. Fortaleza et al. provide a comparison of 19 global software engineering courses [7]. Student and teacher experiences with distributed development courses have been reported in many different ways (see, for instance, [8]). All these reports have influenced the design of the Software Factory approach, as described subsequently. III. THE SOFTWARE FACTORY APPROACH A Software Factory project is an advanced master’s-level capstone project course at the Department of Computer Science, University of Helsinki. Student participants work in the Software Factory facility for an average of six hours per working day, and can choose between four or five working days per week. Projects last seven weeks and use agile software development methodology to rapidly produce a functionally complete software prototype in cooperation with an external customer. Software Factory projects are conducted in a manner that simulates, as closely as possible, the reality of software development in new product development organizations. The model is either a small software development company or a division of a large corporation. Some projects, however, may include continued development of existing software, and code reuse, e.g., through open source components, is encouraged where applicable. The projects are conducted in a laboratory setting: a standardized but customizable development environment with a specified physical design (i.e., an interior design pattern with specific furniture and equipment, and different activity-related zones), a defined technical infrastructure, and a comprehensive experimental infrastructure that includes instrumentation for performing empirical studies. 
From an educational perspective, the Software Factory provides students with a realistic experience that serves to integrate their previous theoretical and practical knowledge with working life relevance in order to develop higher-order skills. Students are expected to take responsibility for the entire project, including project management, customer communication, iterative requirements solicitation, continuous development process improvement, and, naturally, the software development itself. The approach also allows teachers to supplement the Software Factory projects with other courses on top of the Software Factory activities; examples include courses on software project leadership, project management, group dynamics, software architecture, and software processes, as well as intensive courses on technical topics, such as version control, programming languages, and testing. While Software Factory projects include much of the uncertainty and open-endedness of real software projects, an established driving process is always in place to provide a frame within which the projects are conducted and learning can occur. The process can be divided into pre-project, per-project, and post-project stages. The different stages of this process are described subsequently. A. Pre-project Activities The pre-project activities aim to reach a defined state in which the project can be handed over to the implementation team. The project’s prerequisites must be fulfilled, but only to the extent necessary for the implementation team to take over. In distributed projects, pre-project activities are especially important. Once a project has started, it is often not feasible to make major changes to schedules and allocation of personnel resources. At the same time, the exact details often depend on experiences gained during the start of the project. An overview of the pre-project process is shown in Figure 1. 
1) Project Screening and Selection: From the project perspective, the most important pre-project activity is the selection of a project idea and the initial work that prepares the customer to interact with the implementation team. This activity can be considered a project portfolio management task. Proposals may arrive through multiple means, including direct contact with the Software Factory staff or through an online project proposal form on the Software Factory web site (http://www.softwarefactory.cc/). The first step is screening, where the minimum prerequisites are evaluated and feedback is given directly to the potential partner. This step is carried out continuously as proposals arrive. Selection proceeds by considering project proposals that have passed the screening stage. Here, the Software Factory works much like a large software development organization: projects are considered in terms of their feasibility, maturity, and contribution to organizational goals, which in this case stem from both educational and research needs. Proposals are either accepted or postponed. In the case of acceptance, the partner is asked to produce more detailed material to be used as the first high-level requirements description. Postponed projects are reconsidered for the following cycle, and the partners may update or withdraw their proposals. A particular challenge in this phase is how to screen project proposals from remotely located customers. The ability of the customer to communicate in such an environment is of critical importance. 2) Enrollment: An enrollment stage is used to screen students before admission. During this stage, minimum admission requirements are checked. Students are assessed so that a selection can be made in case the number of eligible applicants exceeds the project's capacity. Another objective of this assessment is to match the students' skills to the project's known needs. 
Consequently, a skilled, motivated, and competitive team of 5-15 students is formed. In some cases, where multiple projects have been selected for simultaneous implementation, students may be divided into multiple teams. 3) Administrative Issues: Once the team composition is known, several administrative issues must be handled. These vary between universities but may include ordering keys for the Software Factory facility and setting up user accounts for the technical infrastructure. In a global setting, this phase is particularly challenging, with varying processes among university IT departments. These departments are often not prepared to provide services to parties outside their university. Local policies may also interfere, and student teams must spend time working around technical and policy issues. A standardized lab environment overcomes many of these problems, as remote students and customers can be granted access rights to the systems in a uniform way. 4) Start: Project Kickoff: We have developed specific project kickoff activities for getting the project team up to the speed and style of real-life software development. We emphasize self-directed learning practices, which help students realize that they are expected to take initiative and engage in the project. This requires changing the students' mind-set away from the familiar lecturing style, where the initiative is teacher-driven and based on presentations and instructions. Rather, we present the project as an open-ended learning problem where the students must seek the information they need to solve the problem and its parts. Understanding the problem itself and evaluating the solution are parts of the goal. Another aim of the kickoff is to direct the students' attention to the needs of the customer, and the customer's attention to the students as the primary point of contact for getting things done in the project. 
This ensures that communication between the team and the customer is direct, and that it is initiated spontaneously when either party observes a need. With multiple teams in different locations, there are a number of options available for conducting the kickoff. In practice, we have had the most positive experiences by arranging a co-located event at the start of the project. This is also an important lesson to learn: meeting in person can reduce the barriers for continued communication online. When this is not possible, teachers may want to carry out the activities using online tools. In this case, it is important that each site has a local instructor who facilitates communication and encourages students to engage with their remote team mates and not only with the co-located ones. In practice, the exact implementation of the kickoff activity can vary, but it always has the following three elements, which are based on the Extreme Apprenticeship (XA) method and its three stages of modeling, scaffolding, and fading [9]. Modeling is grounded in material that the customer brings to the project. This includes verbal descriptions, diagrams, written documents, and any other material that the customer chooses. Administrative material provides the organizational constraints for the project. The teacher provides the necessary scaffolding by directing how the material is processed. The activity proceeds from individuals to the whole group. Tasks are given first to individuals or pairs, and then gradually, larger subgroups work on larger parts of the problem space. Finally, the team and customer representatives work as a single group to define a first sketch of the whole project. The teacher’s involvement decreases during the process. In the beginning, explicit instructions are given. Gradually, the teacher fades into the background until he or she only provides support and instruction when asked. 
At the very end of the activity, the teacher encourages the students to work in a similar way throughout the project. The teacher also solidifies all of the participants’ beliefs that they can reach a meaningful outcome for the project. Perhaps most importantly, the teacher explicitly transfers responsibility for the project to the students and customer representatives. The teacher then assumes a supporting role and is available on demand, but can intervene if necessary. Another common element is the use of agile software development methodology, specifically, the so-called Scrum process [10], [11]. This process combines many of the Scrum practices with the visual Kanban planning board. The use of this method is gradually introduced, first through an example. However, the value of this method for a project is only visible once there are actual tasks to perform. The process is linked to the previous exercise in order to introduce a systematic element to the cycle of discovery, requirements specification, implementation, and evaluation. In global projects, the physical Kanban board can be replaced by online variants or omitted. A local board may be used to facilitate local work, but some additional effort is then required to synchronize information to remote teams. B. Per-project Activities Since each project varies considerably in the project team, customer, topic, and technology choices, the common per-project activities are fairly general. We have found it beneficial to maintain a regular cycle with weekly customer meetings where the team demonstrates the current state of the software and the customer gives direct feedback to steer the next cycle. Online demonstrations should be well prepared. Screen sharing or other technical means should be used to give the customer an opportunity to conduct interactive demonstrations. In addition to weekly meetings, we follow the Scrum practice of daily meetings.
Team members briefly answer three questions: 1) “What have you done since the last meeting?” 2) “What are you planning to do until the next meeting?” and 3) “Are there any obstacles preventing you from carrying out your work?” However, we also acknowledge that for some project stages and some kinds of tasks, daily reporting may be too frequent, and in these cases, the meetings do not have to follow this exact format as long as they fulfill the spirit of efficient information-sharing. Special care should be taken when holding weekly meetings online. Varying image and audio quality may introduce communication overhead. In our experience, successful online meetings require both a meeting moderator who keeps the pace and structure of the meeting, and on-site technical support to ensure that all participants can hear and see each other. In many cases, it may prove more effective to conduct such meetings over text chat. In any case, our experience shows that a separate local meeting is often needed to discuss more intricate details. Finally, in order to provide the team with access to relevant information, we invite the customer to be available frequently for free-form discussions with the team. This must be balanced with enough time for the team to focus on actual implementation. Through these interactions, the team can access the customer directly for key decisions, and can learn the skill of iteratively soliciting requirements. Encouraging the customer to be available online regularly is a good way of enabling this free-form communication. C. Post-project Activities Apart from administrative tasks, such as closing accounts, returning room keys, and other such matters, what remains from the educational perspective is to properly debrief project participants in order to engage the whole group in reflection. A summative assessment of the students is also performed. The debriefing session is conducted differently depending on the events during the project and its outcome.
Generally, a good approach is to analyze the project through a time-line, where the students and customer representatives recall the phases of the project chronologically. The teacher asks open-ended questions to encourage the participants to reflect deeply on causes and effects and different interpretations. As with the daily meetings, we find that subtle but important details may be lost in online communication. Therefore, we always arrange a local debriefing session for our students. Ideally, this event would be co-located, but we have so far not explored this in practice. Finally, summative assessment of experiential learning is a challenge in itself, and is outside the scope of this paper. We note that peer assessment can provide students with an opportunity to reflect on their role in and performance on the project. IV. EXPERIENCE AND RESULTS While there are several challenges involved in conducting Software Factory projects, we find the overall results to be encouraging. By employing a systematic driving process, we have been able to reduce the administrative burden, and have allowed the teachers to focus on the educational aspects of the projects. In this section, we report on particular experiences with specific projects. A. Sustainability One of the important aspects of an endeavor such as Software Factory is sustainability. Our initial investment in properly planning the overall setup and consulting with all relevant stakeholders, including companies, students, and researchers, helped us to develop a course that has minimal overhead and maximum support from all stakeholders. A particular challenge is how to sustain continued development. This requires strong support from the department as well as funding for personnel and equipment maintenance. We believe that our adaptive approach has been a key factor in both gaining support and utilizing existing funding effectively.
Our results with making the environment systematic without making it static show that the financial requirements can be scaled up and down while still keeping the educational value intact. B. Globally Distributed Projects With Remote Teams, Customers and Technology We have worked with distributed partners including off-site customers, development teams from other factory nodes, and also distributed and remotely located technology infrastructure. Software Factory has helped students and researchers understand new levels of complexity in distribution – in relation to people, technology and processes. As an example, we recently conducted a joint project between the Helsinki and Madrid teams (the latter from the Technical University of Madrid and Indra Software Labs), where the Helsinki team joined an ongoing software product development project. Students gained a unique experience of working with a completely unknown team. They also developed hands-on experience of how to deal with cloud infrastructure, both from a technical perspective and an operational perspective: deciding on access controls and using a shared code base. Students also gained experience with keeping to a development process. Just deciding to use Kanban was not enough; students had to work quite a lot to better understand, negotiate, and fine-tune their approach to task assignment, allocation, and commitment. From our experiences, we can identify a number of challenges with conducting distributed educational projects with other universities. These often stem from the same underlying reasons that make professional distributed projects difficult: distances in time, location, and culture. Complete synchronization of teaching schedules is often impossible. We have attempted to turn this into a learning experience: it is common for distributed projects to have staged starts, with different locations starting at different times.
Another challenge is in student selection: each university applies its own prerequisites and standards in student selection, and therefore, there may be differing levels of skill in the different locations. We have chosen to accept this risk and attempt to mitigate it for our students by keeping our own selection baseline high and by including the overhead of handling differing skill levels in the project scope. Grading poses a final challenge to overcome, again with different standards at different universities. We have chosen to be inclusive in the grading, utilizing the perspectives of several project participants as grading material. C. Team and Student Considerations Our projects have relied heavily on our approach to building self-organized teams [12]. Being able to operate in such a team... is a learning goal in itself. Relying on self-directed students has also helped us a great deal in coordinating the whole course and keeping stakeholder communication efficient. A large number of our master’s-level students have past industry work experience, which helps in conducting professional software projects. We carefully match the students’ technical skills and experience with the needs of the project. D. Project Considerations All projects in the Software Factory undergo the highly iterative Scrumban development process in a cross-functional, self-organized team environment. We do not provide a project manager for the students. Instead, as previously noted, the teacher supervises the project, is available on demand, but can intervene if needed. In addition, a resident coach actively mentors the team and makes sure that the project lives up to its expected outcomes for the customer. The coach also frequently engages with the customers to help them interact with the team and focus on the underlying reasons for their wishes and on their choices regarding the next step towards the project goals. We have to be quite selective regarding which ideas we work on.
We tend to select projects that add concrete value to the customer’s offerings and try to avoid developing “unusable” prototypes with dubious value. This also encourages our customers to be quite active in their involvement during the project. In our experience, close customer participation is a critical success factor for producing a software product in seven weeks with a newly composed software team. This is a particular challenge when the customer is not collocated with the team. Extra effort is needed to ensure that the customer actively participates in the team’s work and provides the feedback necessary for rapid prototyping. E. Involving Students in Research Several researchers have utilized the Software Factory platform. Specific lines of inquiry have ranged from examining sources of waste in Scrumban to psychometric analysis of team behavior. We have found that these ongoing research efforts are also interesting for student participants, as they are keen to see results concerning their own projects. In some cases, researchers have been able to provide the project with empirically based real-time feedback on different aspects of the project performance. Also, some students have utilized the platform for empirical studies in their Master’s theses. We believe there are many opportunities for teaching empirical research skills in the context of the Software Factory. V. NEXT STEPS There are several ongoing developments to evolve the Software Factory concept. One is to use cloud services for development, management, and coordination of the projects. This concept is referred to as the “Cloud Software Factory” and has been partly implemented in Helsinki. Several organizations have networked and have already established or are establishing Software Factories. Current sites are in Helsinki, Oulu, and Joensuu (Finland), Bolzano (Italy), and two in Madrid (Spain). Sites are planned in Ostrava (Czech Republic) and Novi Sad (Serbia).
There are also prospects for sites in China, and collaboration with other European and North American universities. A future direction is to integrate empirical studies more systematically into Software Factory projects. The relatively short setup times for projects in the Software Factory currently make the planning of accompanying empirical studies difficult. However, several longitudinal studies are currently being conducted that are less sensitive to individual project schedules. Finally, customers could be involved in a wider sense. The Software Factory can serve to support experimentation with customer value. During the course of a project, prototypes (minimum viable products) would be developed and used by the customer to directly test value with end users. Based on experimental results, development goals might be adjusted. REFERENCES
Shalt2 - a Symmetric Machine Translation System with Conceptual Transfer ABSTRACT Shalt2 is a knowledge-based machine translation system with a symmetric architecture. The grammar rules, mapping rules between syntactic and conceptual (semantic) representations, and transfer rules for conceptual paraphrasing are all bi-directional knowledge sources used by both a parser and a generator. 1. Introduction Shalt2 is a research prototype of a knowledge-based, multi-domain/multi-lingual MT system. It has two predecessors, SHALT [3] (1982-90) and KBMT-89 [4] (1987-89), which brought us valuable lessons that we reflected in the design and implementation of Shalt2. As a result, Shalt2 has been designed and implemented as a symmetric MT system with frame-based conceptual representation. The advantages of a symmetric architecture coupled with a conceptual representation are obvious. First, it allows us to maintain single knowledge sources for both parsing and generation. We can automatically control the coverage and reversibility of these processes. Second, conceptual structures are a desirable interface for representing the meaning of sentences, machine-generated output (such as a diagnosis by an expert system), and expressions in graphical languages. AI-based approaches can provide powerful inference methods for identification of a (semi-)equivalent class of conceptual representations, which corresponds to a paraphrasing capability. Unlike interlingual MT systems, our approach relieves the parser of the burden of generating a unique meaning representation of all equivalent sentences, which becomes harder in proportion to the number of languages the system has to translate. 2.
Shalt2 Architecture and Knowledge Sources Shalt2 has five types of knowledge sources: a set G of syntactic grammar rules (including dictionaries), a set C of hierarchically defined conceptual primitives called concept definitions, a set M of mapping rules between syntactic and conceptual structures, a set P of conceptual paraphrasing rules, and a set E of cases (a structurally and conceptually disambiguated set of sample sentences). These knowledge sources are shared by three major processes: a parser, a concept mapper, and a generator. G should be provided for each language, whereas the set C should be defined for each application domain. M, P, and E should be provided for each pair of a language and a domain.† Figure 1 shows an overview of the Shalt2 architecture. †Our theory of multi-domain translation aims to compose a set of mapping rules efficiently when several domains are combined. It is expected that two sets of mapping rules for a single language will differ mainly in lexical mapping rules. 2.1 Syntactic Grammar Shalt2 currently has two English grammars (PEG and a less competent English grammar) and a Japanese grammar. The last two grammars are bi-directional grammars written in an LFG-like formalism called a pseudo-unification grammar (PUG) [10], and were originally employed in KBMT-89 [4]. PEG is not bi-directional, since it has too many destructive operations that build or encode record structures, but it is our primary English grammar for three reasons: (1) broad coverage, (2) ability to encapsulate structural ambiguities in a compact parse tree, and (3) compatibility with other NLP programs that use PEG to analyze English sentences. Our bi-directional English grammar is catching up with PEG and will eventually be able to replace it. The symmetric architecture of Shalt2, however, allows unidirectional knowledge sources and processes to be hooked into the system.
Their coverage and ability to parse or generate sentences can be measured in terms of the set of conceptual representations that they can relate to syntactic structures. Although the syntactic structures of PEG and PUG grammars differ, they are integrated into a single syntactic representation called Dependency Structures (DSs) [6]. Roughly speaking, a DS is a tree-like structure with a set of nodes and a set of arcs, which correspond to maximal projections of syntactic constituents and grammatical relationships, respectively.† Some grammatical formalisms such as Constraint Dependency Grammar [4] postulate DSs as syntactic representations to which a constraint satisfaction algorithm [8] can be applied in order to resolve syntactic/semantic ambiguities efficiently. In the following, we show a PEG parse tree and a PUG f-structure, which will have the same DS, for the simple sentence "insert a diskette into the drive."

†The English PUG grammar was originally written by Gates et al. [2] but was not bi-directional. The Japanese grammar was originally written by Mitamura and Takeda [2]. The Shalt2 versions of these PUG grammars have been modified considerably to allow them to handle coordinations, comparatives, and so on.

PEG Parse Tree:
```
(IMP (VERB 'insert' (insert PS))
     (NP (DET 'a' (a BS))
         (NOUN 'diskette' (diskette SG)))
     (PP (PREP 'into')
         (DET (ADJ 'the' (the BS)))
         (NOUN 'drive' (drive SG)))
     (PUNC '.'))
```
F-structure in the PUG English grammar:
```
((ROOT insert) (CAT v) (SUBCAT trans) (FORM inf)
 (MOOD imp) (TENSE pres)
 (OBJ ((ROOT diskette) (CAT n) (DET indef) (NUM sg)))
 (PPADJUNCT ((ROOT drive) (CAT n) (DET def) (NUM sg))))
```
That is, an instance of a class can inherit slot definitions from only one of its immediate superclasses. The idea behind exclusive inheritance is to realize a certain identity of verbal and nominal word senses without mixing the slot definitions of both.
For example, most verbs have nominal counterparts in a natural language, such as "insert" and "insertion." Such a pair usually shares slot definitions (agent, theme, and goal) and selectional restrictions, except that "insert" inherits tense, aspect, and modality from its "verbal" superclass but not cardinality, ordinal, and definiteness (that is, the quality of being indefinite or definite) from its "nominal" superclass, although these features are inherited by "insertion." The following class definitions ``` (defclass *physical-action (is-a (value *predicate))) (defclass *mental-object (is-a (value *object))) (defclass *action (is-a (value *physical-action *mental-object))) ``` allow every instance of subclasses of *action to inherit slot definitions from either *physical-action or *mental-object. Exclusive inheritance also contributes to performance improvement of the parser since it allows us to reduce the number of possible superclasses from an exponential number to a linear number. There are three meta-classes in NL classes - *var, *set, and *fun - to represent concepts that are not included in the normal class hierarchy. The first, *var, is a variable that ranges over a set of NL classes, which are constants. Pronouns and question words in natural languages usually carry this kind of incomplete concept. The second, *set, is a set constructor that can represent a coordinated structure in natural languages. The third, *fun, is a function from NL objects to NL objects. It captures the meaning of a so-called semi-function word. For example, in some usages, the verb "take" does not really indicate any specific action until it gets an argument such as "a walk," "a rest," "a look." It is therefore well characterized as a function. Since we allow only exclusive inheritance, the NL class system certainly lacks the ability to organize classes from various viewpoints, unlike ordinary multiple inheritance. 
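The exclusive-inheritance lookup described above can be sketched in a few lines. This is an illustrative assumption, not Shalt2's actual implementation: the class table and slot names below are toy stand-ins for the real NL class system.

```python
# Sketch of exclusive inheritance: a class may list several immediate
# superclasses, but a given instance inherits slot definitions along
# only ONE superclass chain, chosen per instance.
CLASSES = {
    "*predicate":       {"is_a": [], "slots": {"tense", "mood"}},
    "*object":          {"is_a": [], "slots": {"cardinality", "def"}},
    "*physical-action": {"is_a": ["*predicate"], "slots": set()},
    "*mental-object":   {"is_a": ["*object"], "slots": set()},
    "*action":          {"is_a": ["*physical-action", "*mental-object"],
                         "slots": set()},
}

def inherited_slots(cls, chosen_parent=None):
    """Collect slots along a single superclass chain.

    `chosen_parent` selects which immediate superclass this instance
    inherits from; deeper levels follow the first listed superclass.
    """
    entry = CLASSES[cls]
    slots = set(entry["slots"])
    parents = entry["is_a"]
    if not parents:
        return slots
    parent = chosen_parent if chosen_parent in parents else parents[0]
    return slots | inherited_slots(parent)

# Verbal reading ("insert") gets tense/mood but NOT cardinality/def:
assert inherited_slots("*action", "*physical-action") == {"tense", "mood"}
# Nominal reading ("insertion") gets cardinality/def but NOT tense/mood:
assert inherited_slots("*action", "*mental-object") == {"cardinality", "def"}
```

Because each instance follows exactly one chain, "insert" and "insertion" can share the same *action class while keeping their verbal and nominal slot sets apart.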
Virtual classes are therefore introduced to compensate for this inability. For example,
```
(defvclass *option
  (def (*math-coprocessor *hard-disk *software)))
(defvclass *male-thing
  (def (equal :sex *male)))
```
shows two types of virtual class, *option and *male-thing. The *option consists of the classes *math-coprocessor, *hard-disk, and *software. The *male-thing is a class that includes instances of any class with the sex slot filled with *male. Note that the maintainability of a class hierarchy dramatically declines if we allow classes such as *option to be "actual" rather than virtual, as we will have many is-a links from anything that could be an option. The second type of virtual class helps incorporate so-called semantic features into the NL class system. Existing machine-readable dictionaries (for example, LDOCE [8]) often have entries with semantic features such as HUMAN, LIQUID, and VEHICLE that may not fit into a given ontological class hierarchy. A virtual class definition (defvclass *human (def (equal :human *true))) together with semantic restrictions such as (agent (sem *human)) makes it possible to integrate such entries into the NL class system. The NL class system currently includes a few thousand concepts extracted from the personal-computer domain. The word senses in the SHALT dictionary (about 100,000 entries) and the LDOCE (about 55,000 entries) have been gradually incorporated into the NL class system. 2.3 Mapping Rules Mapping rules define lexical and structural correspondences between syntactic and conceptual representations. A lexical mapping rule has the form
```
(emap *insert <->
  insert ((CAT v) (SUBCAT trans))
  (:role  =sem (*physical-action))
  (:agent =syn (SUBJECT))
  (:theme =syn (DOBJECT))
  (:goal  =syn (PPADJUNCT ((PREP into) (CAT n)))))
```
where a transitive verb "insert" maps to or from an instance of *insert with its exclusive subclass *physical-action.
The three slots for structural mapping between conceptual slots (:agent, :theme, and :goal) and grammatical roles (SUBJECT, DOBJECT, and PPADJUNCT) are also defined in this rule. The :agent filler, for example, should be an instance that is mapped from a syntactic SUBJECT of the verb "insert." The :goal filler must be a prepositional phrase consisting of a noun with the preposition "into." The fragments of syntactic feature structures following a lexical word or a grammatical function in a mapping rule specify the minimum structures that subsume the feature structures of candidate syntactic constituents. These structural mappings are specific to this word sense. The structural mapping rule
```
(emap *physical-action <-s->
  (:mood =syn (MOOD))
  (:time =syn (TENSE)))
```
specifies that the conceptual slots :mood and :time map to or from the grammatical roles MOOD and TENSE, respectively. Unlike the structural mappings in a lexical mapping rule, these slot mappings can be inherited by any instance of a subclass of *physical-action. The *insert instance defined above, for example, can inherit these :mood and :time mappings. Given a dependency structure (DS), mapping rules work as distributed constraints on the nodes and arcs in the DS in such a way that a conceptual representation R is an image of the DS iff R is the minimal representation satisfying all the lexical and structural mappings associated with each node and arc. On the other hand, given a conceptual representation R, mapping rules work inversely as constraints on R to define a minimal DS that can be mapped to R. Thus, assuming that lexical mapping rules are similarly provided for nouns (diskette and drive) and feature values (imp, pres, and so on), we will have the conceptual representation...

†Conceptual representation of a sentence consists of instances of classes. We use a hyphen and a number following a class name (*insert-1, *imp-1, ...) when it is necessary to show instances explicitly.
Otherwise, we identify class names and instance names. 2.4 Conceptual Paraphrasing Rules We assume that a source language sentence and its translation into a target language frequently have slightly different conceptual representations. An adjective in English might be a conjugable verb in translation. These differences result in added/missing information in the corresponding representations. The conceptual paraphrasing rules describe such equivalence and semi-equivalence among conceptual representations. These rules are sensitive to the target language, but not to the source language, since the definition of equivalence among conceptual representations depends on the cultural and pragmatic background of the language in which a translation has been expressed. An example of a paraphrasing rule is
```
(equiv (*equal (:agent (*X (:num (*V))))
               (:theme (*Y/*person (:def (*indef)) (:num (*W)))))
       (*Z/*action (:agent (*X (:num (*V)))))
       (such-that (humanization *Z *Y)
                  (sibling *V *W)))
```
where *Y/*person specifies *Y to be an instance of any subclass of *person, *equal is roughly the verb "be," humanization is a relation that holds for pairs such as (*singer, *sing) and (*swimmer, *swim), and sibling holds for two instances of the same class. Intuitively, this rule specifies an equivalence relationship between sentences such as "Tom is a good singer" and "Tom sings well," as the following bindings hold:
```
(*equal ((:mood (*dec)) (:time (*pres))
         (:agent (*tom (:num (*sg))))
         (:theme (*singer (:property (*good))
                          (:def (*indef)) (:num (*sg))))))

(*sing  ((:mood (*dec)) (:time (*pres))
         (:agent (*tom (:num (*sg))))
         (:property (*good))))
```
All the instances that have no binding in the rule must remain unchanged as the same slot fillers (e.g., *dec and *pres), while some fillers that have bindings in the rule may be missing from a counterpart instance (e.g., *indef and *W above).
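As a rough illustration of how such an equivalence rule could be applied, here is a minimal sketch of the "be a good singer" to "sing well" rewrite. The frame layout and the hand-written `HUMANIZATION` table are illustrative assumptions, not Shalt2's actual rule format.

```python
# Pairs related by the "humanization" relation: (agentive noun, action).
HUMANIZATION = {"*singer": "*sing", "*swimmer": "*swim"}

def paraphrase_equal(frame):
    """Rewrite (X be-a GOOD Y-er) into (X do-Y :property GOOD).

    Frames are plain dicts with a "head" class plus conceptual slots.
    Returns the input unchanged when the rule does not apply.
    """
    if frame.get("head") != "*equal":
        return frame
    theme = frame.get(":theme", {})
    action = HUMANIZATION.get(theme.get("head"))
    if action is None:
        return frame
    return {
        "head": action,
        ":agent": frame[":agent"],
        ":property": theme.get(":property"),  # *good -> "well" at generation
        ":mood": frame.get(":mood"),
        ":time": frame.get(":time"),
    }

# "Tom is a good singer"
tom_is_a_good_singer = {
    "head": "*equal", ":mood": "*dec", ":time": "*pres",
    ":agent": {"head": "*tom"},
    ":theme": {"head": "*singer", ":property": "*good"},
}
# -> "Tom sings well"
tom_sings_well = paraphrase_equal(tom_is_a_good_singer)
assert tom_sings_well["head"] == "*sing"
```

Note how the :theme frame disappears entirely, while its :property filler survives on the action frame, mirroring how *good maps to "good" in one rendering and "well" in the other.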
Note that *good has lexical mappings to the adjective "good" and the adverb "well." 2.5 Case Base A case base is a set of DSs with no syntactic/semantic ambiguities. A conceptual representation for a DS can be computed by a top-down algorithm that recursively tries to combine an instance mapped to the root node of a DS with an instance of each of its subtrees. The arc from the node to a subtree determines the conceptual slot name. We have already built a case base that includes about 30,000 sentences from the IBM Dictionary of Presentations. It is not practical to build such a case base completely manually. Therefore, the Shalt2 parser itself is used to develop the case base. Starting with a small, manually crafted "core" case base, each new sentence is analyzed and disambiguated by the parser to generate a DS, which is corrected or modified by the user and then added to the case base. As the size of the case base grows, the proportion of human corrections/modifications decreases, since the output of the parser becomes more and more accurate. This process is called knowledge bootstrapping and is discussed by Nagao [6] in more detail. Mapping constraints, however, are associated with only a part of the case base, because the NL class system and the mapping rules are not yet complete. 3. Parser The Shalt2 parser first generates a DS with packed structural ambiguities for a given input sentence. It actually calls a PEG parser or Tomita's LR-parsers [12] for PUG, and then calls a DS converter to map a PEG parse tree or a PUG feature structure [1] to a DS. Next, mapping rules are applied to the DS so that lexical and structural mappings are associated with each node and arc in the DS. Figure 2 shows a DS with mapping constraints for the sentence "Keep the diskette in the drive," where the verb "keep" has five word senses: *execute, *guard, *hold, *employ, and *own.
It is clear that we will end up with ten distinct conceptual representations if we evaluate all the mapping rules, and in general, combinatorial explosion could easily make the parser impractical. Viewing the mapping rules as constraints rather than procedural rules is the key to our parser, and is called delayed composition of conceptual representation. A sentence analyzer called SENA [14] disambiguates the DS by using a constraint solver JAUNT [8] and a case base. JAUNT applies grammatical constraints (for instance, modifier-modifiee links between nodes do not cross one another) and semantic constraints (such as selectional restrictions, functional control, and other NL object identity constraints detected by the context analyzer) uniformly to a DS that has ambiguities, and calculates pairwise consistency efficiently for each combination of nodes. Finally, the case base provides preferential knowledge to favor one pair of nodes over all other consistent pairs. The disambiguation process can be summarized as follows:

1. For each conflicting arc in the DS, calculate the "distance" [6] between the two nodes in the arc by using the case base.
2. Leave the arc with the minimal distance and eliminate all the other conflicting arcs. Each NL object associated with a matching node in the case base also gives a higher preference to the same class of instance over the other instances in a node.
3. Propagate the deletion of arcs to other nodes and arcs in the DS. Eliminate nodes and arcs that are no longer valid in the DS.
4. Apply the above steps until there are no conflicting arcs in the DS.

The resulting DS has no structural ambiguity. Remaining lexical ambiguities are similarly resolved, because we can also determine which pair of NL objects connected with an arc has the minimal distance in the case base. Our case base for computer manuals would support the "diskette -PPADJUNCT- drive" arc and the *hold-1 instance with "diskette" as its DOBJECT.
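The core of steps 1 and 2 above can be sketched as follows. The candidate arcs and the toy distance table are illustrative assumptions standing in for the real case-base distance computation:

```python
# Candidate arcs for "keep the diskette in the drive": the PP "in the
# drive" may attach to the verb or to "diskette" (two conflicting arcs).
arcs = [
    ("keep",     "PPADJUNCT", "drive"),
    ("diskette", "PPADJUNCT", "drive"),  # conflicts with the arc above
]

# Toy case-base distances: lower = better supported by stored cases.
# (A computer-manual case base supports the diskette-drive pairing.)
CASE_DISTANCE = {
    ("keep", "drive"):     5.0,
    ("diskette", "drive"): 1.0,
}

def disambiguate(conflicting):
    """Keep only the conflicting arc with the minimal case-base distance
    (steps 1-2); the surviving arc's deletion of rivals would then be
    propagated through the DS (steps 3-4)."""
    return min(conflicting, key=lambda arc: CASE_DISTANCE[(arc[0], arc[2])])

best = disambiguate(arcs)
assert best == ("diskette", "PPADJUNCT", "drive")
```

Steps 3 and 4 would then delete the losing arc, remove any nodes left dangling, and repeat until no conflicts remain.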
Finally, a context analyzer is called to resolve anaphora, the identity of definite nouns, and implicit identity between NL objects. It stores the DS in the working space, where references to preceding instances are represented by links between instances in the DSs. These inter-sentential links are used to determine the scopes of adverbs such as "also" and "only." For example, if the phrase "role of an operator" appears in a text, the word "operator" could be a person who operates a machine or a logical operator for computation, but there is not enough information to resolve this ambiguity at that point. In such cases, creating referential links in a forest of DSs can provide evidence for choosing between the two meanings. The scope of an adverb such as "also" is determined by identifying repeated NL objects and newly introduced NL objects, where the latter are more likely to fall within the scope of the adverb. The context analyzer uses a similar method to resolve lexical ambiguities that the sentence analyzer left unresolved when the case base failed to provide enough information.

4. Concept Mapper

Given a conceptual representation, which is the output of the parser, and a target language, the concept mapper tries to discover another conceptual representation that has a well-defined mapping to a DS while keeping the semantic content as intact as possible. This process is called conceptual transfer. If the given conceptual representation already has a well-defined mapping to a DS, the concept mapper does nothing, and Shalt2 works like an interlingual MT system. It is important that conceptual transfer be tied to the mapping to a DS, because there are generally many conceptual representations with similar semantic content. The existence of a well-defined mapping not only guarantees that the generator can produce a sentence in the target language, but also effectively eliminates unsuccessful paraphrasing.
In addition to the paraphrasing rules mentioned earlier, the concept mapper uses the following general rules for conceptual transfer.† These rules are composed with the paraphrasing rules to make a complex mapping.

- **Projection:** Map an NL object with a filled slot \( s \) to an instance of the same class with the slot \( s \) unfilled. Projection corresponds to deletion of the slot \( s \).
- **Generalization:** Map an NL object of a class \( X \) to an instance of one of the superclasses of \( X \).
- **Specialization:** Map an NL object of a class \( X \) to an instance of one of the subclasses of \( X \).

As an example, a projection rule is frequently used when we translate English nouns into Japanese ones, as in the following example:

```plaintext
diskette       (diskette (:num (*sg)))
diskettes      (diskette (:num (*pl)))
a diskette     (diskette (:num (*sg)))
the diskettes  (diskette (:num (*pl)))
ディスケット    (diskette)
```

Here, the four English noun phrases above are usually translated by the same Japanese noun phrase‡ (the fifth one), which does not carry any information on \( \text{num} \) or \( \text{def} \). We provide a paraphrasing rule for translation in the opposite direction so that an instance of the object can obtain appropriate \( \text{num} \) and \( \text{def} \) fillers. The parser, however, is responsible for determining these fillers in most cases. In general, the designer of semi-equivalent rules for translation in one direction has to provide a way of inferring the missing information for translation in the opposite direction.

Generalization and specialization rules are complementary, and can be paired to become equivalent rules when a specialization rule for any instance \( x \) of a class \( z \) is unambiguous. That is, without losing any fillers, one can always choose an instance of a subclass \( y \) to which \( x \) can be uniquely mapped. A generalization from each \( y \) to \( z \) provides the opposite mapping.

†These are semi-equivalent rules.
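The projection and generalization rules above can be sketched on dict-based frames as follows. This is an illustrative Python sketch; the class hierarchy and the specific fillers (including the :def filler) are invented for the example.

```python
# Illustrative sketch of two conceptual-transfer rules on frames
# represented as plain dicts. The hierarchy below is an assumption.

SUPERCLASS = {"diskette": "storage-medium"}   # assumed class hierarchy

def projection(frame, slot):
    """Delete slot s from an instance, e.g. drop :num when mapping an
    English noun phrase to its Japanese counterpart."""
    return {k: v for k, v in frame.items() if k != slot}

def generalization(frame):
    """Map an instance of class X to an instance of a superclass of X."""
    out = dict(frame)
    out["class"] = SUPERCLASS[out["class"]]
    return out

# "the diskettes": fillers shown here are invented for illustration.
en = {"class": "diskette", ":num": "*pl", ":def": "the"}
ja = projection(projection(en, ":num"), ":def")
print(ja)   # the Japanese side carries neither num nor def
```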
Equivalent rules have higher priority when the rules are applied.

‡One exception is that deictic noun phrases are translated using the Japanese counterpart of "the."

5. Grammar-Based Sentence Generator

Recent investigation of unification grammars and their bi-directionality [15, 9, 10] has enabled us to design and implement a grammar-based generator. Our generator uses a PUG grammar, which is also used by the parser, to traverse or decompose a structure obtained from a DS in order to find a sequence of grammar rule applications that eventually lead to lexical rules for generating a sentence. The generation algorithm is based primarily on Wedekind's algorithm, but is modified for PUG. The current implementation of our generator lacks subtle control of word ordering, honorific expressions, and style preference. We are developing a set of discourse parameters that associate preferences with the grammar rules to be tried, so that specific expressions are favored by the parameter settings.

Acknowledgment

Yutaka Tsutsumi built an initial version of the conceptual definitions. Katashi Nagao originally designed and implemented the disambiguation algorithm of the sentence analyzer. His efforts to build a large-scale case base and to improve the disambiguation algorithm have been continued by the Shalt2 group. Hiroshi Maruyama enhanced his constraint solver for our project, which led us to a constraint propagation method with delayed composition of conceptual representations. Michael McDonald read through an early draft of this paper and gave us many suggestions on technical writing. We thank them all.

References
EDUCATIONAL PEARL

"Little language" project modules

JOHN CLEMENTS
California Polytechnic State University, San Luis Obispo, CA, USA
(e-mail: clements@brinckerhoff.org)

KATHI FISLER
Worcester Polytechnic Institute
(e-mail: kfisler@cs.wpi.edu)

Abstract

Many computer science departments are debating the role of programming languages in the curriculum. These discussions often question the relevance and appeal of programming-languages content for today's students. In our experience, domain-specific "little languages" projects provide a compelling illustration of the importance of programming-language concepts. This paper describes projects that prototype mainstream applications such as PowerPoint, TurboTax, and animation scripting. We have used these exercises as modules in non-programming-languages courses, including courses for first-year students. Such modules both encourage students to study linguistic topics in more depth and provide linguistic perspective to students who might not otherwise be exposed to the area.

1 Studying the design of (domain-specific) languages

Students (and some faculty!) sometimes wonder why the standard computing curriculum includes a course on programming languages. In their eyes, most employers have settled on a small, slowly changing pool of general-purpose languages that share a common programming idiom. Languages that do not fit this model, even very popular ones, are often regarded as "scripting" languages which students can learn on their own as needed. From this perspective, studying the building blocks and design principles that underlie programming languages seems unnecessary to students once they have command of one language. In many computer science departments (at least in America), programming languages is losing its position as core content (Fisher & Krintz 2008).
Programming-languages researchers cite many benefits of linguistic training: flexibility as a programmer, ability to design effective abstractions, and appreciation for different computational models. In our experience, these points are too abstract to resonate with many mainstream students. To make these points more concrete, we have developed a series of software construction projects within which students design and implement domain-specific languages (Bentley 1986; Deursen et al. 2000). Each project has students build a prototype of a mainstream software application such as PowerPoint, TurboTax, or an animations-scripting platform such as Flash. Each of these applications processes complex and context-dependent data in well-specified ways: PowerPoint displays user-specified slide decks; TurboTax prompts end-users to fill in forms and fields required by local tax code; animations platforms provide constructs for coordinating interactions between graphical objects. Robust implementations of these packages decouple the data specification from the engine that processes it. In other words, software engineering concerns motivate the construction of domain-specific languages for these projects. Grounding language design in software engineering enables using these projects as modules in courses beyond programming languages. We have used these projects successfully in two such settings: a software construction course for students in their third college year, and perhaps more surprisingly, an honors-level programming course for students in their first college year. In both settings, students confront core linguistic material: they must define syntax and semantics, identify program errors to flag, and provide an implementation via an interpreter, compiler, or embedding into a host language. 
Our modules therefore support many uses, from providing basic linguistic content to students who will not take a languages course, to advertising for languages courses, to providing real-world applications within languages courses. Given that domain-specific languages are often declarative in nature (Sabry 1999), these projects also provide a framework for exposing students to functional and declarative programming. The paper presents these projects using tax-preparation software as a running example. Section 2 describes the tax-form problem. Section 3 discusses two implementation techniques and their pedagogic implications. Section 4 discusses advanced tax-form features and their linguistic consequences. Section 5 describes other domain-specific language examples that we have used effectively in classes. Section 6 describes our experience using these exercises as modules outside of programming-languages courses. Section 7 offers concluding remarks. 2 Tax forms: a running example The task is to design and implement a tax assistant that helps a taxpayer fill out tax forms. For this example, we use the United States of America’s federal tax forms (which residents complete annually); fragments of these forms appear in Figure 1. The tax assistant program should query the taxpayer for user-supplied fields (such as wages earned), compute the value of fields that are derived from other fields (such as total income), and produce the amount of tax. Some fields require completion of auxiliary forms (called schedules in the U.S. tax code) whose fields are referenced in computations on other forms; the arrows in Figure 1 show a reference to schedule C from both form 1040 (the main form) and another schedule (SE). The tax assistant should prompt for the completion of each form or schedule at most once. As tax laws can change from year to year, the program should be designed to adapt easily to different tax forms and to variations of the same form. 
Several extensions to this basic problem offer richer features to tax-form authors and taxpayers using the software:

1. Print the completed tax forms when the user is finished.
2. Include error checking to catch form-specification errors.
3. Capture and check form invariants. For example, taxpayers who earn more than $1500 in dividends must fill out schedule B.
4. Allow users to prepare their taxes over multiple sessions, each of which resumes where the previous session ended.

Fig. 1. Sample tax forms and the flow of information between them.

We first discuss implementations of the basic problem, then address the advanced features.

3 Implementing a basic tax assistant

Different implementation approaches raise different pedagogic issues. This section discusses two approaches – interpreters and language embeddings via macros – in detail. Section 7 contrasts these approaches pedagogically in the context of our projects.

3.1 An interpreter-based approach

Writing an interpreter follows the style of many programming-languages curricula (Kamin 1989; Friedman et al. 2001; Krishnamurthi 2007). An interpreter-based implementation typically requires three artifacts: a concrete syntax in which a tax expert would describe a tax form, an abstract syntax representation (i.e., a data structure) that the tax assistant processes, and an interpreter which executes programs written in the abstract syntax. For simplicity, we assume that the concrete syntax is textual. Figure 2 shows sample concrete syntax for a portion of the 1040
```scheme
(form form-1040
  (export total-income)
  (field wages (prompt "Wages earned"))
  (field big-dividends (prompt-checkbox "are dividends > $1500?"))
  (field dividends
    (if big-dividends
        (form-ref schedule-b total-dividends)
        (prompt "enter total dividends")))
  (field total-income (calculated (+ wages dividends))))
```

```scheme
(define-struct form (name exports lines))
(define-struct line (label instructions))

(make-form "form-1040"
  (list "total-income")
  (list (make-line "wages" (make-prompt "Wages earned"))
        (make-line "dividends"
          (make-if (make-prompt "are dividends > $1500?")
                   (make-form-ref "schedule B" "total dividends")
                   (make-prompt "enter total dividends")))
        (make-line "total-income"
          (make-calculated (make-plus "wages" "dividends")))))
```

Fig. 2. Possible concrete (above) and abstract (below) syntaxes for a simple tax form.

tax form, including named fields containing both prompted and computed values, along with a corresponding abstract syntax for the same fragment. The abstract syntax, as with all code in the paper, is presented in PLT Scheme (Flatt & PLT 2009); each make-operator is a typed record constructor.

Although it might seem obvious that students should design their concrete syntax before a corresponding abstract syntax, designing the abstract syntax first nearly always makes more sense. The abstract syntax is simply a careful definition of the set of possible inputs to the evaluator. When prototyping a known software package, students have many examples (such as actual tax forms) from which to identify an appropriate data structure for the abstract syntax. The concrete syntax, by contrast, is designed to simplify the task of the specifier by providing syntactic short-cuts; its design is far more open ended and thus harder as a starting point for novices.

**Designing Abstract Syntax:** Students encounter certain common problems while designing abstract syntax, including:

1. The tax-form specification is separate from the data that the tax assistant gathers from an end-user.
Many students include dummy placeholders or default values in the abstract syntax for the data to be requested on each line (e.g., (make-prompt "enter wages" false) in the abstract syntax of Figure 2). The false serves as a placeholder for the actual wages value that a user will enter when running the tax program. Students who do this generally cite principles about keeping related data together, without realizing the competing principle of separating instructions from data. This difficulty is compounded by the existence of the tax form's physical artifact, wherein tax-form instructions appear next to blank spaces for a user to fill in. This is a nontrivial step for some students as they learn to think about languages.

2. Meta-language identifiers are not inherently visible in the object language. Consider this incorrect program fragment:

```scheme
(define line1 (make-prompt "Enter your wages"))
(define line2 (make-prompt "Enter your dividends"))
(define line3 (make-calculated (make-plus line1 line2)))
```

This code's author imagines that line1 refers at run-time to a number representing the end-user's wage, when in fact it refers at compile-time to a piece of a tax-form abstract syntax.\(^1\)

3. Languages must be more general than the artifacts they capture. In the section of tax forms capturing dividend amounts, for example, the printed form provides a fixed number of lines. The tax assistant, however, should use an arbitrary-length list rather than expect the number of entries printed on the form.

4. Arbitrary-size data structures require more interesting naming schemes. Each entry in a list of dividend amounts, for example, involves several pieces of information; it is effectively a table with a fixed number of columns and a variable number of rows. Referencing data within tables introduces the need for addressing schemes, since students cannot introduce fixed names in instances of the abstract syntax.
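The instruction/data separation in issue 1 can be made concrete with a small sketch. This is an illustrative Python analogue of the paper's Scheme (all class and function names here are invented): the abstract syntax carries no answer slots, and a separate environment collects values at run time.

```python
# Illustrative sketch: tax-form abstract syntax kept strictly separate
# from the data entered at run time.

class Prompt:
    def __init__(self, text):
        self.text = text              # instructions only -- no answer slot

class Calculated:
    def __init__(self, fn, deps):
        self.fn = fn                  # combines previously bound values
        self.deps = deps              # labels of the lines it references

class Line:
    def __init__(self, label, instructions):
        self.label = label
        self.instructions = instructions

def run_form(lines, ask):
    """Process each line, storing answers in a separate environment."""
    env = {}
    for line in lines:
        ins = line.instructions
        if isinstance(ins, Prompt):
            env[line.label] = ask(ins.text)
        else:
            env[line.label] = ins.fn(*[env[d] for d in ins.deps])
    return env

form_1040 = [
    Line("wages", Prompt("Wages earned")),
    Line("dividends", Prompt("enter total dividends")),
    Line("total-income", Calculated(lambda w, d: w + d, ["wages", "dividends"])),
]

# Scripted answers stand in for interactive prompting.
answers = {"Wages earned": 50000, "enter total dividends": 1200}
env = run_form(form_1040, lambda text: answers[text])
print(env["total-income"])   # 51200
```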
Starting with abstract syntax helps students confront these issues directly. Asking students to define the data structures for their abstract syntax, express a representative subset of an actual tax form in those structures, and load that expression in the meta-language reveals many of these problems to the students. Ironically, then, the abstract syntax turns out to be more concrete!

**Defining an Interpreter:** Conceptually, a tax-form interpreter must process each line of the form by either prompting the user for data or computing a value based on previously-entered data. Figure 3 shows a possible fragment of the tax-form interpreter corresponding to the abstract syntax from Figure 2. Each line in the form, whether prompted or calculated, constitutes a binding which other lines may reference. Implementing bindings correctly tends to be the biggest conceptual hurdle for students: they struggle to separate bindings in the tax language from bindings in the interpreter, and to devise a run-time data structure to manage the bindings. Their attempts can generally be grouped into three categories:

1. Define a meta-language variable for each entered datum and reference those variables as values in other pieces of abstract syntax (as shown in item 2 above): inability to rule out this option reflects fundamental misunderstandings about the idea that programs are data and the differences between object and meta-language.\(^1\)

2. Extend the abstract syntax with a placeholder for the answer, and mutate the abstract syntax as line values are determined: this approach admits fewer data structure definitions, but fails to separate abstract syntax from run-time data and requires repeated searches over the abstract syntax to look up previously-determined values.

3. Maintain a separate data structure linking forms and line numbers to values: this is the approach that most successful students eventually settle on.

\(^1\) Curiously, this can be made to work as part of a macro solution, but the use of the host-language binding mechanism destroys the ability to, for example, compute the number of lines in a form or print out the completed form.

```scheme
;; process-line : line env -> env
;; requests or computes data for one tax-form line, storing it for future reference
(define (process-line a-line env)
  (let ([instructions (line-instructions a-line)])
    (add-to-env env
                (line-label a-line)
                (cond
                  [(prompt? instructions)
                   (begin (printf (prompt-text instructions))
                          (read-line))]
                  [(calculated? instructions)
                   (evaluate-computed instructions)]))))
```

Fig. 3. Fragment of tax-form interpreter corresponding to the line structure.

**Defining Concrete Syntax:** Making the tax-form language available to tax-form specifiers requires a specification of concrete syntax and a mechanism to convert the concrete syntax into the abstract syntax. The specification and conversion method are closely linked, in that adopting constraints on the form of the concrete syntax enables different conversion methods. At one extreme, students could choose an arbitrary concrete syntax, but would need to write a parser to produce the abstract syntax. At the other extreme, students could adopt the meta-language's concrete syntax and avoid writing any conversion method: the bottom half of Figure 2, for example, is concrete syntax in Scheme that corresponds to the abstract syntax. The first approach requires more time than a project module might allow (especially if students aren't already familiar with parsing). The second is neither compelling nor motivating: students don't see data structures as languages. Hygienic macros can provide a gentle solution between these extremes. The macro system's pattern language constrains the shape of the concrete syntax in exchange for producing abstract syntax via rewrite rules. There are many hygienic macro systems for functional languages (Kohlbecker et al. 1986; Dybvig et al.
1988; Clinger & Rees 1991; Mauny & de Rauglaudre 1992; Cardelli et al. 1994; Herman & Wand 2008), some of which are syntactically quite complex. We find that simple Scheme macros in the **syntax-rules** system pose little difficulty for students, especially given the abstract syntax to guide development of the rules. Indeed, our experience is that students find macros amusing and even fascinating; some try to create more elaborate compilations that require richer macro systems (such as **syntax-case**). Even within **syntax-rules**, we augment the macros with checks for syntax errors to inspire students to use macros in more sophisticated ways.

```scheme
(define-syntax form
  (syntax-rules (export)
    [(_ name (export var ...) entry ...)
     (define name (form-internals (export var ...) entry ...))]))

(define-syntax form-internals
  (syntax-rules (export field)
    [(_ (export var ...) (field name content) entry ...)
     (let ([name content])
       (form-internals (export var ...) entry ...))]
    [(_ (export var ...))
     (list (list 'var var) ...)]))
```

Fig. 4. Form expansion macros.

3.2 Language embedding

Tax-form evaluators can be implemented without interpreters. For advanced students, we instead espouse reuse: build the tax form as an embedded language atop an existing one (Hudak 1996; Clements et al. 2001, 2004). Rather than specifying an abstract syntax and an interpreter for it, students define a concrete syntax that is an extension of a host language, and use hygienic macros to map this syntax onto the native abstract syntax of the existing evaluator.\(^2\) They give up control over the surface syntax of the language, and in return they get to reuse the existing evaluator. In a design such as this, there is no need to specify abstract expression forms (e.g. \texttt{make-plus}) or to define their meanings; tax-form specifiers can simply reuse the + of the host language. This approach concretely demonstrates the advantages of reuse, not just in an implementation but also in a design sense.
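The flavor of an embedding (as opposed to an interpreter) can be illustrated outside Scheme as well. The following Python sketch is an assumption-laden analogue, not the paper's approach: calculated fields reuse the host language's own `+` directly instead of interpreting abstract `make-plus` nodes, and the `form` helper is invented for the example.

```python
# Illustrative embedding sketch: a tax form written directly in the host
# language (Python), reusing host arithmetic instead of an interpreter.

def form(**fields):
    """Evaluate field definitions in order; calculated fields are thunks
    that may reference earlier fields via the env dict."""
    env = {}
    for name, spec in fields.items():
        env[name] = spec(env) if callable(spec) else spec
    return env

form_1040 = form(
    wages=50000,
    dividends=1200,
    # Host-language + does the work; no make-plus interpreter needed.
    total_income=lambda env: env["wages"] + env["dividends"],
)
print(form_1040["total_income"])   # 51200
```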
Reuse is particularly important in prototyping, where the goal is to produce a working program in a limited time. Figure 4 shows a simplified set of macros for expanding forms. For brevity, these examples omit error-checking clauses. The tax-form expansion is implemented as a simple two-level macro. Each tax form expands into a single (\texttt{define} . . .) statement containing a cascaded (\texttt{let} . . .). Each form binds its name to an association list containing pairs of exported field names and their associated values.\(^3\) The macros do not show the (\texttt{calculated} . . .) terms from the language in Figure 2, but handling them is literally trivial: (\texttt{calculated} \texttt{exp}) expands into \texttt{exp}. The syntax of these terms is simply that of Scheme itself. This is the most vivid illustration of the work saved by working with macros; rather than implementing a heavyweight interpreter for calculations in dozens of lines, a two-line macro suffices.

\(^2\) There are competing definitions of the term "domain-specific embedded language". We use it to refer to a simple extension of the host-language semantics using macros.

\(^3\) Using host-language variables for form values is appropriate in the macro context, whereas it constituted an error in the interpreter context. This reinforces the distinction between meta and object language, which is blurred in the macro context.

Students confront several issues in writing the macro-based embedding:

- The seeming lack of work involved: macros can confound students at first, because students expect programs as complex as a tax assistant to require a substantial amount of code. We frequently see students steer themselves away from elegant solutions in macros that seem too easy. This experience is useful for helping students think differently about languages; many gain new appreciation for programs as data through writing macros for nontrivial tasks.
- Tradeoffs between exposing implementation details and complicating the macros: as an example, consider tables (item 4 on page 7). Scheme has a well-developed set of operations on lists. The student could represent a table as a list, and simply expose this list to the author of the form. Exposing the Scheme list (and its accompanying library functions) in the domain-specific language makes expansion simple, but requires more host-language knowledge on the part of those programming in the domain-specific language. It would also be difficult to prevent errors such as "car of null" in such an implementation. A student might alternatively choose to build a set of custom table-access functions for each table that a form contains. This expansion would require more work, but might discover certain form errors more quickly, and would probably lead to more readable forms. For instance, this style might allow named references to tables: (field ... (calculated (sum-up (table-ref foreign-taxes tax-paid)))). In practice, such tradeoffs expose students to subtle challenges of language design.

- Restricting the language: clever students realize that their tax-form language can contain arbitrary source code, thus allowing tax-form specifiers to embed hostile code with no relationship to a tax-form computation. Most students do not address this issue, but the questions it raises are educational for those who discover it.

Graham's example of embedding a database query language in Lisp through macros (Graham 1994) provides another interesting example of this approach. Of course, macros are not the only way to build a domain-specific language by linking the meaning of the new language to the meaning of an existing one. Two competing approaches – one offering less control over the details of the language, one offering more – are presented by Haskell and Ziggurat (Fisher & Shivers 2008).
Using a "combinator library" approach in Haskell (Hudak 1996) leverages laziness, type classes, and monads to make it possible to dramatically extend the language with new values, operations, and pattern-matching forms (Rhiger 2009) without adding a macro-like transformation system. At the other extreme, systems such as Ziggurat promise the ability to equip each language extension with its own semantics, analysis, and other tools. In Ziggurat, a language consists of a tower of languages, where each additional layer expands and compiles into the next one down, and static analyses may be inherited and extended. Students would have great flexibility in designing language extensions, and the corresponding responsibility for implementing specialized extensions to the existing analysis tools.

4 Advanced features and concepts

Whenever we assign students large projects (such as a language implementation), we espouse iterative refinement, building towards the final system. We first implement a core of the system, then augment the system's features. For the tax-forms project, each of the extensions for type checking, assertion checking, and multiple sessions requires modifications with interesting linguistic content.

4.1 Type and error checking

A basic tax assistant probably performs little to no error checking, neither for program-specification errors nor for run-time errors. At run time, a taxpayer could enter non-numeric data in response to a prompt for numeric data. At specification time, tax-form authors could insert unbound references or circular data dependencies. Tax forms are also subject to type errors, particularly with respect to units of measure (so-called unit checking (Kennedy 1997; Allen et al. 2004; Antoniu et al. 2004)). Consider the following example. A taxpayer can claim a reduction in taxable income for himself, his spouse, and all of his dependents.
The total deduction for dependents in 2008 is calculated by multiplying the number of dependents by US$3500. Multiplying a simple number by a dollar amount is fine, and the resulting unit is the dollar. If the developer of the tax form were to mistakenly add these numbers rather than multiplying them, the resulting total would be a nonsensical combination of dollars and people. Extending the language to support error checking is a natural next step once the core tax system is working. Unbound-reference errors can be checked using macros, or as a separate phase in the interpreter. We find the former particularly instructive, as it shows macros doing work beyond rewriting. To handle run-time data-entry errors, we ask students to extend their syntax with type declarations for prompt expressions: (prompt "Enter wages" number) Their implementations then check that entered data conforms to those types, re-prompting the taxpayer if necessary. For unit-checking, we show students how to extend their syntax with type labelers and their implementations with variants of standard operators that are aware of the unit types. This is much simpler than an approach such as Kennedy’s (1997), which addresses unit-checking problems via the addition of relational parametricity to an existing type system. Tax forms, for example, need numeric values representing both dollar amounts and scalars. 
We therefore introduce the following pair of macros per unit type:

```scheme
(units <type> exp)                    ⇒  (make-<type> exp)
(prompt query-string <numeric-type>)  ⇒  (units <numeric-type> (num-prompt query-string))
```

We then expand tax-form programs to label constants appropriately, as in

```scheme
(field deductions
  (calculated (* num-dependents (units dollars 3500))))
```

then rewrite multiplication in terms of these units:

```scheme
(define (tax-* a b)
  (match (list a b)
    [(list (struct dollarv (d)) (struct scalarv (s)))  (make-dollarv (* d s))]
    [(list (struct scalarv (s)) (struct dollarv (d)))  (make-dollarv (* d s))]
    [(list (struct scalarv (s1)) (struct scalarv (s2))) (make-scalarv (* s1 s2))]))
```

Topics such as unit checking are most appropriate for senior students. With younger students, we cover only unbound-reference checking. Covering some form of error checking with these students has proven extremely useful, however. Once we reduce language implementation to writing and processing data structures for programs, many students begin to ask what distinguishes a language from a library (recalling Gosper’s quote that “A data structure is nothing more than a stupid programming language.” (Hewitt et al. 1973)). Ideally, languages include both static and dynamic constraints on well-formed programs. This idea, that languages embody principles of use as well as computation, only starts to take root when students implement language-specific error handling.

### 4.2 Form invariants

Invariant-checking, like unit checking, requires language extensions.
Students who wish to add assertion checking may add a tax-form construct similar to the following (building off the identifiers in Figure 2):

```scheme
(assertion
  (if big-dividends
      (> dividends (units dollars 1500))
      (<= dividends (units dollars 1500))))
```

Assertion checking poses a nice contrast to unbound-identifier checking, since it typically requires dynamic, rather than static, checks.

### 4.3 Multiple sessions, revisions, and out-of-order evaluation

A simple evaluator would force users to work through a tax form in order in a single session. More sophisticated tools could allow users to edit previous answers, save and resume previous sessions, or complete sections of the form in an order of their choosing. These features raise advanced linguistic topics, such as dataflow programming (to automatically propagate edits), continuations (to save and resume session state), or laziness (to compute form data as needed in other forms). While we sometimes use these features as motivators for these topics in full-fledged programming languages courses, we do not introduce them when using these projects as modules in other courses.

### 5 Other domain-specific language problems

In addition to tax forms, we have used several other domain-specific languages in a similar manner. Each highlights different language-design issues.

**Slideshows (a.k.a. PowerPoint):** Slideshow presentations provide a compelling language design example for two main reasons: first, students are extraordinarily familiar with PowerPoint; second, PowerPoint’s limited abstraction mechanisms and control flow operators motivate the benefits of building products around domain-specific languages. Concretely, our slideshow language covers the following linguistic topics:

- **Conditionals:** we introduce a feature that bases slide sequencing on the time that has elapsed since the start of the talk.
- **Variables:** we compute some slide data dynamically (such as example numbering – this is useful once conditionals allow us to skip slides). We contrast a narrow construct for dynamic example numbering with a more general implementation of program variables. The second point helps students understand that language design is about tradeoffs: what we choose to exclude is often as important as what we choose to include. **An Automated Testing Service:** Computerized testing systems administer exams in which different questions may be posed to different people based on their performance so far. During exams, people may also receive feedback about their performance on particular topics. Specifications of alternative questions, question sequencing, and when to provide feedback constitute a domain-specific language. Our version of this problem features multiple question styles (multiple choice and free-response) and optional hints, as well as question sequencing and feedback. This example highlights the following linguistic concepts: - **Conditionals:** these arise from sequencing questions and their alternatives. - **Structuring program data for querying:** giving feedback to users requires tabulating a user’s performance on different categories of questions. Students contrast tabulating questions on a per-section basis with tagging data and allowing queries over those tags (which engenders another language-design question). - **Separation of model and view:** should layout information for multiple-choice questions be built into the abstract-syntax data structure or customized externally? - **Web-based control flow:** if the interpreter uses the web to display questions and process answers, does the system work properly if the test-taker uses the back-button during the exam? This is fundamentally a question of environments versus stores. **Animations Scripting:** Students create a language for scripting basic animations over interacting objects. 
Our objects are basic shapes (circles and rectangles) that can move across the screen, change size, jump to new locations, collide with one another, and appear or disappear during the animation. This example highlights the following linguistic concepts:

- **Conditionals and defining predicates:** animated objects change behavior when they collide. Do we implement general predicates to capture collisions or special-purpose constructs for pre-defined types of collisions?
- **Variables:** animated objects may have attributes defined in terms of variables that change during the animation (such as a circle and square with dimensions computed from a shared program variable).
- **Parallel versus sequential operations:** some animations are easier to express through independent operations executing in parallel rather than purely sequential execution.

**State Machine Simulation:** A simple state-machine simulator consumes the description of a state machine and a list of input symbols. The simulator produces either a trace of corresponding outputs or a simple flag indicating whether the input list is recognized by the state machine. This example highlights the following linguistic concepts:

- **Interpretation versus compilation:** this example is small enough that students can implement two versions of it in a short time frame. One version converts the state-machine syntax to a data structure of states and transitions which an interpreter must then simulate against inputs. Another version, due to Krishnamurthi, compiles state machines into mutually-recursive functions, each of which consumes the remaining inputs (Krishnamurthi 2006). While this may seem less like a domain-specific application than the others, we have found it resonates well with engineering students.
- **Error checking:** typos can abound in state-machine descriptions. A useful language would check for errors such as unbound state names in transitions.
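The interpretation-versus-compilation contrast, together with the unbound-state check, can be sketched in a few lines. The following Python fragment is our own illustration (the courses use Scheme, and the names `simulate`, `compile_fsm`, and `check_states` are ours, not from the assignments); the example machine recognizes strings with an even number of `a`s.

```python
# Illustrative sketch in Python (not the courses' Scheme code); all names ours.
# A state machine as data: state -> {symbol -> next state}.
FSM = {"even": {"a": "odd"}, "odd": {"a": "even"}}
ACCEPT = {"even"}  # recognizes strings with an even number of 'a's

def check_states(fsm):
    """Static error check: every transition target must be a defined state."""
    for state, edges in fsm.items():
        for sym, target in edges.items():
            if target not in fsm:
                raise ValueError(f"unbound state {target!r} in {state!r}")

def simulate(fsm, start, accept, inputs):
    """Interpreter style: repeatedly look transitions up in the data structure."""
    state = start
    for sym in inputs:
        if sym not in fsm[state]:
            return False
        state = fsm[state][sym]
    return state in accept

def compile_fsm():
    """Compiler style (after Krishnamurthi): one mutually recursive function
    per state, each consuming the remaining inputs."""
    def even(inputs):
        return True if not inputs else (odd(inputs[1:]) if inputs[0] == "a" else False)
    def odd(inputs):
        return False if not inputs else (even(inputs[1:]) if inputs[0] == "a" else False)
    return even

check_states(FSM)
assert simulate(FSM, "even", ACCEPT, "aa") is True
assert simulate(FSM, "even", ACCEPT, "a") is False
recognize = compile_fsm()
assert recognize("aa") is True and recognize("a") is False
```

In the compiled version the transition table disappears entirely: each state becomes a function, and "taking a transition" is just a tail call, which is the point of the contrast.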
This raises questions of what languages should do for programmers, thus distinguishing languages from mere data structures.

### 6 Experience

Both authors have used domain-specific language design exercises in the classroom, but in different kinds of courses with different levels of students.

**In an Advanced Freshman-Level Course:** The second author has used all of these examples in an accelerated introductory Computer Science course at the college level. Her course, aimed at first-semester college students with prior programming experience (usually a year of Java in high school), spends roughly 10 lecture hours introducing functional programming (including lists, trees, and higher-order functions), then another 10 hours on domain-specific language design and implementation. The lectures work through the slide show example to introduce ASTs as a data structure, writing interpreters, and macros.⁴

⁴ Notes, pacing, and exercise descriptions are at http://www.cs.wpi.edu/~cs1102/a08/.

Students do assignments based on two of the remaining examples. One example (often the automated tester) becomes two assignments: a homework in which pairs of students design abstract syntax for the full example, and a lab in which students write an interpreter for a core of the example. Another example (the tax form or the animations language) serves as the course project: students individually design the AST, implement an interpreter, and optionally provide a clean concrete syntax via macros.

Analysis of student projects and grades across four offerings of this course shows that almost all of the students understood the concepts well enough to design a basic AST and interpreter. In fact, instructor experience suggests that even those who receive poor grades attribute their failures to late starts and poor planning rather than to difficulty with the material. Additionally, some students go beyond the stated assignment, embellishing their languages in interesting and original ways.
In-class reviews of AST designs contribute to our success with these assignments. For each AST-design assignment, we conduct a class-wide critique of three or four different styles of designs. Students are very engaged in these design reviews. Typical issues raised by students include whether to model certain constructs in the meta-language or the object language, whether multiple constructs could be abstracted into common core constructs, and whether a construct design is sufficiently flexible to accommodate reasonable extensions to the language. Analysis of grade data shows that a significant fraction of students had poor designs at the AST phase, but got C-grade or better implementations working by the final deadline. We find many students understand the concept of an interpreter more readily than the concept of capturing programs as data. These students benefit from seeing multiple examples of plausible ASTs during the design reviews. We encounter relatively few students who are able to write ASTs and not able to produce simple interpreters. That we are able to achieve these results with students in the first semester of college speaks to the power of domain-specific languages as a project topic. We also attribute success with this audience to our choice of functional programming curriculum and meta-language. This course uses *How to Design Programs* (Felleisen et al. 2001), which teaches students to design programs by first defining their data then deriving the program structure from the data. Using this approach, abstract syntax leads directly to the structure of the interpreter. PLT’s pedagogic Scheme programming environment (Findler & PLT 2009) provides a hygienic, source-correlating macro system that supports both syntax-rules and syntax-case forms. PLT Scheme also supports images as first-class values (Felleisen et al. 2009), which simplifies the animations-language exercise. 
**In a Junior-Level Software Construction Course:** The first author (jointly with Matthias Felleisen) used the tax-form example in a junior-level software construction class at Rice University in 1998. Students had a choice of implementation style, host/meta-language, and final feature set. The scope and quality of the students’ projects varied widely. Some students were content to produce a bare-bones interpreter that was little more than a spreadsheet, while others undertook pretty-printers and sophisticated input mechanisms. To assess whether the students’ languages could accommodate changes in the tax forms, teams exchanged projects late in the semester. Each team was charged with updating another team’s project to accommodate the changes occasioned by the annual update to the tax forms. Each receiving team assessed the ease with which they could adapt their given code base; this feedback had substantial weight in the authoring team’s project grade.

### 7 Perspective and conclusion

Domain-specific languages for concrete software applications provide fertile ground for teaching students about programming language design. Implementing a domain-specific language forces students to confront many of the same issues that arise when implementing the core of a more general-purpose language. Focusing on domain-specific languages that underlie recognizable software products, however, casts the problem in software engineering terms that students find compelling. Moreover, these applications provide concrete artifacts that help students get off the ground with language design. Our approach contrasts most directly with that of teaching programming languages via interpreters. That approach tends to have students implement either a small abstract language core or fragments of real languages (sometimes using the same language as both meta and object language).
While we use this approach successfully in our own upper-level programming languages courses, the domain-specific examples have allowed us to bring this material into nonstandard courses with a broader range of students. In our experience, many first-year students simply don’t understand the concept of meta-circular interpreters (à la SICP (Abelson et al. 1996)) at all, while finding domain-specific language interpreters an exciting challenge. Both the interpreter- and macro-based implementations offer pedagogic strengths. On the one hand, the macro solution emphasizes reuse at many levels: the host-language syntax, implementation, and the programming environment tools. As a result, the solution is approximately one tenth the size of the full interpreter. The time savings are similar. This implementation style teaches students the importance of reuse, as well as language features to look for in a potential host language. The coolness factor of doing a complicated task with macros is also appealing and motivating to many students. On the other hand, the interpreter lies on a clear conceptual path from other programming problems, and is thus easier for less experienced students to grasp. We also find the interpreter forces students to confront certain fundamental issues that are easy to overlook in a macro-based solution (such as identifiers in the object language). In the interpreter, students have to manage their own mapping between identifiers and values. In the macro-based solution, students can fall through to treating identifiers in the host language with simple binding forms as shown in Figure 4; while elegant, this masks the core issues of namespace separation. Our positive experiences with domain-specific languages projects should be widely replicable. Hopefully, this report can guide others in designing their own projects. 
Naturally, there are many other real-world applications of our ideas, and we are hoping that the community will create a repository of similar case studies for the benefit of all language instructors.
Towards HPSG-based Concurrent Machine Translation via Oz

Liviu-Virgil Ciortuz
LIFL, University of Lille I – France and University of Iasi – Romania

Abstract — We introduce CHALLENGER 2, a demonstrative project in the field of concurrent Natural Language Processing/Machine Translation. This paper gives an overview of the purpose of this project, the linguistic model involved, its computational support, and the current state of work. An interpreter and a compiler written in DFKI Oz for DFL – a very expressive (kernel) language for HPSG implementation, built on top of a feature constraint system [5] using F-logic semantics [7] – are briefly presented and evaluated.

1. Introduction: Project Presentation

Automated Machine Translation (MT) is, for some of us, not a dream. It is (or it may be) just a task to be accomplished. To do it, one simply has to choose:
- the suitable linguistic model, and
- the supporting programming paradigm and language.

The CHALLENGER 2 project has decided, respectively, on:
- HPSG – the most widely used frame-based descriptive theory of language, and
- Oz – the first truly multi-paradigm programming language, underpinned by a comprehensive theoretical model.²

Somewhat hidden in between, and in fact the “fixed point” of the current state of work in our project, is DFL, a logic Data Frame-based Language for HPSG implementation. DFL is built on top of a constraint system recasting the F-logic operational semantics [7]. It offers an expressive way to reason on frame data structures, extending in some particular respects Aït-Kaci’s ψ-terms [1] and Smolka and Treinen’s logic records [12]. An interpreter for DFL has already been written in Oz, and a compiler translating DFL logic programs to Oz is currently being implemented at LIFL, France.

The second section of this paper presents the link between HPSGs and DFL, while the third links DFL to Oz, making (we hope) some interesting points with respect to the above-mentioned implementations.
The fourth section gives a first simple example of automatic translation from English into French and Romanian. The stated goal of this project is to do demonstrative concurrent Machine Translation from English into Romanian and French.

2. From HPSGs to DFL

Head-driven Phrase Structure Grammars [10] deal with signs, feature-based constructs usually called frames. They express in a unitary form both linguistic data and the relations between these data (“principles”). For instance, the simple HPSG grammar implemented by one of the DFKI Oz demonstrative programs can be visualized as the frame in Figure 1.³

Fig. 1. A HPSG sign

OSF was the first constraint system conceived to reason on order-sorted feature constructs [2]. It conceptualizes LIFE, an extension of Prolog with (unification on) ψ-terms over an a priori given sort hierarchy [1]. OSF was then followed by CFT, the base constraint system in Oz [3]. Using the terminology of [7], we can say that while ψ-terms have (only) first-order typing features, logic records in CFT have (only) functional features.⁴

² There are a number of Prolog-based systems doing HPSG implementation. The most important are ALE (Attribute Logic Engine), by Bob Carpenter and Gerald Penn at Carnegie Mellon University, and HPSG-PL, by Fred Popowich, Sandi Kodric and Carl Vogel at Simon Fraser University, Canada [11].

³ The * marks are tags, + denotes string concatenation, the <> brackets enclose different values for multiple-valued features, _ is the anonymous tag, and () enclose (maybe empty) ordered sequences.

F-logic, a new foundation for frame-based and object-oriented languages [7], introduces F-terms, constructs with both functional and typing features. The so-called Well-Typing Condition Principle links function values to the corresponding types, possibly involving other principles like Type Inheritance, Argument Sub-Typing and Range Super-Typing.
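The Well-Typing Condition and Type Inheritance can be made concrete with a small sketch. The following Python fragment is our own illustration (not F-logic or DFL code; the names `typing`, `facts`, `isa`, and `close` are ours): from a typing constraint person[fri ⇒ person], a function fact albert[fri → lucy], and the membership albert : person, it derives lucy : person.

```python
# Illustrative sketch (names ours, not DFL/F-logic syntax): derive new
# class memberships from typing constraints and function facts using the
# well-typing rule: if obj : cls, cls[feat => rng], and obj[feat -> val],
# then val : rng.

typing = {("person", "fri"): "person"}   # (class, feature) -> range type
facts = {("albert", "fri"): "lucy"}      # (object, feature) -> value
isa = {("albert", "person")}             # object : class memberships

def close(isa, typing, facts):
    """Repeatedly apply well-typing until no new memberships appear."""
    isa = set(isa)
    changed = True
    while changed:
        changed = False
        for (obj, feat), val in facts.items():
            for (cls, tfeat), rng in typing.items():
                if feat == tfeat and (obj, cls) in isa and (val, rng) not in isa:
                    isa.add((val, rng))  # e.g. lucy : person
                    changed = True
    return isa

assert ("lucy", "person") in close(isa, typing, facts)
```

The fixpoint loop mirrors the "hierarchy completer" role: memberships are added until no rule instance can fire.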
F-logic allows F-terms to contain multi-valued features and a higher-order syntax, so object labels and features (with arguments, values and types) can be represented by Prolog terms.⁵ F-logic’s completeness w.r.t. a first-order semantics has been proven.

We argue that:
- F-terms can serve to adequately represent HPSGs, and
- reasoning on them can be done quite nicely in the functional object-oriented concurrent constraint language Oz.

For the first point, consider the simple task of writing a program in F-logic that parses with the HPSG grammar given above. The program in Figure 2 does this work similarly to the previously mentioned DFKI Oz demonstrative program.⁶ The first two clauses, C1 and C1', impose that the results of rule application are phrases (i.e., two signs L and R should always combine within a sign), and then give the sign's feature structure according to which one, either R or L, is the head of the sign. The next two clauses, C2 and C2', condition the building of a phrase on the satisfaction of three principles: the Subcategorization principle⁷, saturated complements⁸, and the Head-feature principle⁹. Clauses C3 and C4 carry out lexical recognition and launch parsing. The intermediary parsing function (parsel, defined by C5 and C5') implies the application of grammar rules (by using the move function; see clauses C6 and C6'). Finally, clause C7 prepares for the introduction of lexical entries.

The second point stated above is the central subject of the next section. DFL is a (constraint) logic language doing Prolog-like inferences on Horn clauses over F-terms. It allows one to reason on a partially known/specified sort hierarchy. The sort hierarchy can be concurrently completed during goal solving. A bidirectional suspend-and-resume mechanism relates the goal solver to the hierarchy completer. The feature constraint system underlying DFL was presented in [5].
⁴ Variables in Oz allow one to reason on features as higher-order constructs.

⁵ For efficiency and also sufficiency reasons, we use only Datalog terms.

⁶ In this example, the F-logic syntax was enhanced with the selection operator well known from object-oriented languages, assumed left-associative, and with a right-associative operator corresponding to list construction; both have lower priority than ','.

⁷ The head's subcategory is a 'cons' made of (1) the category of the complement daughter of the phrase and (2) the phrase's subcategory.

⁸ The subcategory of the complement daughter of the phrase should be empty.

⁹ The phrase's category is the category of its head.

For example, given the simple DFL program in Figure 3, the execution of the goal lucy[happy] will suspend until the completion procedure "closes" the program by adding the clause (D7) lucy:person, due to Type Inheritance and Well-Typing Conditioning between clauses (D2) and (D6).

Order-sorted unification, to be implemented in the near future, is expected to increase the DFL interpreter's efficiency. It will thus become possible to extend Peter Van Roy's idea for implementing ψ-term unification using finite domain constraints [8] to the concurrent specification of domains.

The DFL interpreter heavily exploits Oz concurrency. Stateful variables acting as unit substitutions were defined and then used for unification, thereby working around the monotonic constraint handling on Oz (stateless) variables. There are two versions of stateful variables implemented in the DFL interpreter: one involves object-orientation, the other generates variables as concurrent agents. Sequentialization control in this last version is under study. Frame unification is in progress at the time of writing this paper.
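The frame unification at the heart of such an interpreter can be sketched independently of Oz. The following Python fragment is our own illustration (not the DFL implementation): frames are nested dicts, atoms are strings, and unification merges features recursively, failing on conflicting atomic values.

```python
# Illustrative sketch (not the DFL/Oz implementation): frames as nested
# dicts, atoms as strings; unification merges features recursively and
# returns None on conflicting atomic values.

def unify(f, g):
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for feat, val in g.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None          # feature values clash
                out[feat] = sub
            else:
                out[feat] = val          # feature present only in g
        return out
    return f if f == g else None         # atoms must match exactly

a = {"cat": "np", "agr": {"num": "sg"}}
b = {"agr": {"num": "sg", "per": "3"}}
assert unify(a, b) == {"cat": "np", "agr": {"num": "sg", "per": "3"}}
assert unify(a, {"cat": "vp"}) is None   # clash on cat
```

A real implementation additionally needs tags (shared substructure) and sort-hierarchy lookup when atoms differ; this sketch shows only the recursive feature-merging skeleton.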
The Oz code in Figure 4 is a simplified version of the main function in the DFL interpreter. Full object-oriented capacity of Oz will be highly addressed in the DFL into Oz compiler. 3.2 Compiling DFL into Oz Due to objective limitations, the strength of this subsection is not on the HPSP/DFL into Oz compiling program. We have chosen instead to show how compiled DFL programs look like when translated into Oz programs. Let us consider for instance the translation of the priority given DFL program into Oz, given in figures 5 and 6. We can set up the following points guiding the compilation work: i. Ground terms on identification [a[...]] or value/type (a[... → t]) position in the DFL program translate into Oz classes. See for example the classes Person and Albert corresponding respectively to person and albert. They will be further addressed through message passing. This point applies also for Top, the highest class in the is-a hierarchy. ii. Other ground terms which appear only in the (super)class position of is-a terms (a : c) translate into Oz atoms. This is the case of symmetric and friend. ```plaintext (D1) X[happy] - X:person[fri → Y]. (D2) person[fri =⇒ person]. (D3) Z[f → W] - W[f: symmetric → Z]. (D4) fri: symmetric. (D5) albert:person. (D6) albert[fri =⇒ lucy]. ``` Fig. 3. A simple DFL logic program Due to Type Inheritance and Well-Typing Conditioning between the (D2) and (D6) clauses. iii. Is-a relationships in the DFL program correspond (in part) to class derivations in Oz. Top is derived from UrObject; the other compiled classes are derived from Top. These relationships together with the remaining others (like friend:symmetric) are stored in the global signature defined by the list IsA. The {SubType X} and {TypeB O C} functions return respectively the list of all descendents of X, and True (respectively False) if O is (is not) derived from C w.r.t. IsA. iv. 
Definite ground constraints in DFL translate into Oz features in the case of functional constraints. The "translated" features are unary functions. They are applied to classes by sending messages via a Query/High mechanism.

v. Higher-order functional features in DFL translate into high methods, one for each class, when required. See for example the high method in the Top class. A high method in a class C is a case construct selecting one function for each second-order DFL feature of the sort C. The higher-order protocol in DFL is "reflective" in the sense of [7]. The needed occur-check is implemented via the checkIn/checkOut pair of methods in Top.

vi. Goal evaluation (i.e., entailment for existentially closed constraints in DFL) is managed by the Query function. If no homonymous feature can be found (via the inheritance mechanism of Oz), then explicit access to a higher-order function working for that feature is demanded. This is done by invoking the High function.

```plaintext
fun {Query Prog Goal}
   case {ToEnd Goal} then True
   else
      HG = {Car Goal}
      FC = {GetFirst Cl Goal}
      AS = {GetSub FC}
      OS OG
   in
      {Goal saveTo(OG)}
      {AS saveTo(OS)}
      {ForSomeB Prog
       fun {$ Clause}
          {Clause renew}
          HC = {First {GetClause Clause}}
       in
          case {Unify HG#(AS.list) HC#({GetSub Clause}.list)} then
             NewClause = {New Cl
                          init([Tail {GetClause Clause}] {GetSub Clause})}
             NewGoal = {New LazyList init(NewClause Goal)}
          in
             case {Query Prog NewGoal} then True
             else
                {AS restoreFrom(OS)}
                {Goal restoreFrom(OG)}
                False
             end
          else
             {AS restoreFrom(OS)}
             False
          end
       end}
   end
end
```

Fig. 4. The engine of the DFL interpreter

\textsuperscript{10} Typing constraints in the DFL program are the object of precompilation.
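The Query/High fallback of point (vi), guarded by the checkIn/checkOut occur-check of point (v), can be roughly mimicked outside Oz. The following Python sketch is deliberately simplified and uses our own names (`Obj`, `query`, `NO_VALUE`); it is an illustration of the dispatch idea, not the compiled protocol:

```python
# Query/High dispatch sketch: try a directly defined ("homonymous") feature
# first, otherwise fall back to a higher-order function; a per-object
# check-in set plays the role of the occur-check, so a feature cannot
# recursively demand itself on the same object.

NO_VALUE = object()

class Obj:
    def __init__(self):
        self.feats = {}       # first-order features: name -> fn(obj)
        self.high = {}        # higher-order fallbacks: name -> fn(obj)
        self._checked = set() # occur-check bookkeeping

def query(obj, feature):
    if feature in obj.feats:                  # homonymous feature found
        return obj.feats[feature](obj)
    fn = obj.high.get(feature)
    if fn is None or feature in obj._checked:
        return NO_VALUE                       # unknown feature, or a cycle
    obj._checked.add(feature)                 # checkIn
    try:
        return fn(obj)
    finally:
        obj._checked.discard(feature)         # checkOut

# the symmetric `friend` feature of the running example: albert lists lucy
# directly; lucy's friend is recovered by scanning for someone whose
# friend she is (the higher-order fallback)
albert, lucy = Obj(), Obj()
albert.feats["friend"] = lambda o: lucy
lucy.high["friend"] = lambda o: next(
    (w for w in (albert, lucy)
     if w is not o and query(w, "friend") is o), NO_VALUE)
```

With this setup, `query(lucy, "friend")` succeeds through the fallback, while an unknown feature returns `NO_VALUE` instead of looping.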
```plaintext
create Top from UrObject
   % general attributes + methods
   attr check: nil
   meth checkIn(X ?B)
      case {MemberB X @check} then
         <<checkOut(X)>>
         B = False
      else
         check <- X|@check
         B = True
      end
   end
   meth checkOut(X)
      check <- {FilterB @check fun {$ Y} X \= Y end}
   end
   meth high(HOF ?F)
      % specific (compiled) methods
      case {TypeB HOF symmetric} then
         F = fun {$ X}
                Y in
                case {ForSomeB {FilterB {SubType Top} InheritableB}
                      fun {$ W}
                         case {Query W HOF} == X then Y = W True
                         else False end
                      end}
                then Y
                else NoValue end
             end
      else F = NoValue
      end
   end
end
```

Fig. 5. The Top class in the simple DFL program translated into Oz

Dynamic completion of the sort hierarchy is not (yet) supported in the compiler. The compiler design is under full development.

4. Automated Translation: A First Example

We resume the program in Figure 2. Before explaining how the translation works, let us look at the result of the syntactic analysis. Solving the goal\textsuperscript{11} "the girl is nice"[parse → Z] generates the goal list in Figure 8. Due to objective space limitations, this list is selective; failed steps are omitted. It should also be understood that running, for example, the clause C5 is followed by applying C1 and C2 (the last one is demanded by static type checking, see C1). Note that when solving the G3 goal, the program gets g(the, girl) as the representation\textsuperscript{12} of "the girl", with the following sign description:

\textsuperscript{11} Here and in the sequel, a string "a b c" will be assimilated with the list of literals (a, b, c).
\textsuperscript{12} The functional symbol g, with arity 2, is automatically associated by the DFL interpreter to the clause C1 to identify – in the sense of F-logic, through Skolemization – the first value of the feature rule. Similarly, h is associated to the second value of the same feature.
```plaintext
create Person from Top
   feat happy: fun {$ X}
                  Y = {Query X friend}
               in
                  if Y \= NoValue then True else False end
               end
end

create Albert from Person
   feat friend: fun {$ X} Lucy end
end

create Lucy from Person end

% the IsA signature
IsA = [Person#Top  Albert#Person  Lucy#Person
       symmetric#Top  friend#symmetric]
```

Fig. 6. Classes in the simple DFL program translated into Oz

```plaintext
NoValue = {NewName}

fun {High Object Feature}
   F in {Object high(Feature F)} F
end

fun {CheckIn Object Feature}
   B in {Object checkIn(Feature B)} B == True
end

fun {Query Object Feature}
   case {MemberB Feature
         {Map {Record.toListInd Object} fun {$ Y#_} Y end}}
   then {Object.Feature Object}
   else G = {High Object Feature}
   in
      case G \= NoValue andthen {CheckIn Object Feature}
      then {G Object}
      else NoValue end
   end
end
```

Fig. 7. The Query&High protocol for DFL program execution

```plaintext
g(the, girl)[ phon    → (the, girl)
              subcat  → nil
              headDtr → girl[cat → noun  gender → fem]
              compDtr → the ]
```

Similarly, for "is nice", at G5 we obtain:

```plaintext
h(is, nice)[ phon    → (is, nice)
             subcat  → (_)
             headDtr → is[cat → verb]
             compDtr → nice ]
```

The subcat feature of h(is, nice) will be instantiated to (noun), cf. the subcategorization principle, C0, and C1. And finally, for the whole sentence "the girl is nice", the structure is:

```plaintext
g(g(the, girl), h(is, nice))[ phon    → (the, girl, is, nice)
                              subcat  → nil
                              headDtr → h(is, nice)[cat → verb  gender → fem]
                              compDtr → g(the, girl) ]
```

The DFL clauses designed to carry out the translation are given in Figure 9. C8 translates sentences, while C9 and C10 do a similar job for verb phrases and noun phrases, respectively.

```plaintext
C8:  P: phrase[translate@l → P: compDtr: translate@l + P: headDtr: translate@l]
       : [P: cat → verb  compDtr: cat → noun].
C9:  P: phrase[translate@l, Gen → P: headDtr: translate@l]
       : [P: cat → verb  compDtr: cat → noun].
```
```plaintext
C10: P: phrase[translate@l → P: headDtr: translate@l, det]
       : [P: cat → noun  compDtr → the].
```

Fig. 9. The DFL clauses for translation

Now, asking for the translation, ? "the girl is nice"[translate@fr → X  translate@ro → Y], the system will resume the precedent goal list, as shown in Figure 10.

```plaintext
G9:  g(g(the, girl), h(is, nice)) nil [translate@fr → X]              (+C8)
G10: g(the, girl)[translate@fr → X1], h(is, nice)[translate@fr → X2],
     X1[append@X2 → X].                                               (+C10, C9)
G11: la, fille, nil [append@ est, jolie, nil → X] ...
```

Fig. 10. Solving the translation
So, the French translation is X = "la fille est jolie" and, similarly, the Romanian translation is Y = "fata este frumoasa".

5. Conclusions

We have implemented a nucleus for a demonstrative NLP/MT system. For now, the accent is on the investigation of computational resources. The expressivity of both the descriptive language (DFL) for HPSG implementation and the supporting programming environment, Oz, is of primary interest. In fact, the main interest we set down while starting this project was to be an open-minded guest in the interesting land newly created by the Oz language and computational model. In other words:

- to evaluate, from different (language and implementation) perspectives, the expressivity of its logic records;
- to learn to abstract knowledge using its higher-order functionality;
- to wisely increase constraints on stateless constructs while inheriting on stateful objects;
- to overcome partial knowledge by concurrently demanded computations.

To achieve all the above-mentioned goals on the ground of MT became a truly challenging task for us!

References
A short fscaret package introduction with examples
Jakub Szlek (j.szlek@uj.edu.pl)
October 7, 2016

1 Installation

As is the case with caret, fscaret uses a large number of R packages, but it loads them only when needed. To take full advantage of the package, it is recommended to install it with both dependent and suggested packages. Install fscaret with the command

```r
> install.packages("fscaret", dependencies = c("Depends", "Suggests"))
```

from the R console. Be advised! Running the above code installs all possible packages (in some cases more than 240!), but it is necessary to fully benefit from fscaret. If you wish to use only specific algorithms, check which parameter from funcRegPred corresponds to which package, and in that case install fscaret with the command

```r
> install.packages("fscaret", dependencies = c("Depends"))
```

2 Overview

In general, fscaret is a wrapper module. It uses the engine of caret to build models and to get the variable rankings from them. When the models are built, the package tries to draw variable importance from them directly or indirectly. The raw feature ranking would be worthless, since results in this form cannot be compared. That is why the scaling process was introduced within fscaret. The developed models are used to get prediction errors (RMSE and MSE). Finally, the output is produced. It contains the data frame of variable importance, the errors for all built models, and the preprocessed data set if preprocessData = TRUE when calling the main fscaret() function. It is also possible to retrieve the original models built with the train() function of caret; to do this, set saveModel=TRUE in the call to the fscaret() function.

In summary, the whole feature ranking process can be divided into:

1. User provides input data sets and a few settings
2. Models are built
3. Variable rankings are drawn out of the models
4.
Generalization error is calculated for each model
5. Variable rankings are scaled according to generalization error
6. The results are gathered in tables

3 Input

3.1 Data set format

Be advised that \texttt{fscaret} assumes that data sets are in MISO format (multiple input, single output). An example of such a data set (with header) is:

<table>
<thead>
<tr><th>Input_no1</th><th>Input_no2</th><th>Input_no3</th><th>...</th><th>Output</th></tr>
</thead>
<tbody>
<tr><td>2</td><td>5.1</td><td>32.06</td><td>...</td><td>1.02</td></tr>
<tr><td>5</td><td>1.21</td><td>2.06</td><td>...</td><td>7.2</td></tr>
</tbody>
</table>

For more information on reading files in R, please type \texttt{?read.csv} in the R console. If the \texttt{fscaret()} function is switched to \texttt{classPred=TRUE}, the output must be in binary format (0/1).

3.2 An example

There are plenty of methods to introduce data sets into R. The best way is to read a file (presumably csv with \texttt{tab} as the column separator) as follows:

1. Select the file name

\begin{verbatim}
> basename_file <- "My_database"
> file_name <- paste(basename_file,".csv",sep="")
\end{verbatim}

2. Read the data file into a matrix

\begin{verbatim}
> matrixTrain <- read.csv(file_name,header=TRUE,sep="\t",
+ strip.white = TRUE, na.strings = c("NA",""))
\end{verbatim}

3. Put the loaded matrix into a \texttt{data.frame}

\begin{verbatim}
> matrixTrain <- as.data.frame(matrixTrain)
\end{verbatim}

Be advised to use \texttt{header=TRUE} when your data set has column names in the first row and \texttt{header=FALSE} when there are no column names. Starting from \texttt{fscaret} version 0.9, the setting \texttt{header} is fixed to \texttt{TRUE}. The last step is obligatory before introducing data into \texttt{fscaret} functions, as they check whether the presented data is in \texttt{data.frame} format.
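The MISO layout and the binary-output constraint for classification can be checked mechanically. The following Python sketch is our own illustration (not part of fscaret) of what such a validation amounts to:

```python
# MISO format check: every row carries N >= 1 input columns followed by
# exactly one output column; for classification (classPred=TRUE) the
# output column must be binary (0/1).

def check_miso(rows, class_pred=False):
    """rows: list of equal-length numeric rows, last column = output."""
    if not rows or len(rows[0]) < 2:
        return False                       # need at least 1 input + 1 output
    width = len(rows[0])
    if any(len(r) != width for r in rows):
        return False                       # a ragged table is not MISO
    if class_pred and any(r[-1] not in (0, 1) for r in rows):
        return False                       # classification output must be 0/1
    return True
```

For example, the two sample rows from the table above pass the regression check, while a fractional output fails the classification check.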
4 Function \texttt{fscaret()}

4.1 Settings

All the settings are documented in the reference manual of \texttt{fscaret}: \url{http://cran.r-project.org/web/packages/fscaret/fscaret.pdf}. Here we will concentrate on only a few valuable ones.

- \texttt{installReqPckg} The default setting is \texttt{FALSE}; if set to \texttt{TRUE}, all packages from the sections ‘Depends’ and ‘Suggests’ of \texttt{DESCRIPTION} are installed prior to the calculations. Be advised that you must be logged in as root (admin) if you want to install packages for all users.

- \texttt{preprocessData} The default setting is \texttt{FALSE}; if set to \texttt{TRUE}, data preprocessing is performed prior to the calculations, which in short is realized in two steps:
  1. Check for near zero variance predictors and flag a predictor as near zero if:
     - the percentage of unique values is less than 20
     - the ratio of the most frequent to the second most frequent value is greater than 20
  2. Check for susceptibility to multicollinearity:
     - calculate the correlation matrix
     - find variables with correlation of 0.9 or more and delete them

- \texttt{regPred} The default option is \texttt{TRUE}, so the regression models are applied.

- \texttt{classPred} The default option is \texttt{FALSE}; if you set \texttt{classPred=TRUE}, remember to set \texttt{regPred=FALSE}.

- **myTimeLimit** Time limit in seconds for single model development. Be advised that some models need that much time to be built; if the option is omitted, a standard 24-hour time limit is applied. This feature is off on non-Unix-like systems.
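The two preprocessData steps can be sketched language-agnostically. The pure-Python version below uses the same cut-offs (20% unique values, frequency ratio 20, correlation 0.9), but the function names and exact formulas are our own illustration, not fscaret's internals:

```python
# Sketch of the two preprocessing steps: flag near-zero-variance columns,
# then drop one column of each highly correlated pair.

def near_zero_var(col, freq_cut=20.0, unique_cut=20.0):
    # flagged when few unique values AND one value dominates
    n = len(col)
    counts = sorted((col.count(v) for v in set(col)), reverse=True)
    pct_unique = 100.0 * len(counts) / n
    freq_ratio = counts[0] / counts[1] if len(counts) > 1 else float("inf")
    return pct_unique < unique_cut and freq_ratio > freq_cut

def pearson(x, y):
    # plain Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def preprocess(cols, corr_cut=0.9):
    """cols: dict name -> list of numbers; returns surviving column names."""
    keep = [k for k, v in cols.items() if not near_zero_var(v)]
    dropped = set()
    for i, a in enumerate(keep):
        for b in keep[i + 1:]:
            if a not in dropped and b not in dropped and \
               abs(pearson(cols[a], cols[b])) >= corr_cut:
                dropped.add(b)          # delete the later column of the pair
    return [k for k in keep if k not in dropped]
```

For instance, a column that is 99 times `1` and once `2` is flagged as near-zero-variance, and of two perfectly correlated columns only the first survives.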
- **Used.funcRegPred** Vector of regression models to be used; for all available models, enter `Used.funcRegPred="all"`. The listing of functions is:

```r
> library(fscaret)
> data(funcRegPred)
> funcRegPred
```

```
  [1] "ANFIS"           "avNNet"          "bag"
  [4] "bagEarth"        "bayesglm"        "bdk"
  [7] "blackboost"      "Boruta"          "bstLs"
 [10] "bstSm"           "bstTree"         "cforest"
 [13] "ctree"           "ctree2"          "cubist"
 [16] "DENFIS"          "dnn"             "earth"
 [19] "elm"             "enet"            "evtree"
 [22] "extraTrees"      "FIR.DM"          "foba"
 [25] "FS.HGD"          "gam"             "gamboost"
 [28] "gamLoess"        "gamSpline"       "gaussprLinear"
 [31] "gaussprPoly"     "gaussprRadial"   "gbm"
 [34] "gcvEarth"        "GFS.FR.MOGAL"    "GFS.LT.RS"
 [37] "GFS.Thrift"      "glm"             "glmboost"
 [40] "glmnet"          "glmStepAIC"      "HYFIS"
 [43] "icr"             "kernelpls"       "kknn"
 [46] "knn"             "krlsPoly"        "krlsRadial"
 [49] "lars"            "lars2"           "lasso"
 [52] "leapBackward"    "leapForward"     "leapSeq"
 [55] "lm"              "lmStepAIC"       "logicBag"
 [58] "logreg"          "M5"              "M5Rules"
 [61] "mlp"             "mlpWeightDecay"  "neuralnet"
 [64] "nnet"            "nodeHarvest"     "parRF"
 [67] "partDSA"         "pcaNNet"         "pcr"
 [70] "penalized"       "pls"             "plsRglm"
 [73] "ppr"             "qrf"             "qrnn"
 [76] "relaxo"          "rf"              "ridge"
 [79] "rknn"            "rknnBel"         "rlm"
 [82] "rpart"           "rpart2"          "RRF"
 [85] "RRFglobal"       "rvmLinear"       "rvmPoly"
 [88] "rvmRadial"       "SBC"             "simpls"
 [91] "spls"            "superpc"         "svmBoundrangeString"
 [94] "svmExpoString"   "svmLinear"       "svmPoly"
 [97] "svmRadial"       "svmRadialCost"   "svmSpectrumString"
[100] "treebag"         "widekernelpls"   "WM"
[103] "xyf"
```

- **Used.funcClassPred** Vector of classification models to be used; for all available models, enter `Used.funcClassPred="all"`. The listing of functions is:

```r
> library(fscaret)
> data(funcClassPred)
> funcClassPred
  [1] "ada"             "bagFDA"          "brnn"
  [4] "C5.0"            "C5.0Cost"        "C5.0Rules"
  [7] "C5.0Tree"        "CSimca"          "fda"
 [10] "FH.GBML"         "FRBCS.CHI"       "FRBCS.W"
 [13] "GFS.GCCL"        "gpls"            "hda"
 [16] "hdda"            "lda2"            "Linda"
 [19] "lda"             "LogitBoost"      "lssvmLinear"
 [22] "LMT"             "lssvmPoly"       "lvq"
 [25] "lssvmRadial"     "Mlda"            "multinom"
 [28] "nb"              "oblique.tree"
                                          "OneR"
 [31] "ORFLog"          "ORFpls"          "ORFridge"
 [34] "ORFsvm"          "pam"             "PART"
 [37] "pda"             "pda2"            "PenalizedLDA"
 [40] "plr"             "protoclass"      "qda"
 [43] "QdaCov"          "rFerns"          "rda"
 [46] "rFerns"          "rFlda"           "rocagc"
 [49] "rf"              "RFLda"           "RsImca"
 [52] "rpartCost"       "rrlda"           "RSimca"
 [55] "sda"             "sddaLDA"         "sddaQDA"
 [58] "SLAVE"           "slda"            "smda"
 [61] "sparseLDA"       "stepLDA"         "stepQDA"
 [64] "svmRadialWeights" "vbmPradial"     "avNNet"
 [67] "bag"             "bagEarth"        "bayesglm"
 [70] "bdk"             "blackboost"      "Boruta"
 [73] "bstLs"           "bstSm"           "bstTree"
 [76] "cforest"         "ctree"           "ctree2"
 [79] "dnn"             "earth"           "elm"
 [82] "evtree"          "extraTrees"      "gam"
 [85] "gamboost"        "gamLoess"        "gamSpline"
 [88] "gaussprLinear"   "gaussprPoly"     "gaussprRadial"
 [91] "gbm"             "gcvEarth"        "glm"
 [94] "glmboost"        "glmnet"          "glmStepAIC"
 [97] "kernelpls"       "kknn"            "knn"
[100] "logicBag"        "logreg"          "mlp"
[103] "mlpWeightDecay"  "nnet"            "nodeHarvest"
[106] "parRF"           "partDSA"         "pcaNNet"
[109] "pls"             "plsRglm"         "rf"
[112] "rknn"            "rknnBel"         "rpart"
[115] "rpart2"          "RRF"             "RRFglobal"
[118] "simpls"          "spls"            "svmBoundrangeString"
[121] "svmExpoString"   "svmLinear"       "svmPoly"
[124] "svmRadial"       "svmRadialCost"   "svmSpectrumString"
[127] "treebag"         "widekernelpls"   "xyf"
```

- **no.cores** The default setting is `NULL`, so as to maximize CPU utilization and use all available cores.

- **missData** This option handles missing data. Possible values are:
  – **missData="delRow"** – delete observations (rows) with missing values,
  – **missData="delCol"** – delete attributes (columns) with missing values,
  – **missData="meanCol"** – impute the mean to missing values,
  – **missData=NULL** – no action is taken.

- **supress.output** The default option is **FALSE**, but it is sometimes justified to suppress the output of intermediate functions and focus on the ranking predictions.

- **saveModel** The default option is **FALSE**, as some models have a large size, and saving all of the obtained models would lead to 100-500 MB RData files.
Keep in mind that loading such large objects into R requires a lot of RAM; e.g., a 140 MB RData file consumes about 1.5 GB of RAM. On the other hand, one may want to utilize the developed models. To export a model from a result of the `fscaret()` function, e.g. the `myFS` object:

```r
> my_res_foba <- myFS$VarImp$model$foba
> my_res_foba <- structure(my_res_foba, class="train")
```

### 4.2 Regression problems - an example

A simple example of a regression problem utilizing the data provided with `fscaret`:

```r
> library(fscaret)
> data(dataset.train)
> data(dataset.test)
> trainDF <- dataset.train
> testDF <- dataset.test
> myFS<-fscaret(trainDF, testDF, myTimeLimit = 5, preprocessData=TRUE,
+               Used.funcRegPred=c("pcr","pls"), with.labels=TRUE,
+               supress.output=TRUE, no.cores=1)
> myRES_tab <- myFS$VarImp$matrixVarImp.MSE[1:10,]
> myRES_tab <- subset(myRES_tab, select=c("pcr","pls","SUM%","ImpGrad","Input_no"))
> myRES_rawMSE <- myFS$VarImp$rawMSE
> myRES_PPlabels <- myFS$PPlabels
```

### 4.3 Classification problems - an example

An example of a classification problem utilizing the `data(Pima.te)` data set from the `MASS` package:

```r
> library(MASS)
> # make testing set
> data(Pima.te)
> Pima.te[,8] <- as.numeric(Pima.te[,8])-1
> myDF <- Pima.te
> myFS.class<-fscaret(myDF, myDF, myTimeLimit = 5, preprocessData=FALSE,
+               with.labels=TRUE, classPred=TRUE, regPred=FALSE,
+               Used.funcClassPred=c("knn","rpart"), supress.output=TRUE, no.cores=1)
> myRES.class_tab <- myFS.class$VarImp$matrixVarImp.MeasureError
> myRES.class_tab <- subset(myRES.class_tab, select=c("knn","rpart","SUM%","ImpGrad","Input_no"))
> myRES.class_rawError <- myFS.class$VarImp$rawMeasureError
```

5 Output

For regression problems, as stated previously, there are three lists of outputs.

1.
Feature ranking and generalization errors for models:

```
> # Print out the Variable importance results for MSE scaling
> print(myRES_tab)
```

<table>
<thead>
<tr><th>pcr</th><th>pls</th><th>SUM%</th><th>ImpGrad</th><th>Input_no</th></tr>
</thead>
<tbody>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
<tr><td>1.000000</td><td>0.000000</td><td>4.000000</td><td></td><td></td></tr>
</tbody>
</table>

2. Raw RMSE/MSE errors for each model

```
> # Print out the generalization error for models
> print(myRES_rawMSE)
```

<table>
<thead>
<tr><th>pcr</th><th>pls</th></tr>
</thead>
<tbody>
<tr><td>716.6597</td><td>671.8195</td></tr>
</tbody>
</table>

3. Reduced data frame of inputs after preprocessing

```
> # Print out the reduced number of inputs after preprocessing
> print(myRES_PPlabels)
```

As one can see, only two models ("pcr", "pls") were used in the example; to use all available models, set the option `Used.funcRegPred="all"`. The results can be presented on a bar plot (see Figure 1). Then an arbitrary feature reduction can be applied.

For classification problems, there are two lists of outputs.

1. Feature ranking and errors (F-measure) for models:

```
> # Print out the Variable importance results for F-measure scaling
> print(myRES.class_tab)
```

2.
Raw F-measures for each model

```
> # Print out the generalization error for models
> print(myRES.class_rawError)
```

As one can see, only two models ("knn", "rpart") were used in the example; to use all available models, set the option `Used.funcClassPred="all"`. The results can be presented on a bar plot like the previous ones. Then an arbitrary feature reduction can be applied.

Figure 1: A sum of the feature rankings of models trained and tested on `dataset.train`; two models were used ("pcr", "pls").

6 Known issues

1. In some cases, during the model development stage, users can encounter "caught segfault" errors. This is highly dependent on the input data and the model. The nature of the error prevents the function `fscaret()` from returning proper results; therefore no scaling of variable importance is done and no summary of the feature ranking is presented. The workaround is to exclude the troublesome method from the calculations. If you encounter odd behaviour in your working script, e.g. the `VarImp` result of a `myFS` object is an empty `list()`, search for "segfault" in the Rout file. In the example given below, "partDSA" is the troublemaker. Then run the computations once again.

```r
> library(fscaret)
> myFuncRegPred <- funcRegPred[which(funcRegPred!="partDSA")]
> print(myFuncRegPred)
> myFS<-fscaret(trainDF, testDF, myTimeLimit = 12*60*60, preprocessData=TRUE, regPred=TRUE,
+               Used.funcRegPred=myFuncRegPred, with.labels=TRUE,
+               supress.output=TRUE, no.cores=NULL, saveModel=FALSE)
```

7 Acknowledgments

This work was funded by the Poland-Singapore bilateral cooperation project no. 2/3/POL-SIN/2012.

8 References
Specifying Effective Non-Functional Requirements
John Terzakis
Intel Corporation
June 24, 2012
ICCGI Conference, Venice, Italy
Version 1.0

Legal Disclaimers
Intel Trademark Notice: Intel and the Intel Logo are trademarks of Intel Corporation in the U.S. and other countries.
Non-Intel Trademark Notice: *Other names and brands may be claimed as the property of others.

Agenda
• Requirements Overview
• Natural Language & Its Issues
• Natural Language Non-Functional Requirements
• Planguage: A Technique for Writing Effective Non-Functional Requirements
• Essential Planguage Keywords for Non-Functional Requirements
• Using Planguage to Rewrite the NFR Examples to be Verifiable
• Wrap up

Requirements Overview

What is a Requirement?
A **requirement** is a statement of one of the following:
1. **What** a system must do
2. A known **limitation** or **constraint** on resources or design
3. **How well** the system must do what it does
The first definition is for **Functional Requirements**. The second and third definitions are for **Non-Functional Requirements (NFRs)**.

Examples of Functional and Non-Functional Requirements: Video over IP Conference Calling
Functional Requirements
• Add Participant
• Count Participants
• Drop Participant
• Lock Call to New Participants
• Summon Operator
• Mute Microphone
Non-Functional Requirements
• Voice and Video Quality
• Reliability
• Availability
• Ease of Use
• Cost
• Localization

Functional Requirements
A Functional Requirement:
- is a statement of what a system must do (#1)
- is measured in "yes" or "no" terms
- usually employs the word "shall"
Examples:
Add Participant: "The software shall display an option to add a participant."
Summon Operator: "The software shall summon the operator if the participant clicks the Operator Help icon."

Non-Functional Requirements (1 of 2)
A **Non-Functional Requirement**:
- is a known **limitation** or **constraint** on resources or design (#2)
- is usually measured in yes/no terms
- can include documentation,
marketing collateral, product localization, legal compliance restrictions - typically employs the word “must” Examples: **Cost** “The retail cost of the software must be between $175 and $199.” **Localization** “The help file must be released in English, French and Spanish.” Non-Functional Requirements (2 of 2) A **Non-Functional Requirement**: - is a measure of *how well* the system must do what it does (#3) - is measured over an interval or range - usually employs the word “must” - includes the “ilities” (e.g., quality, reliability, scalability, availability) This type of requirement is problematic within most Requirements Engineering practices, and will be the focus of this tutorial. We’ll look at good examples later. Natural Language & Its Issues What is Natural Language? Natural language is unconstrained, informal language as it is used in everyday speech and writing (e.g., email). Natural language is the most common medium for expressing requirements in most industries; it is flexible, easy to use, and requires no additional training. Exercise: Write Natural Language NFRs The instructor will divide the class into small groups 1. Write 3-4 natural language non-functional requirements for the purchase of a new car. Example: The car must be reliable. 2. Discuss whether these non-functional requirements are verifiable or not Issues with Natural Language NFRs While useful in everyday interactions, natural language is fertile ground for a number of issues relating to requirements (functional as well as non-functional) including: • Weak words • Unbounded lists • Implicit Collections • Ambiguity • Issues around verb choice, semantics, and grammar Natural language tends to produce NFRs that are not verifiable Weak Words Weak words are subjective or lack a common or precise definition. 
Examples include: - Quick, Quickly - Easy, Easily - Timely - Fast - Frequently - Intuitive - Feel, Feeling - Normal - Reliable - State-of-the-art - Effortless - Friendly, User-friendly - Secure - Immediate This is just a partial list. Don’t use weak words – define what you mean using precise, measurable terms. **Unbounded Lists** An **unbounded list** is one that lacks a starting point, an end point, or both. Examples include: - At least - Including, but not limited to - Or later - Such as **Unbounded lists are impossible to design for or to test against** For example, how would you design and test a system that “must maintain a list of **at least** 250 users”? Or, how would you test software that “must install on Windows® Vista or later in under 5 seconds”? Implicit Collections Often, collections of objects within requirements are not explicitly defined anywhere Without a definition, readers may assume an incorrect meaning Example: “The software must support 802.11 and other network protocols supported by competing applications under Linux.” - What is counted as a “competing application”? - What belongs to the collection of “other network protocols”? - What specific protocols of 802.11 are included? - “Linux” is also a collection of OS vendors, versions, and revision levels Ambiguity Ambiguity occurs when a word or statement has multiple meanings or there is doubt about the meaning. These problems (and others) create ambiguity: - Vagueness - Subjectivity - Optionality - Under-specification - Under-reference Ambiguity leads to differences in interpretation amongst the various stakeholders for a requirement. Ambiguity Examples **Vagueness:** “The system must pass between 96-100% of the test cases using current standards for video encoding before launch.” **Subjectivity:** “The debug code must easily and seamlessly integrate with the validation test automation software.” 
**Optionality:** “The software should be tested under as many OSes as possible.” **Under-specification:** “The software must support 802.11n and other network protocols” **Under-reference:** “Users must be able to complete all previously-defined operations in under 5 minutes 80% of the time.” Issues With Verb Choice, Semantics, and Grammar Be careful with verb choice • What is the difference between the words “enable”, “allow”, “assist”, “permit”, “authorize”, and “provide the capability to”? Be careful with each, every, and only • “Each” refers to one; “every” refers to all; both are universal qualifiers • The placement of “only” can totally change the intent of the requirement Avoid grammatical issues • Use of a slash (e.g., “object1/object2”) creates confusion • Is it both terms (object1 and object2) or just one (object1 or object2)? • Use of “and” with a preceding qualifier creates two options • Does the qualifier before the “and” apply to just the term before the “and” or both terms? Verb Choice, Semantics, and Grammar Examples Verb Choice: • The SW must quickly provide the capability to users to access their invoices • The SW must quickly authorize users to access their invoices Each and every: • Unless each user is authenticated, the SW must securely protect the data • Unless every user is authenticated, the SW must securely protect the data Placement of “only”: • Only authorized users can access medical information • Authorized users can only access medical information Grammatical Issues: • The SW must email/log improper access attempts after 3 failures • The SW must rapidly disable the accounts of unregistered users and guests Exercise: Identify the Issues The usability objective of the AlphaBeta Plus client is to be usable by the intended customer at a 5’ distance. The client should be an integrated system that is both reliable and responsive. Reliability and responsiveness are more critical for this device than for PC desktop systems. 
Reliability should be as good as that of consumer home entertainment devices (e.g., TV or VCR) and response to user interaction should be immediate. The applications should provide an easy-to-learn, easy-to-use, and friendly user interface, even more so than PC desktop applications. Users should be able to start using the application immediately after installation. Users should be able to satisfactorily use the device with little instruction. Friendly means being engaging, encouraging, and supportive in use. Users must feel comfortable with the client and must not be given reason to worry about accidentally initiating a destructive event, getting locked into some procedure, or making an error. Feedback for interactions should be immediate, obvious, and appropriate. Natural Language Non-Functional Requirements Examples of Natural Language NFRs - Order processing must be fast - The software must support at least 25 users - Make the web site software reliable - The configuration software should be intuitive to use - The audio software must reproduce music nearly perfectly Do you see any issues with these requirements? Issues Identified 1. Order processing must be fast • How long is “fast”? Seconds, minutes or hours? Can we test “fast”? 2. The software must support at least 25 users • What is the meaning of “support”? Are these concurrent users or not? • How many is “at least” 25 users? 26 users? 200,000 users? 3. Make the web site software reliable • What is “reliable”? Can we test for it? 4. The configuration software should be intuitive to use • “should” implies optionality • What does “intuitive” mean? It is subjective (reader dependent) 5. The audio software must reproduce music nearly perfectly • What does “nearly perfectly” mean? An audiophile will have a different opinion than a casual listener. Effective NFRs Must Be Verifiable For a NFR to be effective, it must be verifiable. 
A requirement is verifiable if it can be proved that the requirement was correctly implemented (i.e., we can test for correct implementation) Requirements are often unverifiable because they contain weak words, utilize unbounded lists, include implicit collections, are ambiguous or have grammatical issues Eliminating these issues is the first step towards writing effective NFRs Planguage: A Technique for Writing Effective Non-Functional Requirements What is Planguage? Planguage is an informal, but structured, keyword-driven planning language - Developed by Tom Gilb in 1988 and explained in detail in his book *Competitive Engineering* - Can be used to create all types of requirements - Is a combination of the words Planning and Language - Is an example of a Constrained Natural Language Planguage aids communication about complex ideas *Competitive Engineering*, Butterworth-Heinemann, 2005 Planguage Planguage provides a rich specification of requirements that results in: - Fewer omissions in requirements - Reduced ambiguity and increased readability - Early evidence of feasibility and testability - Increased requirements reuse - Effective priority management - Better, easier decision making Basic Planguage Keywords & Definitions **Tag**: A unique, persistent identifier **Gist**: A brief summary of the requirement or area addressed **Requirement**: The text that details the requirement itself **Rationale**: The reasoning that justifies the requirement **Priority**: A statement of priority and claim on resources **Stakeholders**: Parties materially affected by the requirement **Status**: The status of the requirement (draft, reviewed, committed, etc.) 
**Owner**: The person responsible for implementing the requirement **Author**: The person that wrote the requirement **Revision**: A version number for the statement **Date**: The date of the most recent revision **Assumptions**: All assumptions or assertions that could cause problems if untrue now or later **Risks**: Anything that could cause malfunction, delay, or other negative impacts on expected results **Defined**: The definition of a term (better to use a glossary) Fuzzy concepts requiring more details: `<fuzzy concept>` A collection of objects: `{item1, item2, …}` The source for any statement: ← --- Basic Planguage Keywords are useful for any requirement, and are sufficient for requirements measured as “present” or “absent” A Simple Planguage Functional Requirement **Tag:** Invoice ← {C. Smith, 07/06/05} **Requirement:** When an Order is shipped and Order Terms are not “Prepaid”, the system shall create an Invoice. **Rationale:** Task automation decreases error rate, reduces effort per order. Meets corporate business principle for accounts receivable. **Priority:** High. If not implemented, it will cause business process reengineering and reduce program ROI by $400K per year. **Stakeholders:** Shipping, finance **Author, Revision, Date:** Julie English, rev 1.0, 5 Oct 05 Choosing Planguage Keywords Recall that requirements generally fall into two categories based on the nature of how they are measured. Functional Requirements are measured in Boolean (simple yes/no) terms as either present or absent in the completed system. - This category includes system functions and constraints. Non-Functional Requirements are typically measured on some scale or interval, not simply “present” or “absent”. - This category includes system qualities and performance levels. Because of the way they are measured, Non-Functional Requirements use some additional Planguage keywords. 
### Additional Planguage Keywords for Non-Functional Requirements <table> <tbody> <tr> <td><strong>Ambition</strong></td> <td>A description of the goal of the requirement (this replaces the Requirement keyword used in functional requirements)</td> </tr> <tr> <td><strong>Scale</strong></td> <td>The scale of measure used to quantify the requirement (e.g., time, temperature, speed)</td> </tr> <tr> <td><strong>Meter</strong></td> <td>The process or device used to establish location on a Scale (e.g., watch, thermometer, speedometer)</td> </tr> <tr> <td><strong>Minimum (Must)</strong></td> <td>The minimum level required to avoid political, financial, or other type of failure</td> </tr> <tr> <td><strong>Target (Goal)</strong></td> <td>The level at which good success can be claimed</td> </tr> <tr> <td><strong>Outstanding (Stretch)</strong></td> <td>A feasible stretch goal if everything goes perfectly</td> </tr> </tbody> </table> A Simple Planguage NFR Tag: Menu Complexity Ambition: Make Accessing Patient Dental History Menus easier Scale: Number of menus Meter: Measured from the login menu to display of the most recent patient dental visit Minimum: 4 Target: 3 Outstanding: 2 Note: the term “easier” in the Ambition is OK since it is qualified by the keywords that follow Notes on Planguage Keywords • Use the keywords that add value to your statement - no more, no less • There are many more keywords to Planguage than presented here – See Competitive Engineering for more examples • Extend Planguage as needed with new keywords - but it’s good to check to see whether there is already a keyword that will work Exercise: Using Planguage for NFRs The instructor will divide the class into the same groups as the previous car purchase exercise. 
Use the following template to write NFRs for the top speed and fuel economy for a new car purchase: Ambition: Scale: Meter: Minimum: Target: Outstanding: Essential Planguage Keywords for Non-Functional Requirements Focus on Essential NFR Planguage Keywords The following Planguage keywords are important for specifying effective Non-Functional Requirements: - Scale - Meter - Minimum - Target - Outstanding Let’s look at all five in detail Scales Scale: The scale of measure used to quantify the statement There are three types of scales: • Natural: Scales with obvious association to the measured quality • Constructed: A scale built to directly measure a quality • Proxy: An indirect measure of a quality # Examples of Scales <table> <thead> <tr> <th>Category</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Natural</td> <td>Time measured in seconds; number of users</td> </tr> <tr> <td>Constructed</td> <td>A 5-point scale <strong>created</strong> to measure perceived sound quality; a 10-point scale <strong>created</strong> to register user satisfaction</td> </tr> <tr> <td>Proxy</td> <td>An in-field MTTF goal <strong>predicted</strong> using pre-release reliability test results; “critical” defect <strong>prediction</strong> for first year of released software based on defect trending during Beta testing</td> </tr> </tbody> </table> Finding Scales Start by looking for a natural scale. 
If none comes to mind: - Create a constructed scale - Look for a proxy scale - Decompose the concept being measured into its parts and try again Other hints: - Use known, accepted scales of measure when possible - Derive new scales from known scales by substituting terms - Incorporate qualifiers in the scales to increase specificity - Don’t confuse scale with meter - Share effective scales with others Meters **Meter**: The process or device used to establish location on a Scale Most meters have an obvious association with the scale they are measuring (e.g., time with a stopwatch) Some meters may require a process or test procedure to be utilized or created # Examples of Meters <table> <thead> <tr> <th>Category</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Natural</td> <td>A stopwatch; a log of users authenticated</td> </tr> <tr> <td>Constructed</td> <td>“Double blind” tests; a user satisfaction survey</td> </tr> <tr> <td>Proxy</td> <td>Stress testing of pre-production software, analyzing results and predicting the Mean Time to Failure (MTTF); validation testing of Beta software, analyzing results and predicting the number of critical defects in the first year of customer release</td> </tr> </tbody> </table> Finding Meters First, study the scale carefully. 
If no meter comes to mind: - Look at references and handbooks for examples and ideas - Ask others for their experience with similar methods - Look for examples within test procedures Once you have a candidate, check to see that: - The meter is adequate in the eyes of all stakeholders - There is no less-costly meter available that can do the same job (or better) - The meter can be employed before product release or completion of the deliverable Examples of Paired Scales and Meters **Tag:** Response Time **Scale:** Time in seconds **Meter:** Measured from mouse click to display of next menu **Tag:** Software Maintainability **Scale:** Average engineering time from report to closure of defects **Meter:** Analysis of 30 consecutive defects reported and corrected during product development **Tag:** Market Share **Scale:** % of Total Available Market **Meter:** Quarterly market survey Remember: Scale = units of measure, Meter = Device or process to measure position on the Scale Minimum, Target & Outstanding Keywords **Minimum**: The minimum level required to avoid political, financial, or other type of failure **Target**: The level at which good success can be claimed **Outstanding**: A stretch goal if everything goes perfectly Notes: • Development and testing are typically focused on achieving the Target level • Values not meeting at least the Minimum level mean the NFR has not been correctly implemented (verification has failed) • At least one of these keywords should be specified for a NFR • Collectively, these keywords can be referred to as a Landing Zone. Landing Zones A Landing Zone defines a “region of success” for a Non-Functional requirement. For example, consider a response-time requirement with a Minimum of 10 seconds and a Target of 7 seconds: any time between 7 seconds and 10 seconds meets the requirement. Any time greater than 10 seconds means the requirement has not been met. 
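Because a Landing Zone check is purely mechanical, it is easy to automate. Below is a minimal Python sketch (the `classify` helper is hypothetical, and the Outstanding value of 5 seconds is an assumed illustration, since the response-time example only gives the 7-second and 10-second levels):

```python
def classify(value, minimum, target, outstanding):
    """Classify a measured value against a Landing Zone.

    Assumes a "lower is better" Scale such as response time in seconds;
    for "higher is better" Scales (e.g., transactions per minute) the
    comparisons would be reversed.
    """
    if value <= outstanding:
        return "Outstanding"
    if value <= target:
        return "Target"
    if value <= minimum:
        return "Minimum"
    return "Failed"  # the NFR has not been met

# The response-time example from the text: Minimum = 10 s, Target = 7 s.
# (Outstanding = 5 s is an assumed value for illustration.)
print(classify(8.5, minimum=10, target=7, outstanding=5))   # prints "Minimum"
print(classify(12.0, minimum=10, target=7, outstanding=5))  # prints "Failed"
```

The same three thresholds then drive both development (aim for Target) and verification (fail anything beyond Minimum).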
Landing Zones focus attention on what will create success ### Example Landing Zones <table> <thead> <tr> <th>Requirement</th> <th>Minimum</th> <th>Target</th> <th>Outstanding</th> </tr> </thead> <tbody> <tr> <td>Release Date</td> <td>1 Sep 11</td> <td>15 Aug 11</td> <td>13 Jul 11</td> </tr> <tr> <td>Install time</td> <td>5 seconds</td> <td>4 seconds</td> <td>3 seconds</td> </tr> <tr> <td>Peak Project Headcount</td> <td>40 SW developers</td> <td>35 SW developers</td> <td>25 SW developers</td> </tr> <tr> <td># of transactions per minute</td> <td>375</td> <td>450</td> <td>500</td> </tr> <tr> <td>Design Wins</td> <td>20+</td> <td>30+</td> <td>40+</td> </tr> <tr> <td>Total First Year Volume</td> <td>95K</td> <td>110K</td> <td>125K</td> </tr> </tbody> </table> Using Planguage to Rewrite the NFR Examples to be Verifiable Example 1 Order processing must be fast Tag: Order Processing Time Ambition: Don’t make the users wait too long for order processing Scale: Time Meter: Measured from the user clicking on the “Submit Order” icon to the display of the “Order Complete” message on the order entry menu. Minimum: 5 seconds Target: 4 seconds Outstanding: 3 seconds Exercise: Rewrite Example 2 The software must support at least 25 users Tag: Ambition: Scale: Meter: Minimum: Target: Outstanding: Hint: 25 users at a time or one at a time? Exercise: Rewrite Example 3 Make the web site software reliable Tag: Ambition: Scale: Meter: Minimum: Target: Outstanding: Hint: How will we measure this reliability? What is our scale? Exercise: Rewrite Example 4 The configuration software should be intuitive to use Tag: Ambition: Scale: Meter: Minimum: Target: Outstanding: Hint: Think of an example of configuration SW Exercise: Rewrite Example 5 The audio software must reproduce music nearly perfectly Tag: Ambition: Scale: Meter: Minimum: Target: Outstanding: Hint: What type of Scale? Natural, Constructed or Proxy? 
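Before rewriting a natural-language NFR in Planguage, the weak-word and unbounded-list issues described earlier can be flagged automatically as a first pass. A minimal Python sketch (the `review` helper is hypothetical and its word lists are partial, drawn from the tutorial's own examples; it flags suspicious wording but cannot judge verifiability):

```python
import re

# Partial lists drawn from the tutorial's examples; extend as needed.
WEAK_WORDS = {"quick", "quickly", "easy", "easily", "timely", "fast",
              "frequently", "intuitive", "normal", "reliable",
              "effortless", "friendly", "user-friendly", "secure",
              "immediate"}
UNBOUNDED_PHRASES = ["at least", "including, but not limited to",
                     "or later", "such as"]

def review(requirement):
    """Return a list of potential issues in a requirement's wording."""
    text = requirement.lower()
    # Whole-word matches only, so e.g. "breakfast" does not flag "fast".
    issues = [f"weak word: '{w}'" for w in sorted(WEAK_WORDS)
              if re.search(rf"\b{re.escape(w)}\b", text)]
    issues += [f"unbounded list: '{p}'" for p in UNBOUNDED_PHRASES
               if p in text]
    if "should" in text.split():
        issues.append("optionality: 'should' implies the requirement is optional")
    return issues

print(review("Order processing must be fast"))            # prints ["weak word: 'fast'"]
print(review("The software must support at least 25 users"))  # prints ["unbounded list: 'at least'"]
```

A lint like this only identifies candidates for rewriting; deciding on Scales, Meters, and Landing Zone levels still requires the Planguage work shown above.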
Wrap up Session Summary In this session we have: • Provided an overview of functional and non-functional requirements • Defined natural language and identified its issues (weak words, unbounded lists and ambiguity) • Introduced Planguage, a technique for writing effective non-functional requirements • Examined critical Planguage keywords in detail • Rewritten natural language non-functional requirements so that they are verifiable Final Thoughts • Effective NFRs are verifiable *You must be able to verify a NFR to know it’s been implemented correctly* • Removing weak words, unbounded lists and ambiguity is key to making NFRs verifiable *Specify NFRs using objective, bounded terms* • Planguage provides the framework to make NFRs verifiable *Use the critical Planguage keywords to assist in developing the proper test for a NFR* *Writing effective NFRs is crucial for determining whether product performance and quality goals have been met* Contact Information Thank You! For more information, please contact: John Terzakis john.terzakis@intel.com Back up Possible Solution: Example 2 The software must support at least 25 users **Tag**: Number of Concurrent Users **Ambition**: Handle as many concurrent users as possible **Scale**: Number of concurrent users **Meter**: Concurrent users logged in, authenticated and registering for the same conference using the Beta software while maintaining a response time of 1 sec or less for any single user **Minimum**: 25 **Target**: 50 **Outstanding**: 70 Possible Solution: Example 3 Make the web site software reliable **Tag:** Web Site Software Reliability **Ambition:** Make the web site software as reliable as possible **Scale:** Number of “show stopper” defects **Meter:** Measurement of all classes of defects reported by customers during Alpha testing **Minimum:** 5 **Target:** 3 **Outstanding:** 0 Possible Solution: Example 4 The configuration software should be intuitive to use **Tag**: Configuration SW Usability **Ambition**: Make 
the configuration software easy to use **Scale**: Average time required for a Novice to configure the wireless router for WPA using only the online help system for assistance **Meter**: Measurements obtained on 100 Novices during user interface testing. **Minimum**: Less than 30 seconds **Target**: Less than 25 seconds **Outstanding**: Less than 20 seconds **Defined**: Novice: A user with no prior experience with the software Possible Solution: Example 5 The audio software must reproduce music nearly perfectly Tag: Perceived Audio Quality Ambition: Produce nearly perfect music reproduction Scale: Score on a five-point scale: 5=imperceptible; 4=perceptible, but not annoying; 3=slightly annoying; 2=annoying; 1=very annoying Meter: The “double-blind triple-stimulus with hidden reference” method as found in Recommendation ITU-R BS.1116-1, “Methods For The Subjective Assessment Of Small Impairments In Audio Systems Including Multi-Channel Sound Systems”. Minimum: 4.0 Target: 4.5 Outstanding: 4.8
AQUAPLANE: The Argument Quality Explainer App Sebastian Britner (sebastianbritner@gmail.com), Lorik Dumani (dumani@uni-trier.de), Ralf Schenkel (schenkel@uni-trier.de), Trier University, Trier, Germany ABSTRACT In computational argumentation, so-called quality dimensions such as coherence or rhetoric are often used for ranking arguments. However, the literature often only predicts which argument is more persuasive, but not why this is the case. In this paper, we introduce AQUAPLANE, a transparent and easy-to-extend application that not only decides for a pair of arguments which one is more convincing with respect to a statement, but also provides an explanation. CCS CONCEPTS • Information systems ➔ Information systems applications; Web applications; Information retrieval query processing; Retrieval models and ranking. KEYWORDS argumentation, argument quality, explanations 1 INTRODUCTION Argumentation is an essential part of human communication when there are divergent opinions or conflicts of interest [9]. People argue, among others, in social media, newspaper articles, and political speeches. The goal of argumentation is to persuade an audience, reach agreements, resolve disputes, portray justifications, and find decisions [36]. In the field of computational argumentation (CA), an argument is defined as a claim that is supported or opposed by at least one premise [45]. While the claim portrays a controversial standpoint for which the speaker wants an audience to either increase or decrease its acceptance, a premise serves as evidence or clue to do so. The polarity from a premise to the claim, i.e., whether it is supporting or rejecting, is defined as stance. 
An example of a (controversial) claim is "tv is better than books", a supporting premise to this claim is "Books and newspapers can’t give you emergency warnings", an opposing premise is "watching tv has a negative effect on mental health". The research area of CA includes tasks such as extracting arguments from natural language texts (argument mining) [22], classifying arguments into their viewpoints (stance prediction) [34], retrieving and ranking arguments to a query (argument retrieval) [42], or generating new arguments (argument generation) [1]. In this paper, we address a subtask of argument retrieval by considering the quality of arguments for ranking [6], as it has a significant impact on whether an argument can achieve its goals [41]. However, the literature often only predicts which argument is more persuasive or of higher quality, but not why this is the case. Explaining such choices is essential because the effect of an argument on a person may differ due to distinct values and their weighting [2]. For instance, a person might regard an argument as good if convinced by the truth of its premise, while another person might be convinced by persuasive language. These different qualities are called argument quality dimensions [41]. This subjectivity in perceiving the effect of arguments implies the necessity to additionally show explanations for assigning a higher quality to an argument. A positive side effect is that it establishes more trust in decisions made by an automated system. In this paper we address explainable argument quality. More precisely, given a pair of arguments with the same stance regarding a controversial claim, our goal is not only to decide which argument is more convincing overall and in several argument quality dimensions, but also to automatically explain and justify this decision. We present AQUAPLANE, the Argument Quality Explainer, a transparent, modular, extensible, and interactive system. 
Given a claim and two premises with the same stance, it compares them in 15 quality dimensions and explains its decisions. Users can interactively explore the customized explanations to understand the decisions for each dimension. (Code and demo video: https://github.com/recap-utr/Aquaplane.)

2 RELATED WORK

Habernal and Gurevych [15] present the dataset UKPConvArg1, which consists of 16k pairs of arguments, each with the same stance on the same topic, collected from debate portals. They introduce a relative approach to evaluating persuasiveness by picking the more convincing argument. The methods are promising, but deriving the reasons for a decision from these models turns out to be complex. Gleize et al. [10] take evidence from Wikipedia into account when assessing persuasiveness. They propose a Siamese neural network to solve the task, which outperforms the aforementioned approach [15]. Potash et al. [31] and Gleize et al. [10], among others, find a length bias in the UKP datasets, causing methods that use text length to determine persuasiveness to produce results similar to those of deep learning models. Toledo et al. [37] provide the state-of-the-art approach for determining the more persuasive argument through binary text classification with BERT [4]. For this purpose, they conduct an annotation study on 6.3k arguments collected using Speech by Crowd, a service developed by IBM to support the collection of arguments. Among other measures, they prevent length bias by keeping differences in text length small and by limiting text length. However, this does not capture deeper reasoning and more complex argumentation. Further, it is also not robust for use in the real world, where texts are not curated but noisy. Habernal and Gurevych [14] use the natural-language justifications for the decisions, which are captured next to the labels in the dataset UKPConvArg1, to evaluate the qualitative properties of each argument pair.
Their corpus UKPConvArg2 consists of 9,111 argument pairs annotated with 17 categories targeting different aspects such as information content, subjectivity, or comprehensibility. The evaluation showed that a fine-grained analysis of the persuasiveness of arguments requires further investigation. Wachsmuth et al. [41] analyze various argument quality dimensions from the literature and divide the overall quality into logical, rhetorical, and dialectical quality, which in turn can be divided into sub-dimensions. They provide a taxonomy comprising 15 argument quality dimensions and the Dagstuhl-15512 ArgQuality corpus, a dataset of arguments annotated with respect to the different quality dimensions. Their work was a cornerstone for many further works [5, 12, 32, 48]. Based on this corpus, Wachsmuth and Werner [43] examine which linguistic features of a text can be used to evaluate the different dimensions of argument quality. They establish eight features quantified using various aspects such as spelling errors, use of personal pronouns, length of sentences and words, and types of argument units. They achieve moderate, yet significant, success in scoring arguments. However, due to the small size of the dataset, it was not possible to identify additional and more complex features. El Baff et al. [8] investigate how the style of a news article influences persuasiveness, showing that stylistic features have a greater influence on predicting persuasiveness among certain readers than content features. Persing and Ng [30] measure how unconvincing an argument is while also examining why it is unconvincing. They define five types of errors and annotate a corpus of arguments from debates with their persuasiveness and thus with the errors the author committed.
It remains an open question whether the error types are specific enough to help authors identify errors concretely and thus make arguments more persuasive.

3 ARGUMENT QUALITY DIMENSIONS

We now review the 15 logical, rhetorical, and dialectical quality dimensions for arguments from Wachsmuth et al. [41], for which we implemented methods for measurement. The logical quality considers whether the reasons given for an argument are reasonable and comprehensible. The rhetorical perspective evaluates how effectively an argument is presented, and the dialectical perspective whether objections are adequately refuted by the argument. They distinguish between higher-level dimensions and sub-dimensions and provide definitions for them:

_Cogency (Co)_ (refers to the _logical_ quality): The premises of an argument are acceptable, relevant to the conclusion, and sufficient to draw it.
• _Local Acceptability (LA)_: The premise of an argument is rationally worth believing to be true.
• _Local Relevance (LR)_: The premise of an argument contributes to the acceptance or rejection of the conclusion of the argument.
• _Local Sufficiency (LS)_: The premises of an argument are sufficient to draw the conclusion.

_Effectiveness (Ef)_ (refers to the _rhetorical_ quality): The argument convinces the target audience of the author’s stance on a particular issue.
• _Credibility (Cr)_: The argument is conveyed in a way that makes the author seem credible.
• _Emotional Appeal (Em)_: The emotions generated by the argument make the target audience more open to the author’s arguments.
• _Clarity (Cl)_: The argument uses correct and clear language, avoids unnecessary complexity, and does not stray from the topic.
• _Appropriateness (Ap)_: The language used in the argument supports the emergence of credibility and emotion and is appropriate to the topic.
• _Arrangement (Ar)_: The topic, arguments, and conclusion are placed in the argument in a proper order.
_Reasonableness (Re)_ (refers to the _dialectical_ quality): The argument makes a sufficient contribution to the solution of the problem and is accepted by the target audience.
• _Global Acceptability (GA)_: The target audience accepts both the consideration of the arguments given and the way they are portrayed.
• _Global Relevance (GR)_: The argument cites information and arguments that lead to a final conclusion and thereby contribute to problem solving.
• _Global Sufficiency (GS)_: The argument adequately refutes expected counterarguments.

The _Overall Quality_ is the general assessment of quality. In this paper, it is considered as a function of the other dimensions.

4 MAPPING METHODS TO QUALITIES

We now present the methods we use to (i) measure and determine the argument quality dimensions from Section 3 and (ii) explain the decision on which argument is better. Our mapping is based on theoretical assumptions, which we justify below. The mapping of methods to dimensions, as well as adding or removing methods, is kept flexible in the code.

**Implemented Methods.**

_Profanity:_ Profanity refers to the use of unacceptable, insulting, or offensive language in the form of cursing [24]. We employ the blacklist by Parker [26] to detect it, setting the profanity of an argument _arg_ to (number of profane words in _arg_) / (number of words in _arg_). Profanity has a negative impact on _Cl_ and _Ap_, since this inappropriate language makes the author look unprofessional, immature, and thus untrustworthy.

_Fact-Checking:_ To prevent the negative effects of misinformation, it is necessary to check the correctness and reliability of information with fact-checking. Automated fact-checking systems [17] often divide the task into three stages [13]: identifying claims to be verified (check-worthiness), collecting relevant information, and assigning truthfulness.
We use the ClaimBuster API [19] to determine check-worthiness and check truthfulness through the Google FactCheck Claim Search API [11]. We then determine the similarity of the yielded claims to the claim under consideration using SBERT and cosine similarity, and only proceed with the most similar one. We trained a RoBERTa model [23] on the MNLI [47] dataset to detect the stance and invert the ratings if necessary. We map these cosine values to LA and GA because false claims in an argument lead to less acceptance.

_Spell Check:_ Spell checking is necessary to guarantee correct language usage. We follow a rule-based approach [25] to detect spelling errors. For an argument, the number of misspelled and unknown words is set in relation to the argument's length in words. Spelling errors have an indirect influence on Cr, as many spelling errors can make an author look unprofessional, and on Cl, as arguments with fewer spelling errors are more readable and lead to fewer comprehension problems.

_Stylometry:_ Stylometry refers to the analysis of linguistic features of a natural-language text to capture and characterize an author’s writing style [20]. We use a subset of the stylometric features implemented in StyleExplorer [38] and map them to Cl, because a complex sentence structure and vocabulary can lead to an argument not being understood or even being misunderstood.

_Search Engine for SimpleWiki:_ Wikipedia serves as a modern online lexicon for general knowledge. We indexed the SimpleWiki [46] dump (417,965 entries) with Apache Lucene [35] (version 9.4.1), applying BM25F [28]. We use this index to retrieve the SimpleWiki article most relevant to an argument and claim, together with its BM25F score. Since SimpleWiki [46] provides general knowledge and the query uses claim and argument, we assume that a higher BM25F score means that the argument contains information that is more generally relevant. Thus, the method influences LR and GR.
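The retrieval over SimpleWiki is done with Lucene's BM25F; as an illustration of the underlying ranking idea, the following is a minimal pure-Python plain BM25 scorer (single-field, not BM25F) over a toy corpus. The corpus, query, and parameter values are invented for this sketch:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.2, b=0.75):
    """Score each document against the query with plain (single-field) BM25."""
    n = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / n
    df = Counter()                      # document frequency per term
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf, dl, s = Counter(doc), len(doc), 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    return scores

corpus = [
    "books improve vocabulary and concentration".split(),
    "television broadcasts emergency warnings quickly".split(),
]
# Query built from claim + argument, mirroring the paper's setup.
query = "tv is better than books because of emergency warnings".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)
```

The second document shares two query terms ("emergency", "warnings") versus one for the first, so it receives the higher score.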
_Search Engine for debate.org:_ In debate forums, people argue on controversial topics in order to convince opponents of a particular standpoint. We use the DDO dataset by Durmus and Cardie [7], consisting of 51,594 debates, to estimate the relevance of arguments, creating a search engine similar to the one for SimpleWiki. Each entry in DDO includes the claim as well as pro and con arguments. We infer the relevance of an argument, which we use as the query, from the highest BM25F score. Since a search query consists of a claim and an argument, we conclude from a high BM25F score that an argument is generally more relevant to solving problems. Therefore, this method is used to determine GR.

_URL Sources:_ Arguments may include sources placed to support claims, often in the form of URLs. To detect URLs within arguments, we use a regular expression by Perini [29]. By adding sources, both the argument and the author may appear more credible. Thus, the number of sources employed serves to gauge Cr.

_Excessive Punctuation:_ We define excessive punctuation as a sequence of three or more punctuation marks. We assume that it has a negative impact on argument quality: an author repeatedly using exclamation points or question marks may appear angrier or guided by emotion, and anger makes a person appear less competent and is judged less appropriate [39]. Therefore, we assume that excessive punctuation has a negative influence on Cr, Em, and Ap.

_All-Caps Words:_ All-caps words are words composed of capital letters only, mostly used for emphasis or to express emotions. The use of all-caps words can be construed as shouting in the context of social media [18], but can also indicate emotional states such as anger, excitement, or joy. We hypothesize that all-caps words have a negative impact on the dimension Em and that shouting has a negative influence on Cr and Ap.
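Several of the surface cues above (the profanity ratio, excessive punctuation, all-caps words) reduce to simple lexical statistics. A minimal sketch, using a two-word stand-in blacklist rather than Parker's actual list [26]:

```python
import re

PROFANE = {"damn", "crap"}  # stand-in blacklist; the paper uses Parker's list

def profanity_ratio(text):
    """Fraction of profane words among all words, per the paper's definition."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return sum(w in PROFANE for w in words) / max(len(words), 1)

def excessive_punctuation(text):
    """Count runs of three or more punctuation marks (the paper's definition)."""
    return len(re.findall(r"[!?.,;:]{3,}", text))

def all_caps_words(text):
    """Count words (2+ letters) written entirely in capitals."""
    words = re.findall(r"\b[A-Za-z]{2,}\b", text)
    return sum(w.isupper() for w in words)

arg = "This is CRAZY!!! Reading damn books is SO overrated."
```

For the sample sentence, one profane word out of nine, one punctuation run, and two all-caps words are detected.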
_Dramatic Language:_ We define dramatic language as descriptive and figurative language with metaphors, exaggerations, and other rhetorical stylistic devices. We adopt a simple list-based approach to recognize it, using the adverb lists provided by Rashkin et al. [33]. We determine how dramatic the language is as the fraction of these adverbs among all words of an argument. We suppose that dramatic language has a positive influence on the dimension Em.

_Ad-Hominem Arguments:_ Ad-hominem arguments attack individuals based on their characteristics or circumstances rather than addressing a counterargument [44]. They represent a fallacy within an argument. Based on the strong results obtained by Patel et al. [27], we fine-tune ALBERT [21] on the dataset of Habernal et al. [16] for the binary detection of ad-hominem arguments. We assign this method to the dimension GA, since Wachsmuth et al. [40] show that the justification that an argument is attacking or offensive correlates strongly with GA. This goes along with the view that personal attacks are generally unacceptable.

**Determining the more convincing argument.** Given a pair of arguments, we apply all methods to obtain scores for both. For each method, we declare the argument with the higher score to be the comparatively better one. There are three outcomes for each dimension and pair: 0 (no decision possible), 1 (decision for argument 1), and 2 (decision for argument 2). Some methods return binary scores (e.g., ad hominem), while others return multiple scores (e.g., stylometry). The final decision then results from the Boyer–Moore majority vote algorithm [3]. To obtain the decisions for subdimensions and dimensions, majority voting is applied to the mapped methods along the hierarchical structure. For example, the decision for the subdimension Em results from the decisions of excessive punctuation and dramatic language. This makes the decision-making process transparent.
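The aggregation step can be sketched with the Boyer–Moore majority vote, dropping ambiguous decisions (0) first as the paper describes; the verification pass and the example votes are illustrative additions:

```python
def majority_vote(votes):
    """Boyer-Moore majority vote over per-method decisions.

    votes: iterable of 0 (no decision), 1 (argument 1), 2 (argument 2).
    Ambiguous decisions (0) are dropped before voting. A verification pass
    confirms the candidate is a true majority; otherwise 0 is returned.
    """
    votes = [v for v in votes if v != 0]
    candidate, count = 0, 0
    for v in votes:
        if count == 0:
            candidate, count = v, 1
        elif v == candidate:
            count += 1
        else:
            count -= 1
    if candidate and votes.count(candidate) * 2 > len(votes):
        return candidate
    return 0

# e.g. subdimension Em from excessive punctuation and dramatic language:
em_decision = majority_vote([1, 1])       # both methods favour argument 1
overall = majority_vote([1, 2, 1, 0, 1])  # the 0 is dropped, argument 1 wins
```

A tied vote such as `[1, 2]` yields 0, i.e. no decision, which matches the three-outcome scheme above.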
Since only a few methods have been mapped so far and no methods could be assigned to LS, Ar, and GS, ambiguous decisions (0) occur frequently. To prevent them from having a strong impact, they were removed from the majority decisions. In future work, we expect AQUAPLANE to be augmented with more methods.

**Generation of explanations.** We generate static explanations corresponding to predefined templates for all decisions on all 15 dimensions. While the decision-making process is bottom-up, i.e., from the decisions of the methods to the overall decision, the explanations are presented top-down. The explanations are presented in stages, which is intended to achieve the interactive aspect of explanations. The overall decision on Overall Quality is summarized in a short statement, e.g., “Argument 1 is more convincing than argument 2”. This overall decision can be explained by the decisions on the next lower dimensions, e.g., “Argument 1 is more (cogent or effective or reasonable) than argument 2, because of its [dimension]”. Likewise, their subdimensions are explained by “Argument 1 has a higher [subdimension] than argument 2”. Lastly, for each method, the computed values and the returned information are presented via customized explanations (which include pieces of the arguments inserted into the tool) to provide more understanding. For example, for the dimension Cr and the method URL sources, an explanation could be “Argument 1 generally gives more sources. The following sources were provided: http://example.org, http://anotherexample.org”.

5 EVALUATION

We now examine how well the qualities presented in Section 3 can be determined by the methods detailed in Section 4.

Table 1: Results for the evaluation dataset
<table>
<thead>
<tr> <th>Quality dimension</th> <th>Acc. BL</th> <th>Macro-F1 BL</th> </tr>
</thead>
<tbody>
<tr> <td>Co Cogency</td> <td>.37</td> <td>.45</td> </tr>
<tr> <td>LA Local Acceptability</td> <td>.42</td> <td>.42</td> </tr>
<tr> <td>LR Local Relevance</td> <td>.42</td> <td>.42</td> </tr>
<tr> <td>LS Local Sufficiency</td> <td>.53</td> <td>.29</td> </tr>
<tr> <td>Ef Effectiveness</td> <td>.25</td> <td>.35</td> </tr>
<tr> <td>Cr Credibility</td> <td>.36</td> <td>.36</td> </tr>
<tr> <td>Em Emotional Appeal</td> <td>.33</td> <td>.65</td> </tr>
<tr> <td>Cl Clarity</td> <td>.33</td> <td>.45</td> </tr>
<tr> <td>Ap Appropriateness</td> <td>.38</td> <td>.43</td> </tr>
<tr> <td>Ar Arrangement</td> <td>.44</td> <td>.44</td> </tr>
<tr> <td>Re Reasonableness</td> <td>.42</td> <td>.42</td> </tr>
<tr> <td>GA Global Acceptability</td> <td>.42</td> <td>.38</td> </tr>
<tr> <td>GR Global Relevance</td> <td>.46</td> <td>.36</td> </tr>
<tr> <td>GS Global Sufficiency</td> <td>.68</td> <td>.68</td> </tr>
<tr> <td>Ov Overall Quality</td> <td>.40</td> <td>.44</td> </tr>
<tr> <td>Mc More Convincing</td> <td>.44</td> <td>.45</td> </tr>
</tbody>
</table>

**Dataset.** We derive an evaluation dataset from the datasets UKPConvArg1 [15] and Dagstuhl-15512 ArgQuality [41]. UKPConvArg1 contains argument pairs from debate portals with the same viewpoint on 16 topics. We use the version UKPConvArg1Strict, in which argument pairs with equal persuasiveness were removed. The Dagstuhl-15512 ArgQuality corpus contains assessments of 320 arguments from the UKPConvArg1 corpus on the 15 argument quality dimensions. For the assessments, three experts assigned a value to each argument for each dimension on a scale from 1 (Low) to 3 (High). We take the median of these three ratings for each argument and each dimension. Further, we only use argument pairs from UKPConvArg1Strict if both arguments are in the Dagstuhl-15512 ArgQuality corpus, which holds for 985 pairs.
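The label derivation described above (median of three expert ratings per dimension, then a pairwise comparison) can be sketched as follows; the function name and the example ratings are invented for illustration:

```python
from statistics import median

def decision_label(ratings_a, ratings_b):
    """Derive a per-dimension gold decision for an argument pair.

    ratings_a / ratings_b: the three expert scores (1=Low .. 3=High) that
    Dagstuhl-15512 ArgQuality provides for one dimension of each argument.
    Returns 1 if argument A scores higher, 2 if B does, 0 on a tie.
    """
    a, b = median(ratings_a), median(ratings_b)
    if a > b:
        return 1
    if b > a:
        return 2
    return 0

label = decision_label([3, 2, 3], [2, 2, 1])  # medians 3 vs 2: argument A wins
```

Pairs whose medians tie on a dimension yield 0, the same "no decision" outcome used in the method-level voting.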
For these, we compare the scores on each dimension and derive a decision value that numerically identifies the argument with the higher score. For all 985 instances, we then determine the more convincing argument with AQUAPLANE and compare the results with the labels of the evaluation dataset. We create a baseline for each dimension by always taking the decision that occurs most frequently in that dimension.

**Results.** Table 1 shows the calculated accuracies and macro F1-scores for the decisions on each dimension together with the baselines (BL). Only for a few dimensions are the accuracy values above the baselines. Even though the accuracy values and F1-scores are quite low, a good tendency can be seen for some dimensions such as GR, which indicates that the assigned methods have a positive influence. In general, however, the accuracy values and F1-scores are not satisfactory. In a manual investigation, we found that some of the methods are not mature and can generate errors.

**Determination of the Overall Quality.** We evaluated the extent to which the more convincing argument can be determined by a majority decision over the dimensions. Specifically, for each of the dimensions Co, Ef, and Re, we tested whether its decision value follows from the majority decision of the respective subdimensions. Further, we tested whether the decision on the Overall Quality, or the More-Convincing label taken from the UKPConvArg1 dataset, follows from the majority decision of the Co, Ef, and Re dimensions. The dimensions Co and Re in many cases infer their decision by a majority vote from their assigned subdimensions. Thus, these dimensions are shown to be well, but not fully, represented by their subdimensions. In contrast, Ef seems to be much more difficult to determine by a majority decision of its subdimensions. This could indicate that the subdimensions encompass more than is captured by the Ef dimension.
This may also partly explain the very low accuracy and F1-score for Ef in Table 1. The Overall Quality can often be derived from the Co, Ef, and Re dimensions by majority vote, with a frequency of 0.7827. This is also consistent with the correlation that Wachsmuth et al. [41] measured. When, as a test, all dimensions are added to the decision on Overall Quality, the frequency drops to 0.603, which shows that deriving Overall Quality from the three previously mentioned dimensions is a good choice. Figure 1 illustrates this.

6 APPLICATION

The application provides transparency for its decisions by presenting its explanations. In addition, researchers as well as interested users can interactively navigate through the generated explanations to gain an understanding of each decision. A user can enter two arguments together with the claim they refer to. Alternatively, a CSV file can be uploaded to enable calculations for multiple inputs. At the click of a button, each argument-quality method is run on both arguments, the qualities are compared, and explanations of the decisions are presented. Users can then interactively navigate through the explanations to gain a deeper understanding of a decision if needed. Interaction happens by clicking on specific terms (highlighted in color) within the explanation text, e.g., clicking on the term “clarity” in the text “Argument 1 has a higher Cl than argument 2”, which opens a detailed view explaining why the argument has a higher Cl. The results can be downloaded as a JSON file along with all the information used in the argument quality comparison. Figure 2 shows the application.
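The staged, template-based explanations described in Section 4 can be sketched as below; the dimension-name table and the exact wording are approximations, not the tool's literal strings:

```python
# Stand-in mapping of dimension codes to readable names (a subset, invented here).
DIMENSION_NAMES = {"Cl": "clarity", "Cr": "credibility", "Em": "emotional appeal"}

def explain(decision, dim):
    """Fill a static explanation template for one (sub)dimension.

    decision: 0/1/2 as returned by the majority vote; dim: dimension code.
    """
    if decision == 0:
        return f"No decision was possible for {DIMENSION_NAMES[dim]}."
    other = 2 if decision == 1 else 1
    return (f"Argument {decision} has a higher {DIMENSION_NAMES[dim]} "
            f"than argument {other}.")

msg = explain(1, "Cl")
```

Customized method-level explanations would then append the computed values (e.g. the detected URLs) to such a template string.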
Context for the Intelligent Search of Information

Syopiansyah Jaya Putra (Department of Information System, State Islamic University Syarif Hidayatullah, Jakarta, Indonesia; syopian@uinjkt.ac.id) and Ismail Khalil (Institute of Telecooperation, Johannes Kepler University Linz, Austria; ismail.khail@jku.at)

Abstract- Three forms of contextual search have been proposed in the literature. The first scans the full text of a query to figure out the user's needs and, based on that scan, searches HTML pages for content and returns an index of the relevant content. In this case, the user has no control over the context of the query. The second form of contextual search is used by meta-search engines and requires the user to explicitly supply contextual information about the query to increase the precision of the returned results. The meta-search engine acts as a mediator between the user query and the search engines. This significantly increases the precision of the results but adds to the complexity of the user interface. The third form is to automatically infer the context of the query based on the content of other documents. This results in the modification of search results based on previous knowledge and situations. The work presented in this paper aims to develop a search engine based on contextual search for the translation of the Quran into the Indonesian language, in order to improve the performance of the search engine and provide the information needed by the user based on the context of the query. In this paper, we present and discuss an algorithm that makes use of the semantics of an information source, in the form of context, to support the intelligent search for information over the Web or a database.

Keywords: Context, intelligence search, contextual search, search engine

I.
INTRODUCTION

The word context is derived from the Latin for "weave together." In common usage, it refers to the parts of a written or spoken statement that precede or follow a specific word or passage, usually influencing its meaning or effect. Many words and phrases have multiple possible meanings that are clarified by the context of their use. For example, the word "bank" has several meanings: river bank, financial institution, and others. Such ambiguity is rarely a problem in common practice because context makes the meaning clear: "she is walking along the bank," "I paid the money at the bank." Context can be seen as the associations between a central item of interest and surrounding items; the existence of the surrounding items clarifies the "meaning" of the central item. Typically, in information services, context is coded as metadata describing both the existence of a relationship between two items and the nature of the relationship. Two strategies have been advocated for the design and implementation of contextual search: Global As View (GAV) and Local As View (LAV). GAV follows the traditional strategy developed for federated databases [1]. The global view is constructed by several layers of views on the relations exported by the sources. Queries are expressed in terms of the global view and are evaluated in a conventional way [2]. LAV considers that the relations exported by the sources are materialized views defined on virtual relations in the global schema. Queries are still expressed in terms of the global schema. To evaluate a query, a rewriting in terms of the component schemas needs to be found: this process is called Answering Queries Using Views (AQUV) [2].
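To make the two strategies concrete, the following is a minimal, hypothetical Python sketch (all relation names are illustrative, not from the paper) contrasting a GAV mapping, where the global relation is defined as a view over the sources, with a LAV mapping, where each source is described as a view over the global schema:

```python
# Minimal illustration of GAV vs. LAV mappings (all names are hypothetical).
# A "relation" is a set of tuples; views are Python functions over relations.

# --- Source relations (what the components actually store) ---
src_books = {("AI Basics", "Smith"), ("Logic", "Jones")}   # source 1: (title, author)
src_prices = {("AI Basics", 30), ("Logic", 25)}            # source 2: (title, price)

# GAV: the global relation catalog(title, author, price) is DEFINED
# as a query (here, a join) over the exported source relations.
def gav_catalog():
    return {(t, a, p) for (t, a) in src_books for (t2, p) in src_prices if t == t2}

# LAV: each source is DESCRIBED as a view over the virtual global
# relation catalog/3; answering a query then requires rewriting it
# in terms of the sources (Answering Queries Using Views).
lav_descriptions = {
    "src_books":  "books(T, A)  <- catalog(T, A, P)",   # projection of catalog
    "src_prices": "prices(T, P) <- catalog(T, A, P)",   # projection of catalog
}

# A query on the global schema, q(T) <- catalog(T, 'Smith', P), is answered
# in GAV by unfolding the definition; in LAV a rewriting using the view
# descriptions must be found first (here: join books and prices on T).
def answer_q():
    return {t for (t, a, p) in gav_catalog() if a == "Smith"}

print(answer_q())  # {'AI Basics'}
```

The GAV unfolding is mechanical, while in LAV the `lav_descriptions` only *describe* the sources, which is why a separate rewriting step (AQUV) is needed.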
The GAV and the LAV strategies [3] can be qualitatively or quantitatively compared in terms of their adequacy (1) to model a particular integration situation, (2) to cope with the autonomy of the sources (sources changing their exported schemas, joining or leaving the network), and their ability (3) to answer queries. The main arguments against the GAV strategy are that (1) it may not be able to model integration situations where sources are missing to build the complete world view, and (2) it may cease to offer a complete global view as some sources become unavailable or services are disrupted [4]. In favor of GAV, it can be argued that most practical applications require sufficiently simple global schemas (unions) to avoid such difficulties, and that there might be enough economic incentives for participating in the network to convince the sources' managers to play the game. The strength of GAV is that, if the modeling is successful, (3) all queries on the global schema are guaranteed to be answered, and a complete answer can be constructed. The LAV strategy, conversely [4], is designed to cope with a dynamic, possibly incomplete set of sources. The counterpart of this flexibility is that some queries may not be answered, or only an incomplete answer can be found. It can be argued for LAV that in large information infrastructures such as the WWW, complete answers are rarely expected or needed by the users: "better some answers than no answer". Additional semantic knowledge in the form of "context" can be taken into account in the GAV strategy to optimize the queries to the component sources. For instance, a powerful optimization consists in identifying sources that cannot produce results participating in the answer to a given query, or sources that would produce redundant answers. As we have discussed and illustrated in [5], there are many very concrete examples where simple context knowledge about component sources can help generate more efficient query plans.
The optimization process involved is called Semantic Query Optimization (SQO). In this paper, we argue that there is a relative duality between LAV and GAV in their use of semantic knowledge expressed in the form of context. We present and discuss a variant of the algorithm for Answering Queries Using Views (AQUV) for the LAV strategy, cast as Semantic Query Optimization (SQO). We call the process of rewriting the query using semantic knowledge Contextual Search (CS).

II. LITERATURE REVIEW

Contextual search refers to proactively capturing the information need of a user by automatically augmenting the user query with information extracted from the search context; for example, by using terms from the web page the user is currently browsing or a file the user is currently editing [6]. Implementing contextual search involves two tasks. The first is the user interface, which has been extensively discussed within the context of Y!Q [7]. The second consists of extracting and representing the context, and using the context in conjunction with the query. Search engines generally return results without any regard for the concepts in which the user is interested [10]. Contextual search differs from keyword search in that it considers the user's context when searching, whereas keyword search cares only about keyword matching without taking the user's context into account. Contextual search is based on the user's context, which includes the user's time, location, input, needs, habits and background; therefore, information about the user's interests is collected before the search, so that contextual search can better understand the user's search intention [11]. The ability to promote and re-purpose content in different contexts, along with the facility to suggest and find related things that weren't directly searched for, creates a means to surface seasonal, campaign-related or tactically important information [9]. Context is additional information about the user's interests.
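As a minimal sketch of this idea (hypothetical names; not the system described in the paper), context augmentation can be as simple as extracting salient terms from the page the user is currently reading and appending them to the keyword query before handing it to an ordinary search engine:

```python
# Hypothetical sketch: augment a keyword query with context terms
# extracted from the page the user is currently reading.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "at", "of", "to", "and", "in", "on"}

def context_terms(context_text, k=3):
    """Pick the k most frequent non-stopword terms as the 'context'."""
    words = re.findall(r"[a-z]+", context_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

def rewrite_query(query, context_text, k=3):
    """Query rewriting: append context terms and send to a standard engine."""
    extra = [t for t in context_terms(context_text, k)
             if t not in query.lower().split()]
    return query + " " + " ".join(extra)

# The ambiguous query "bank" is disambiguated by the page's context terms.
page = "The bank of the river rose. The river bank flooded the fields near the river."
print(rewrite_query("bank", page))
```

This corresponds to the simplest of the approaches surveyed below (query rewriting); the other approaches differ mainly in where the context is injected, not in how it is extracted.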
Context is represented by a complex context vector, which is a vector of vectors. It has a number of dimensions equal to the number of document attributes used for search; every dimension represents one attribute and contains a ranked vector [11]. Contextual search using ontology-based user profiles [11] consists of the following four steps. First, to represent information and knowledge in different domains, a Domain Ontology is used to annotate domain information, which is the basis of information representation in contextual search. Second, to extend the keyword input: with the help of the Context Ontology, the keywords input by the user are extended to a Search Ontology, which normalizes the user's search information and context information. Third, search results are represented using a Faceted Ontology, which is also a kind of Domain Ontology for annotating the target documents. Finally, the core of contextual search is ontology matching: the method filters, from the Faceted Ontology, the sub-ontology that is most similar to the Search Ontology. Methods for searching with context can be classified as [11, 10]: - **Query rewriting** – appends keywords from the context to the search query as a string and sends it to a standard search engine. It augments each query with appropriate terms from the search context and uses an off-the-shelf web search engine to answer this augmented query [6]. - **Iterative-Filtering meta-search** – generates many different sub-queries and sends them to multiple search engines; results are re-ranked and aggregated into one set [9, 8]. It generates multiple sub-queries based on the user query and appropriate terms from the search context, uses an off-the-shelf search engine to answer these sub-queries, and re-ranks the results of the sub-queries using rank aggregation methods [6]. - **Rank-biasing** – the query and context keywords are sent to a modified search engine as a complex query.
First, the method finds documents matching the query and then re-ranks them by fitness to the context [11] [6]; it generates a representation of the context and answers queries using a custom-built search engine that exploits this representation [6]. - **Flooding algorithm** – matches the Search Ontology with the Faceted Ontology [9]. - **Inconsistency Ontology Reasoning Framework**

A. LAV as a Search Strategy

For our purpose, the schema of a relation is solely defined by the name of the relation and its arity (number of attributes). As we have seen, LAV models the integration situation as a database (schema) \( DB = (S_{EDB}, S_{IDB}, V, C) \), where \( S_{EDB} \) is the global schema, \( S_{IDB} \) is the union of the local schemas, \( V \) is the set of view definitions for the component relations in terms of the global relations, and \( C \) is a set of contexts on the component schemas. Despite the reality of the situation, in which the global schema is a set of virtual relations and the local schema is a set of materialized relations, \( DB = (S_{EDB}, S_{IDB}, V, C) \) stands for a database in which:

- \( S_{EDB} \) is the set of schemas of the relations in the extensional database, i.e. the set of stored relations,
- \( S_{IDB} \) is the set of schemas of the relations in the intensional database, i.e. the set of relations defined by means of views (we call both the relation and its definition a view, when there is no ambiguity),
- \( V \) is the set of view definitions in Datalog (range-restricted, function-free Horn clauses),
- \( C \) is the set of contexts in Datalog (i.e. we allow variables occurring only in the head of the clause to be existentially quantified, after all other variables are universally quantified as usual).

Queries are views which are not defined a priori in the database schema. In this and the following section, we concentrate on the simplest problem of Project-Select-Join (or PSJ for short) queries and PSJ-view definitions.
Therefore we consider an initial database DB such that the view definitions are conjunctive, i.e. views defined by means of a single Horn clause. We also consider conjunctive queries only: the single Horn clause defining the query is also restricted to contain literals from the extensional database in its body. Given an instance of the extensional database \( I \), i.e. a set of tuples for the relations in \( S_{EDB} \), an instance of the database is defined by the minimal model of \( V \cup I \). An instance of the database is consistent if it is a model of \( C \); in our case, we only need to verify that \( I \) is also a model of \( C \). Since only the views are materialized, say \( J \) is the actual instance of the intensional database; an instance of the database is an instance \( I \cup J \) such that it corresponds to the minimal model of \( V \cup I \), and \( J \) is consistent with \( C \). Notice that \( I \) may not be unique. For the sake of simplicity, we will assume that the designer of the integration solution has been careful and that a minimal instance is guaranteed to exist.

B. A Dual View of the Database

In order to comply with the reality of the situation in the modeling, we would need to construct a database \( DB'' = (S_{IDB}, S_{EDB}, V'', C'') \) with the same instances as \( DB \). In [12] a polynomial-time algorithm is given which constructs a similar database \( DB''' = (S_{IDB}, S_{EDB}, V''', \emptyset) \). However, it is also shown that, in the general case, the new database is only maximally contained in the initial database \( DB \). In standard database terms, an arbitrary set of views need not be a lossless decomposition. Using transformations similar to the one used in [4], we construct a database \( DB' = (S_{EDB} \cup S_{IDB}, \emptyset, \emptyset, C') \) which has the same instances as \( DB \). To construct \( C' \) we use Clark's completion (see [13]) of the view definitions \( V \). In this way we explicitly express the Closed World Assumption.
More precisely, given a view definition for \( v \):
\[ v(\bar{X}) \leftarrow B(\bar{Y}) \]
we first make all the implicit quantifications explicit:
\[ \forall \bar{X}\; v(\bar{X}) \leftarrow \exists (\bar{Y} \setminus \bar{X})\, B(\bar{Y}) \]
the completed axiom is
\[ v(\bar{X}) \leftrightarrow \exists (\bar{Y} \setminus \bar{X})\, B(\bar{Y}) \]
which we transform into two integrity constraints:
\[ (1)\; v(\bar{X}) \rightarrow \exists (\bar{Y} \setminus \bar{X})\, B(\bar{Y}) \]
\[ (2)\; B(\bar{Y}) \rightarrow v(\bar{X}) \]
Notice that we keep a conjunction of literals in the right-hand side of the first constraint, instead of separating it into as many contexts in Horn-clause form as there are literals in \( B(\bar{Y}) \). If we call \( \bar{V} \) the set of contexts of type (2), the new database is \( DB' = (S_{EDB} \cup S_{IDB}, \emptyset, \emptyset, V \cup C \cup \bar{V}) \). We claim but do not prove that the original and transformed databases have the same instances. Let us now look at the application of the transformation on an example.

Example 1. Let us consider the following database schema:
\[ DB = (\{w/3\}, \{v/2\}, \{v(X,Y) \leftarrow w(X,Y,Z)\}, \emptyset) \]
The view \( v/2 \) is the projection of the first two attributes of the global relation \( w/3 \).
According to the transformation of the database we have described, we generate the following two contexts:
\[ i_1 : w(X,Y,Z) \rightarrow v(X,Y) \]
\[ i_2 : v(X,Y) \rightarrow \exists Z\, w(X,Y,Z) \]
We are now considering the database schema:
\[ DB' = (\{w/3, v/2\}, \emptyset, \emptyset, \{i_1, i_2\}) \]

Example 2. According to the transformation of the database we have described, we generate the following four contexts:
\[ i_1 : w(T,I,A,S,P) \rightarrow c(T,A,S) \]
\[ i_2 : c(T,A,S) \rightarrow \exists I, P\, w(T,I,A,S,P) \]
\[ i_3 : w(T,I,A,S,P) \rightarrow s(I,T,P) \]
\[ i_4 : s(I,T,P) \rightarrow \exists A, S\, w(T,I,A,S,P) \]
We are now considering the database schema:
\[ DB' = (\{w/5, c/3, s/3\}, \emptyset, \emptyset, \{i_1, i_2, i_3, i_4, i_5\}) \]
where \( i_5 : s(I,T,P_1), s(I,T,P_2) \rightarrow P_1 = P_2 \).

C. An Algorithm for Contextual Search

Since the two databases have the same instances, we can now define AQUV as a specific SQO process: we want to transform the query into a semantically equivalent query which uses literals from the initial intensional database or built-in literals only. Indeed, the algorithm we propose generates complete foldings; although it could easily be modified to produce partial foldings, we have restricted its output to complete foldings. The algorithm is similar to the one in [14]. However, we highlight the relationship between this transformation and semantic query optimization as described in [12] and [13]. Given a query of the form \( q(\bar{X}) \leftarrow l_1, ..., l_n \), for each constraint whose body subsumes a subset of the body of the query, we call the head of the constraint after matching a residue.
For each residue it is possible:
- either to add it to the query (query expansion or join-introduction),
- or to eliminate a literal \( l_i \) in the body of the query, provided that certain conditions are fulfilled (query contraction or join-elimination).

According to [15], join elimination is possible if the residue \( r \) can be resolved (unified) with one of the \( l_i \) (\( \exists \sigma,\; \sigma(r) = \sigma(l_i) \)), and the resulting resolvent, after elimination, contains all the original terms in all \( l_j, j \neq i \). This definition trivially extends to the case where the residue is of the form \( r_1, ..., r_n \leftarrow \) and there is a set of literals \( l_i \) such that a subset of the literals in the residue can be resolved (unified) with the set of literals \( l_i \) in the body of the clause. The same conditions as above apply to the resulting query. The correctness of join-introduction and join-elimination is proved in [13]: the semantic query transformation leads to semantically equivalent queries. The principle of the algorithm is to use the rules of type (2) (subsection 3.1) for expansion and the rules of type (1) for contraction.

D. Algorithm and Implementation

After the generation of the sets of integrity constraints used for join-introduction (I-Rules) and join-elimination (E-Rules), respectively, the algorithm can be summarized as in Figure 1. The implementation of this algorithm in Prolog can be done in two different ways. Views, queries, and constraints can be represented by ground expressions, in which case some special notation is used to represent the variables in the query, and unification, subsumption, and variance checks need to be written from scratch. Alternatively, the variables, terms, and constants in the views, queries, and constraints can be represented by variables, terms, and constants of Prolog.
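Restricted to ground (variable-free) atoms, where the substitution machinery σ becomes trivial, the expansion/contraction scheme just described can be sketched as follows. This is a simplified, hypothetical illustration in Python, not the Prolog implementation discussed here:

```python
# Simplified, hypothetical sketch of semantic query rewriting on GROUND atoms:
# expansion adds the heads of I-rules (type-(2) contexts, body -> view atom)
# whose bodies are contained in the query body; contraction then removes the
# base literals that each introduced view atom accounts for (type-(1) side).
# Atoms are plain strings; rules are (body-literal-set, head-atom) pairs.

def rewrite(query_body, i_rules, e_rules):
    body = set(query_body)
    # Join-introduction: for each rule whose body is subsumed by the
    # query body, add the head (a view atom) to the query.
    introduced = {head for body_lits, head in i_rules if body_lits <= body}
    body |= introduced
    # Join-elimination: for each view atom now present, drop the base
    # literals it re-derives, folding the query onto the views.
    for head, replaced in e_rules.items():
        if head in body:
            body -= replaced
    return body

# Example 1 of the paper, grounded: v(X,Y) <- w(X,Y,Z) with X=a, Y=b, Z=c.
i_rules = [(frozenset({"w(a,b,c)"}), "v(a,b)")]   # w(a,b,c) -> v(a,b)
e_rules = {"v(a,b)": {"w(a,b,c)"}}                # v(a,b) lets us drop w(a,b,c)

print(rewrite({"w(a,b,c)", "q(a)"}, i_rules, e_rules))
# the base literal w(a,b,c) is folded into the view literal v(a,b)
```

The real algorithm performs this over non-ground clauses, which is exactly where Prolog's unification and subsumption machinery (discussed next) earns its keep.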
In this case, Prolog unification as well as the built-in or library predicates for (term) subsumption and variance can be used. Term decomposition using =../2 or term_variables/2 cannot be avoided (e.g. to find the distinguished variables). The latter implementation is not necessarily simpler, as it hides some important aspects of the implementation and runs the risk that the programmer inappropriately unifies terms. The rules and procedures for join-introduction and join-elimination resemble Constraint Handling Rules. It is indeed possible to use an approach similar to the one we advocated in [12] in order to implement the above-described algorithm.

```
Input:
  Q: query
  I-Rule: set of contexts
  E-Rule: set of contexts
Output:
  Q: transformed query

for each R ∈ I-Rule
  for each σ such that σ(Body(R)) ⊆ Body(Q)
    I := I ∪ Head(σ(R))
  end for
end for
for each L ∈ I with predicate P
  for each R ∈ E-Rule(P)
    if ∃σ such that σ(Body(R)) = L then
      remove any literal matching Head(σ(R)) from Body(Q)
    end if
  end for
end for
```

Figure 1: CS algorithm

VI. CONCLUSION AND FUTURE WORK

In this paper, we presented and discussed an algorithm that makes use of the semantics of an information source, in the form of context, to support the intelligent search for information over the Web or a database. The final goal of the project from which this paper stemmed is to develop a search engine based on contextual search for the translation of the Quran in the Indonesian language, to improve the performance of the search engine and provide the information needed by the user based on the context of the query. One approach to conducting a contextual search for the Indonesian translation of the Quran is to use ontologies to map relationships between documents, so that a query against a specific keyword can be matched to the overall structure of an existing document. The methodology of a search engine with a contextual design based on rank-biasing is as follows: preprocessing, then querying with context.

REFERENCES
Tails - Feature #15287

Make it possible to reproducibly generate IUKs in CI

02/05/2018 04:13 PM - anonym

Status: Resolved
Priority: High
Assignee:
Category: Continuous Integration
Target version: Tails_4.2
Feature Branch: feature/15281-single-squashfs-diff
Type of work: Code

Description

Since we'll generate a lot more IUKs each release, uploading them will be painful for RMs with slow internet connections. For example, the following VM would do:

- 4 virtual CPUs (so the IUK generation won't take too long)
- 1 GB of RAM (I used to generate IUKs in such a VM two years ago but YMMV)
- 10 GB /tmp for tails-iuk's needs, and where we store the generated IUKs (before uploading them to rsync.lizard)
- Access to Jenkins build artifacts

Related issues:

- Related to Tails - Feature #17262: Make the build of overlayfs-based IUKs rep... (Resolved)
- Related to Tails - Bug #17361: Streamline our release process (Confirmed)
- Related to Tails - Feature #15281: Stack one single SquashFS diff when upgrading (Resolved, 04/13/2016)
- Related to Tails - Feature #17385: Grow /home on rsync.lizard (Resolved)
- Related to Tails - Feature #17412: Drop the need for dedicated temporary stor... (Resolved)
- Blocks Tails - Feature #16052: Document post-release reproducibility verifica... (Confirmed, 10/15/2018)
- Blocks Tails - Feature #16209: Core work: Foundations Team

Associated revisions

Revision 150f1638 - 12/26/2019 10:34 AM - intrigeri

IUK creation: delete temporary directory on success (refs: #15287)

tails-create-iuk previously left temporary files behind. In particular, even on success, the "squashfs_src" directory would remain on the filesystem.
It's no big deal in general, but our build_IUKs Jenkins job runs tails-create-iuk as root with sudo, so the leftover temporary files are owned by root, and then the "workspace-cleanup" step (itself run as the "jenkins" user) can't delete them, which causes 2 problems:

- These temporary files would endlessly accumulate on isobuilders, without any mechanism to clean them up automatically.
- Confusing output in the Jenkins console:

```
cannot remove path when cwd is /var/lib/jenkins/workspace/build_IUKs/j4gMvgS3WH for /var/lib/jenkins/workspace/build_IUKs/j4gMvgS3WH: at /usr/share/perl/5.24/File/Temp.pm line 1583.
Archiving artifacts
[WS-CLEANUP] Deleting project workspace...
Cannot delete workspace: null
Option not to fail the build is turned on, so let's continue
```

This problem has not happened for a few years, mostly because I'm now aware of it and thus careful. Moreover, we now have a build_IUKs Jenkins job that takes a TAILS_GIT_COMMIT parameter, and can thus be used to test new IUK generation code before it's merged and affects an RM, on real-world scenarios. I'm using this regularly when working on the IUK codebase. As it happens, this job runs on Jenkins' isobuilders, which currently run Stretch. So all things considered, I'm now confident that this note does more harm (scaring the RM) than good (providing useful and actionable information). Our isobuilders should be fine then.

- Access to Jenkins build artifacts

Our isobuilders have access to the ISO archive, so the easiest way is to push the new ISO there before generating the IUKs. There's no chance I do all this work myself during the 3.6 cycle. I could deploy a new Jenkins job if someone else writes the code that will build the needed IUKs and tells me how this job should behave (input, output, how it'll be triggered).

intrigeri wrote:

I could deploy a new Jenkins job [...]

Is Jenkins the right tool here? As a RM, I see no benefit.
To me, the ideal would be that all RMs got access to some VM on Lizard fulfilling the above criteria, and that I write some shell script in the release process that is simply copy-pasted into a terminal to do the IUK building in said VM over SSH. This way I can much more easily debug and find workarounds if there's problems.

intrigeri wrote:

I could deploy a new Jenkins job [...]

Is Jenkins the right tool here? As a RM, I see no benefit. To me, the ideal would be that all RMs got access to some VM on Lizard fulfilling the above criteria, and that I write some shell script in the release process that is simply copy-pasted into a terminal to do the IUK building in said VM over SSH.

I'm surprised that you think adapt+copy'n'paste manual build steps can be better than automated builds. I disagree and feel the need to argue in favour of automation. With an automated build system implemented e.g. as a Jenkins job:

- Better handling of failures: a Jenkins "failed" status is more obvious than a non-zero exit code that your shell may not warn you about (and even if it does, the RM may miss it when it's 2am and they're trying to finish a part of the release process before going to bed).
- We have to make the whole thing truly automatic modulo some well-defined parameters. One can't say the same of shell script snippets from our release process doc, which often need adapting, which requires the RM to reason correctly and without mistakes. In practice it follows that:
  - RMs tend to make mistakes, especially when they either do the task too often and thus occasionally stop thinking (you) or do the task not often enough to fully understand the instructions (bertagaz or myself). In this case it particularly matters because the exact same build must be done twice (once locally, once on lizard).
  - RMs tend to adapt/fix such instructions locally without improving the doc.
We've seen cases where neither you nor I ever followed the release process doc to the letter (e.g. in terms of ordering) in the real world, and then when the occasional RM tries to follow the doc, guess what, we notice it was never tested and can't possibly work. Anything that leaves room for such creativity (let's be nice with ourselves :) tends to create a gap between theory and practice. With a Jenkins job, we can be 100% certain that the build was done as designed and documented, that it works, and this increases the chances it'll work next time the occasional RM prepares a release.

- Build artifacts and logs are stored, tracked and published. This gives us an audit trail in case something goes wrong. That audit trail can be inspected by other Tails developers who can help fix the problem, as opposed to your terminal window. This also makes it easier to reproduce problems, because we know what exact code was run when the problem happened.
- We get something consistent with how we build and publish the released ISO (see MATCHING_JENKINS_BUILD_ID=XXX in the release process doc).
- We're getting a little bit closer to CI. Adding manual adapt'n'copy'n'paste shell scripts does exactly the opposite.

More generally, most arguments in favour of automating builds & releases (i.e. CI) work here. I guess I don't need to tell you about them :) I'm open to not blocking on this for the initial implementation of #15281, but I would be unhappy if it remained done manually for too long; we're too good at postponing stuff to the famous second iteration™ that never happens. So I'd like the manual solution you propose to be implemented in a way that naturally leans towards automation: e.g. a program, living in jenkins-tools.git, with a clear interface, that explicitly gets any input it cannot guess as parameters, and that exits with sensible exit codes. Even without running it on Jenkins, it'll already address some of the issues with the copy'n'paste approach that I listed above.
This way I can much more easily debug and find workarounds if there's problems.

As long as you have write access to the code that this Jenkins job would run and it's deployed when you push without requiring a sysadmin to do anything, I don't see a huge difference, but I see what you mean: there's one more level of indirection between you (as the RM) and the code that runs. My counter-argument is that the manual approach you're advocating for makes it harder for anyone else to "debug and find workarounds if there's problems".

#4 - 02/26/2018 08:56 PM - anonym

Wow, I honestly feel dumb and embarrassed for my comment above: as you seem to have caught on to, I have recently had a few episodes of "drowning in abstractions/indirection/layers/blah" while debugging fundamentally simple problems, which I think overwhelmed me and caused me to react defensively. Thanks for nicely articulating some timely reminders of why things are the way they are, for overall good reasons! ;)

intrigeri wrote:

I'm open to not block on this for the initial implementation of #15281 but I would be unhappy if it remained done manually for too long: we're too good at postponing stuff to the famous second iteration™ that never happens. So I'd like the manual solution you propose to be implemented in a way that naturally leans towards automation: e.g. a program, living in jenkins-tools.git, with a clear interface, that explicitly gets any input it cannot guess as parameters, and that exits with sensible exit codes. Even without running it on Jenkins it'll already address some of the issues with the copy'n'paste approach that I listed above.

Fully agreed!

This way I can much more easily debug and find workarounds if there's problems.
As long as you have write access to the code that this Jenkins job would run and it's deployed when you push without requiring a sysadmin to do anything, I don't see a huge difference, but I see what you mean: there's one more level of indirection between you (as the RM) and the code that runs.

Yes, this is an actual concern that affects me. It's another thing like the tagged/time-based APT snapshot system -- I'm able to fix about half the issues I encounter, but for the tricky stuff I often end up urgently needing your help close to release time. That's pretty stressful, which there is enough of at that point in time anyway. I think a good enough remedy is to have you "on-call" for dealing with such problems for a few releases (incl. RCs, but less urgently) when deploying this -- under what terms is that possible, if at all?

This way I can much more easily debug and find workarounds if there's problems.

As long as you have write access to the code that this Jenkins job would run and it's deployed when you push without requiring a sysadmin to do anything, I don't see a huge difference, but I see what you mean: there's one more level of indirection between you (as the RM) and the code that runs.

Yes, this is an actual concern that affects me. It's another thing like the tagged/time-based APT snapshot system -- I'm able to fix about half the issues I encounter, but for the tricky stuff I often end up urgently needing your help close to release time.

I have no data I could check about such situations, but my feeling is that in these tricky cases, the kind of help you need is about helping you understand fine details of how the system works in corner cases, so that either you can work around/fix our stuff to avoid hitting corner cases, or I will make our code handle such corner cases better. I doubt that running the code locally vs.
remotely would make a big difference: without that understanding of these fine details, even if you could run/debug the code locally, you would sometimes not be in a position to decide what's a suitable fix. I think it'll be just the same for generating IUKs unless you learn enough Modern Perl and dive deep enough into our incremental upgrades design+implementation to be fully autonomous in this area, which IMO has a rather bad cost/benefit for Tails. Anyway, I don't have data to back this feeling and I suspect you don't either, so let's leave it at that given:

That's pretty stressful, which there is enough of at that point in time anyway.

This I totally understand and I want to take it into account!

I think a good enough remedy is to have you "on-call" for dealing with such problems for a few releases (incl. RCs, but less urgently) when deploying this -- under what terms is that possible, if at all?

I don't understand why this would be needed specifically for the Jenkins deployment: as long as the RM can fallback to running/debugging/fixing the script locally, even if the Jenkins job does not do what the RM needs we're good, no? Or were you asking even for the case when Jenkins is not involved and the RM runs the script locally?

Next step: specify the dependencies, input and output of the script. Leaving this on anonym's plate for now but I could take over this step if it's one task too many for you. Once we have this we can:

- find someone to implement it (I'm thinking of our new FT colleagues)
- design the Jenkins job that will run this script (e.g. it might be that the script's input includes info that's too hard for a program to guess, and then the job will need whoever runs it to fill some parameters that'll be converted to input for the script)

This is not on our roadmap. What matters is not particularly that this is done on Jenkins, it's that this is done on a machine from which lizard can quickly download a big pile of IUKs.
Input:

- commit of tails-iuk to checkout (otherwise it'll be too hard to test things like #17262, and to validate code changes with this new CI job before merging them); if possible, make this optional and default to master
- commit of tails-perl5lib to checkout (same as tails-iuk)
- SOURCE_DATE_EPOCH
- version of Tails the generated IUK shall upgrade to
- list of Tails versions the generated IUK shall upgrade from (with #15281 these will become "initially installed versions"; until then, they are "currently running version")
- We probably need the possibility to specify extra arguments that will be passed to tails-create-iuk. E.g. #9373 adds --union-type (aufs|overlayfs), and tails-create-iuk refuses running if we pass it args it does not support.

I have a working PoC: https://jenkins.tails.boum.org/job/build_IUKs/. Known issue: workspace clean up fails, which breaks the next build on the same ISO builder. I think it's because some temporary files are owned by root so the wrapper script should clean this up itself using sudo, or something.

#30 - 12/08/2019 09:19 AM - intrigeri - Target version changed from Tails_4.3 to Tails_4.2

Technically, the first time we'll really need this is during the 4.3 release process (assuming #15281 makes it into 4.2), but I'd really like to have something ready enough so I can test this during the 4.2 release process, so if anything goes wrong, I have time to fix things up.

#31 - 12/08/2019 09:30 AM - intrigeri

intrigeri wrote:

Known issue: workspace clean up fails, which breaks the next build on the same ISO builder. I think it's because some temporary files are owned by root so the wrapper script should clean this up itself using sudo, or something.

This only happened when passing incorrect arguments to tails-create-iuk from the wrapper script, which was fixed then ⇒ case closed.

Next steps:

1. Design how to transfer the CI-built IUKs to rsync.lizard and validate them
2. Implement + document the above
3.
Try this out during the 4.2 release process

#32 - 12/17/2019 12:02 PM - intrigeri - Related to Bug #17361: Streamline our release process added

#33 - 12/18/2019 11:20 AM - intrigeri

FTR, https://jenkins.tails.boum.org/job/build_IUKs/31/ reproduced 2 IUKs that kibi built locally and published earlier this week, that upgrade systems to 4.1.1! Here are the build parameters I've set for this Jenkins build:

```
IUK_COMMIT=3.5
PERL5LIB_COMMIT=Tails-perl5lib_2.0.2
SOURCE_DATE_EPOCH=1576450285
NEW_VERSION=4.1.1
SOURCE_VERSIONS=4.0 4.1
```

#34 - 12/24/2019 12:33 PM - intrigeri - Parent task deleted (#15281)

Let's allow ourselves to close #15281 even if this is not done in time for 4.2.

#35 - 12/24/2019 12:34 PM - intrigeri - Related to Feature #15281: Stack one single SquashFS diff when upgrading added

#36 - 12/25/2019 11:41 AM - intrigeri

Updated next steps:

1. adjust wrap_tails_create_iuks to the fact the iuk & perl5lib code bases are moving to tails.git
2. design how to transfer the CI-built IUKs to rsync.lizard and validate them
3. implement + document the above
4. try this out during the 4.2 release process

#37 - 12/26/2019 11:24 AM - intrigeri

intrigeri wrote:

intrigeri wrote:

Known issue: workspace clean up fails, which breaks the next build on the same ISO builder. I think it's because some temporary files are owned by root so the wrapper script should clean this up itself using sudo, or something.

This only happened when passing incorrect arguments to tails-create-iuk from the wrapper script, which was fixed then ⇒ case closed.

Nope, this also happened on success. Fixed with 15016398d216a7f1512bab321064bcd5524db4.

#38 - 12/29/2019 08:04 PM - intrigeri - Related to Feature #17385: Grow /home on rsync.lizard added

#39 - 12/29/2019 08:28 PM - intrigeri - Status changed from In Progress to Needs Validation - Feature Branch set to feature/15281-single-squashfs-diff

intrigeri wrote:

Updated next steps:

1.
adjust wrap_tails_create_iuks to the fact the iuk & perl5lib code bases are moving to tails.git
2. design how to transfer the CI-built IUKs to rsync.lizard and validate them
3. implement + document the above

Done. Next steps:

1. The whole branch this is part of should be merged in time for 4.2 (so I expect segfault will do a quick review of what I did here). @anonym, if you have some time for this, I would love your opinion too on the commits that reference this ticket: you're a RM and we've designed the reproducibility verification stuff together initially :)
2. I'll try this out during the 4.2 release process and will fix the problems I notice then. It'll be a bit tricky because we will only generate IUK v1 for 4.2, but I'll manage, somehow. In any case, I've committed to be around to help @anonym when he goes through this process, during the 4.3 release process.

#40 - 01/06/2020 06:45 PM - intrigeri - Status changed from Needs Validation to In Progress

Applied in changeset tails|d163abc00247672b75bb92449c57c91e27c51e03.

#41 - 01/06/2020 07:54 PM - intrigeri - Status changed from In Progress to Resolved - Assignee deleted (intrigeri)

I just had to adjust the doc a little bit; apart from that, it went well and I could publish Jenkins' IUKs!

#42 - 01/09/2020 09:39 AM - intrigeri - Related to Feature #17412: Drop the need for dedicated temporary storage space for IUKs on rsync.lizard added
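The input specification discussed earlier in this thread (tails-iuk and tails-perl5lib commits, SOURCE_DATE_EPOCH, the target version, the list of source versions, and extra pass-through arguments for tails-create-iuk) suggests a command-line interface along the following lines. This is only a hedged Python sketch of such an interface, not the actual wrap_tails_create_iuks script, which is written against Tails' own repositories; every flag and function name below is illustrative.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative CLI for an IUK-building wrapper (names are hypothetical)."""
    p = argparse.ArgumentParser(description="Build IUKs reproducibly")
    # The thread suggests making the commits optional, defaulting to master.
    p.add_argument("--iuk-commit", default="master",
                   help="commit of tails-iuk to check out")
    p.add_argument("--perl5lib-commit", default="master",
                   help="commit of tails-perl5lib to check out")
    p.add_argument("--source-date-epoch", type=int, required=True,
                   help="SOURCE_DATE_EPOCH for reproducible builds")
    p.add_argument("--new-version", required=True,
                   help="version of Tails the generated IUK upgrades to")
    p.add_argument("--source-versions", nargs="+", required=True,
                   help="Tails versions the generated IUK upgrades from")
    # Pass-through arguments for tails-create-iuk, e.g. --union-type overlayfs.
    p.add_argument("extra", nargs="*",
                   help="extra arguments forwarded to tails-create-iuk")
    return p

# Example invocation, using the parameters of the 4.1.1 Jenkins build
# quoted in this thread.
args = build_parser().parse_args([
    "--iuk-commit", "3.5",
    "--perl5lib-commit", "Tails-perl5lib_2.0.2",
    "--source-date-epoch", "1576450285",
    "--new-version", "4.1.1",
    "--source-versions", "4.0", "4.1",
])
```

Expressing every input as an explicit parameter with a default where possible is exactly what makes such a script runnable both locally by the RM and as a parameterized Jenkins job.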
MAESTRO: An Open-source Infrastructure for Modeling Dataflows within Deep Learning Accelerators

Hyoukjun Kwon* Michael Pellauer† Tushar Krishna*
* Georgia Institute of Technology † NVIDIA
hyoukjun@gatech.edu mpellauer@nvidia.com tushar@ece.gatech.edu

Abstract—We present MAESTRO, a framework to describe and analyze CNN dataflows, and predict performance and energy-efficiency when running neural network layers across various hardware configurations. This includes two components: (i) a concise language to describe arbitrary dataflows and (ii) an analysis framework that accepts the dataflow description, hardware resource description, and DNN layer description as inputs and generates buffer requirements, buffer access counts, network-on-chip (NoC) bandwidth requirements, and roofline performance information. We demonstrate both components across several dataflows as case studies.

1 INTRODUCTION

Deep learning techniques, especially convolutional neural networks (CNN), have pervaded vision applications across image classification, face recognition, video processing, and so on due to the high degree of accuracy they provide [1], [2]. Both industry and academia are exploring specialized hardware accelerator ASICs as a solution to provide low latency and high throughput for CNN workloads [3], [4], [5], [6]. The convolution operation is a deeply nested multiply-accumulate loop. For throughput and energy efficiency, each accelerator chooses different strategies to manipulate the loop order/tiling of the convolution operations and the spatial/temporal mapping of data onto compute units, which we collectively refer to as dataflow. The throughput and energy efficiency of a dataflow change dramatically depending on both the DNN topology (i.e., layer shapes and sizes) and accelerator hardware resources (buffer size and network-on-chip (NoC) bandwidth).
This demonstrates the importance of dataflow as a first-order consideration for deep learning accelerator ASICs, both at design-time, when hardware resources (buffers and interconnects) are being allocated on-chip, and at compile-time, when different layers need to be optimally mapped for high utilization and energy-efficiency. We present MAESTRO (Modeling Accelerator Efficiency via Spatio-Temporal Resource Occupancy), an open-source tool for modeling and evaluating the performance and energy-efficiency of different dataflows. Fig. 1 presents an overview. Our key novelty is a methodology to formally describe dataflows using concisely defined pragmas over nested loops of convolutions. The "stationary"-based taxonomy introduced by prior work such as Eyeriss [3] is captured by one of our pragmas. We provide a concise DSL that allows users to input the neural network structure (shape/size), hardware resource description (buffer size and interconnect topology/bandwidth), and desired dataflow using our pragmas. MAESTRO computes the maximum performance (roofline throughput) and hardware resources (buffer sizes and NoC bandwidth) required to achieve this performance. It also produces buffer access and link traversal counts across the memory hierarchy, which can be plugged into an energy model.

2 MODELING DATAFLOW IN CNN ACCELERATORS

2.1 Convolutional Neural Networks (CNN)

CNNs consist of convolutional, pooling, and fully-connected layers. Among these layers, convolutional layers are significant in the amount of computations and the size of required weight/input data [7]. As presented in Fig. 2, the computation in convolutional layers is often implemented as a sliding window operation with MACC (Multiplication-Accumulation). The operations can be described as six nested for-loops, as shown in Code 1. In the accelerator model, each processing element (PE) has a private local buffer, the "L1 buffer". The PEs are interconnected internally to each other, and to a shared "L2 buffer" by a network-on-chip (NoC), as shown in Fig. 1.
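Code 1 itself is not reproduced in this extract; the standard six-loop convolution it refers to can be sketched as follows, a plain Python reconstruction of the usual loop nest over output channels (K), input channels (C), output rows/columns (Y/X), and filter rows/columns (R/S), assuming unit stride and no padding:

```python
def conv_layer(I, W, K, C, Y, X, R, S):
    """Naive six-loop convolution (the loop nest Code 1 refers to).

    I: input,   indexed as I[c][y][x], of size C x (Y+R-1) x (X+S-1)
    W: weights, indexed as W[k][c][r][s]
    Returns the output O[k][y][x]; unit stride, no padding assumed.
    """
    O = [[[0.0] * X for _ in range(Y)] for _ in range(K)]
    for k in range(K):              # output channels
        for c in range(C):          # input channels
            for y in range(Y):      # output rows
                for x in range(X):  # output columns
                    for r in range(R):      # filter rows
                        for s in range(S):  # filter columns
                            O[k][y][x] += W[k][c][r][s] * I[c][y + r][x + s]
    return O
```

Since every multiplication in this nest is independent, the loops can be freely reordered and tiled, which is precisely the degree of freedom the dataflows discussed below exploit.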
This is an abstract model - a real implementation might group multiple PEs together to create a larger PE, the L1 could be a single latch or a FIFO or a scratchpad, the NoC could be hierarchical buses [3], systolic, tree, crossbar, and so on.

2.3 CNN Dataflows

Since the multiplications in Code 1 are all independent, accelerator architectures can re-order and sub-tile the computations for efficiency and parallelism. This is necessary to limit off-chip accesses, because the size of the input feature map (up to 6.5MB in VGG16) and weight values (up to 4.5MB in VGG16) is too large to be loaded at once onto L1 buffers. These sizes also mean that strategies for sequencing the computation and splitting (tiling) data onto spatially deployed PEs is a large design space exploration problem by itself.

2.3.1 Definition of Dataflow

Because of the complexity of the loop nest as shown in Code 1, choosing the best dataflow for a given network layer on a given accelerator is not intuitive to describe or design. To address this challenge, we take a *variable-centric* approach, which regards the mapping of weights/inputs as equivalent to assigning iteration variables to each PE. For example, if we assign \((k, i, y, x, r, s) = (3, 4, 5, 6, 0, 2)\) to a PE, we observe that the PE should receive nine weights \((W[3][4][0-2][0-2])\) and one input \((I[4][5][6])\). This approach provides a better abstraction than identifying the exact weights/inputs to assign to each PE (a data-centric approach) because it converts the 6D array mapping problem into a six-variable assignment problem.

2.3.2 Elements of Dataflow

**Loop-ordering.** Changing the order of loops in Code 1 affects the data reuse patterns in each PE. For example, if we iterate in the order of \(K \rightarrow C \rightarrow Y \rightarrow X \rightarrow R \rightarrow S\) and assign each \(X\) loop iteration across PEs, weight values within the same channel can remain in each PE until the channel iteration variable \(c\) increases.
However, if we iterate in an order of \(K \rightarrow R \rightarrow S \rightarrow C \rightarrow X \rightarrow Y\) and assign each \(C\) loop iteration across PEs, input feature map values within the same channel can remain in each PE. We probably cannot keep weight values, because the reuse time spans two inner loops \((X, Y)\), much larger than a typical PE's L1 buffer size. This means that the *same* weight might be read multiple times over the course of the computation from the shared L2 buffer. Thus, based on the CNN sizes, the buffer size in each PE, and the dataflow, the number of L1 (local registers) and L2 (shared buffer) reads/writes of a dataflow changes, as we will show later in Fig. 7.

**Spatial and Temporal Mapping.** Spatial mapping enables individual PEs to process different sets of iteration variables at the same time. When the number of compute units or local buffers is not sufficient to cover a given spatial mapping, implicit temporal mapping is necessary: the original dataflow graph is conceptually "folded" onto a physical unit over time. For example, if the feature map width \(X = 4\) and we spatially map \(X\) over two PEs, PE0 and PE1 can process \(x_i = 0\) and \(x_i = 1\), respectively, at cycle 0. Then, PE0 and PE1 process \(x_i = 2\) and \(x_i = 3\), respectively, at cycle 1. Temporal mapping indicates that a PE can process different sets of iteration variables over time, which effectively shifts the compute unit requirement for full parallel execution into the time domain. For example, a PE could first process \(k_i = 0\) between cycles 0-9, and \(k_i = 1\) between cycles 10-19. Temporally mapped variables can result in *stationary* data (inputs, weights, or partial outputs), using the taxonomy introduced in Eyeriss [3], which requires a local buffer to store this stationary data over time.

**Tiling.** The granularity of spatial and temporal mapping can be onto separate PEs, or coarse-grained groups of PEs, which we call a tile.
For example, Flexflow [8] organizes PEs into multiple rows and assigns the operations for one output pixel to one of the PE rows. In this case, variables \(K, X, Y\) are temporally mapped on a PE row but variables \(C, R, S\) are spatially mapped to PEs in the PE row. Tiles can also be organized into more than two dimensions.

2.3.3 Describing Dataflows

Based on the elements discussed in Section 2.3.2, we describe a method to formally describe dataflows using a combination of five pragmas that covers all the elements of dataflow - loop ordering, spatial/temporal mapping, and tiling - as presented in Table 1. Each pragma is followed by a loop variable to indicate its target variable. MAESTRO processes pragmas in two phases: tile construction and variable mapping. In the first phase, tile construction, MAESTRO processes Tile pragmas from the inner to the outer loop to recursively construct the tile structure, as shown in the example for Tile in Table 1. By this process, MAESTRO assigns a tile structure to each loop so that it can map the corresponding variables onto the right tiles.

2.4 Data Reuse

Maximizing data reuse is the prime target of many accelerators, as it improves both throughput and energy efficiency. Data reuse reduces the number of energy-consuming L2 reads and writes (which translates to fewer DRAM reads and writes), in turn translating to reduced bandwidth requirements from the L2 and the NoC implementations within the accelerator. We define three classes of data reuse.

**Temporal data reuse (Stationary data):** Temporal data reuse opportunity, which is the same as the stationary taxonomy introduced by Eyeriss [3], is based on non-shared variables among data classes. For example, if the innermost spatially-mapped loop variable is \(K\), although the target weight pixel changes every loop-\(K\) iteration, the target input pixel does not, because inputs do not have a \(K\) dimension.
The stationary data class is determined by the loop order and the *innermost spatially mapped loop*. For example, weights in a row-stationary dataflow [3] are reused in the temporal dimension as illustrated in Fig. 3, in which the weight is stationary in the \(K\) and \(C\) dimensions. Since loops \(R\) and \(S\) are merged with the \(Y\) dimension, each PE has unique R/S values; thus the weight is fully stationary in each PE for each loop-\(Y\) iteration. To exploit temporal reuse, an accelerator needs a local L1 buffer in each PE.

**Spatial data reuse (Multicast data):** Spatial data reuse opportunity is based on temporal mapping and sliding window halo. For example, in Fig. 3 (b), because of the halo from Spatial_Map (2, 1), \(I_1\) is shared between PE1 and PE2 at the same time. Rather than reading the data twice from the L2 buffer, an accelerator can read it only once and multicast \(I_1\) to PE1 and PE2. To exploit spatial reuse, an accelerator needs a NoC that supports multicasting (bus, tree, etc.).

**Spatio-temporal data reuse (Local-forwarded data):** Also called producer/consumer parallelism, this is based on sliding window halo over implicit temporal mapping. For example, in the example for Spatial_Map in Table 1, \(a = 2\) is reused over time stamps 0 and 1, in different tiles. The corresponding input/weight values can be directly forwarded to T0 from T1, instead of reading them from the prefetch buffer (L2). To exploit spatio-temporal reuse, an accelerator needs local forwarding links between PEs. MAESTRO assumes all three data reuse opportunities are provided by the accelerator to evaluate the potential of the dataflow. We describe the details next.

TABLE 1: Dataflow description pragmas. The elemental hardware unit is either a single PE or a group of PEs if the Tile pragma is used.

<table> <thead> <tr> <th>Pragma Syntax</th> <th>Description</th> <th>Semantics (When used over loop 'A')</th> <th>Example</th> </tr> </thead> <tbody> <tr> <td>Tile (Sz)</td> <td>Recursively constructs new tile structures using the tile structure of the next inner loop. The tile structure in the innermost loop is PEs, and the Tile pragma allows constructing arbitrary-dimensional tiles from a 1D PE array.</td> <td>Temporal_Map before Tile pragma</td> <td>Tile (2) A</td> </tr> <tr> <td>Temporal_Map (Sz, Ofs)</td> <td>Map Sz (mapping size) variables on every tile in the tile structure at the loop on which this pragma is used. When all the inner loops finish, the index of mapped variables is increased by stride Ofs (offset).</td> <td>Temporal_Map at loop A</td> <td>Temporal_Map (2, 2) A</td> </tr> <tr> <td>Spatial_Map (Sz, Ofs)</td> <td>Similar to Temporal_Map, map Sz variables on every tile in the tile structure at the loop. However, within adjacent tiles, mapped variables are increased by stride Ofs, which maps different variables on each tile. Involves implicit temporal mapping (folding) when the number of tiles cannot cover the entire loop.</td> <td>Spatial_Map before Tile pragma</td> <td>Spatial_Map (2, 1) A</td> </tr> <tr> <td>Merge</td> <td>Combine loop A and its upper loop (loop B in the semantics and example) to form a flattened loop.</td> <td>Merge A</td> <td>merge A</td> </tr> <tr> <td>Unroll</td> <td>Break loop A and write all the iterations of A into independent instructions in its upper loop (loop B in the semantics and example).</td> <td>Unroll A</td> <td>unroll A</td> </tr> </tbody> </table>

(a) Syntax (b) Example

Fig. 4: The syntax of the MAESTRO DSL and an example of it. (a) The unit of size is the number of elements (e.g., L1Size 256 indicates a size of 256 elements, 512 bytes with 16-bit fixed point data). We use * as a shorthand for semicolon-terminated repetition. (b) The example is for the WS dataflow from Table 2.

3 MAESTRO FRAMEWORK

3.1 MAESTRO Domain-specific Language

We present the syntax of the MAESTRO DSL in Fig.
4 (a), which consists of the DNN layer description, hardware description, and dataflow description. For the layer description, users can specify the shape and size of each dimension in a convolutional layer. For the hardware description, users can specify the L1 (i.e., private/local) and L2 (i.e., shared/global) buffer (i.e., FIFO/scratchpad) sizes and the NoC bandwidth for L2 ingress/egress traffic. The dataflow description is specified using the pragmas in Section 2.3.3. Fig. 4 (b) shows an example.

Fig. 5: MAESTRO analysis engine description. For simplicity, we omit edge conditions around temporal mappings.

3.2 MAESTRO Analysis Engine

Fig. 5 provides an overview of how MAESTRO analyzes the given dataflow. As described in Section 2.3.3, MAESTRO first processes the given dataflow in two steps: tile construction and variable mapping. After the dataflow is parsed, the analysis mainly focuses on the innermost spatial loop. This is because the innermost spatial loop has the finest granularity of spatial processing, and the cost of a dataflow is closely related to the innermost spatial loop, as we show in Fig. 5. Data reuse is abstracted in the UTxyz function, which identifies unique values of a loop variable. Based on the relationship between dataflow and costs defined in Fig. 5, MAESTRO lets users know if the given hardware resources are enough to run the dataflow without additional temporal folding beyond what is specified in the dataflow description. Buffer access counts, in particular, can be integrated into energy analysis tools to estimate the energy consumption of a dataflow.

TABLE 2: A list of dataflows with descriptions based on the pragmas introduced in Table 1 and corresponding accelerators.

<table> <thead> <tr> <th>Accelerator</th> <th>Dataflow Style</th> <th>Dataflow</th> </tr> </thead> <tbody> <tr> <td>Example for this work</td> <td>No Local Reuse (NLR)</td> <td>TEMPORAL_MAP (4,4) K → TEMPORAL_MAP (1,1) C → TEMPORAL_MAP (1,1) Y → TEMPORAL_MAP (1,1) R → TEMPORAL_MAP (1,1) S → TEMPORAL_MAP (1,1) X → UNROLL R → UNROLL S</td> </tr> <tr> <td>Example for this work</td> <td>Weight Stationary (WS)</td> <td>TEMPORAL_MAP (1,1) K → TEMPORAL_MAP (1,1) C → TEMPORAL_MAP (3,1) Y → SPATIAL_MAP (1,1) X → UNROLL R → UNROLL S</td> </tr> <tr> <td>ShiDuan [9]</td> <td>Output Stationary (OS)</td> <td>TEMPORAL_MAP (1,1) K → TEMPORAL_MAP (1,1) C → TEMPORAL_MAP (1,1) Y → SPATIAL_MAP (3,1) X → UNROLL R → UNROLL S</td> </tr> <tr> <td>Eyeriss [3]</td> <td>Row-stationary</td> <td>TEMPORAL_MAP (1,1) K → TEMPORAL_MAP (1,1) C → TEMPORAL_MAP (3,1) Y → SPATIAL_MAP (1,1) X → TEMPORAL_MAP (1,1) Y → TEMPORAL_MAP (1,1) X → SPATIAL_MAP (1,1) K</td> </tr> <tr> <td>NVDLA [6]</td> <td>Weight Stationary</td> <td>TEMPORAL_MAP (1,1) K → TEMPORAL_MAP (1,1) S → TEMPORAL_MAP (64,64) C → TEMPORAL_MAP (1,1) Y → TEMPORAL_MAP (1,1) X → SPATIAL_MAP (1,1) K</td> </tr> </tbody> </table>

Fig. 6: Tradeoff of the dataflows presented in Table 2 with 64 PEs under a steady iteration state (not at the edge of any direction) over VGG16's convolutional layers 1 and 11. The X-axis lists the dataflow styles we evaluate. The first two Y-axes plot the bandwidth (Gbps) and L1 memory (KB) requirements for the entire accelerator, which are the minimum numbers needed to support the dataflow without a bottleneck from the NoC or buffers. The last Y-axis plots the roofline throughput (assuming no congestion from the NoC or L1/L2 reads/writes).

Fig. 7: The breakdown of the energy consumption of MAC and L1/L2 accesses for the five dataflows from Table 2. The access counts generated by MAESTRO are multiplied by appropriate energy values from Cacti [10] at 28nm for a 2KB L1 scratchpad in each PE and a 1MB shared L2 buffer. The values are normalized to the MAC energy consumption of NLR.

In the following section, we present analysis results of five dataflows using MAESTRO, in comparison to the original systems, which vary widely in number of PEs, buffer sizes, network topology, and so on. We gather useful insights across the dataflows and across layers. Between dataflows, we observe, as expected, that NLR has the least L1 memory requirement (as it does not perform temporal reuse at the PE), and therefore has significant L2 energy consumption. For CONV1, NVDLA has the highest energy consumption among all dataflows. However, for CONV11, this trend reverses: NVDLA's energy consumption remains similar, and is 2× lower than NLR, WS, and Shi. This is because CONV1 has just 3 input channels, while CONV11 has 512; NVDLA is tuned for operating on layers with many input channels (as TEMPORAL_MAP (64,64) on variable C of the NVDLA dataflow in Table 2 shows), making it inefficient for early layers, since it still needs to pay the energy cost of vector reads, but much more efficient than other dataflows in later layers. For the same reason, NVDLA requires extremely high NoC bandwidth in CONV11 (compared to CONV1), since more partial sums get mapped on each PE of NVDLA with CONV11, leading to more L1-to-L2 communication for partial sums and outputs.
The RS dataflow is observed to be the most energy-efficient due to very few L2 reads, especially for CONV11, demonstrating the best input and weight reuse. Compared to NVDLA, it has much worse roofline throughput in CONV1, but slightly better in CONV11. The Shi dataflow has the highest L1 buffer requirement among all dataflows, as it spatially replicates variable X across 3 PEs.

5 Conclusion

Dataflow analysis is a high-dimensional problem that requires defining clear relationships among entities with different dimensions, such as the problem (6D), time (1D), PE mapping (1D or more, based on tiling), buffer address space (1D), NoC bandwidth (1D), and so on. In this work, we proposed a systematic approach to deal with such a high-dimensional problem by dividing it into each dimension. Our dataflow pragmas are designed to describe the dataflow behavior in each dimension, and our framework, MAESTRO, receives a dataflow description written in the pragmas along with hardware/DNN layer descriptions to analyze the efficiency of the given dataflow. Using dataflows from recent accelerators, we demonstrated that the efficiency of each dataflow varies dramatically based on the CNN layer sizes, and that each dataflow represents a design space of tradeoffs between NoC bandwidth requirements, memory requirements, energy, and throughput. We believe that our taxonomy and analysis framework for dataflow exploration will help CNN accelerator architects to explore how much these benefits apply to their specific approach.

1. However, for DLA and Shi, a vector read of size 16 and 4 respectively from the L1 is assumed in the energy calculations, instead of multiple expensive scalar reads, as their dataflows are tuned for such an implementation.
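Looking back at Section 2.3.1, the variable-centric view can be made concrete with a small sketch: given the iteration-variable values assigned to a PE, enumerate the weight coordinates that PE needs. The helper below is purely illustrative (it is not part of MAESTRO), and it reads the example's second assigned variable as the input-channel index c.

```python
from itertools import product

def weights_needed(k_vals, c_vals, r_vals, s_vals):
    """Enumerate the weight coordinates W[k][c][r][s] implied by the
    iteration-variable values assigned to one PE (illustrative helper)."""
    return set(product(k_vals, c_vals, r_vals, s_vals))

# Section 2.3.1's example: k = 3, c = 4, with r and s spanning 0..2,
# yields the nine weights W[3][4][0-2][0-2].
coords = weights_needed([3], [4], range(3), range(3))
```

This is the sense in which the 6D array-mapping problem reduces to a six-variable assignment problem: the data a PE must receive falls out mechanically from the variables assigned to it.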
Metrics Correlation and Analysis Service (MCAS)

Andrew Baranovski*, Dave Dykstra, Gabriele Garzoglio, Ted Hesselroth, Parag Mhashilkar, Tanya Levshina
Fermi National Accelerator Laboratory, Batavia, IL, USA
{abaranov, dwd, garzoglio, tdh, parag, tlevshin}@fnal.gov

Abstract. The complexity of Grid workflow activities and their associated software stacks inevitably involves multiple organizations, ownership, and deployment domains. In this setting, important and common tasks such as the correlation and display of metrics and debugging information (fundamental ingredients of troubleshooting) are challenged by the informational entropy inherent in independently maintained and operated software components. Because such an information pool is disorganized, it is a difficult environment for business intelligence analysis, i.e., troubleshooting, incident investigation, and trend spotting. The mission of the MCAS project is to deliver a software solution to help with the adaptation, retrieval, correlation, and display of workflow-driven data and of type-agnostic events generated by loosely coupled or fully decoupled middleware.

1. Introduction

The Grid is a common paradigm for sharing distributed resources. With the paradigm now arguably surpassing one decade of adoption by a wide variety of communities, the amount of information provided by distributed services, such as monitoring, discovery, troubleshooting, accounting, and auditing, is becoming increasingly difficult to manage in a coherent fashion. Because of the disjoint nature of all these data sources, the aggregation, transformation, and display of this distributed information is particularly challenging. To address this problem, the Fermi National Accelerator Laboratory has initiated the Metrics Correlation and Analysis Service (MCAS) project [1].
A core idea of this project is to factor out presentation and business analysis logic from available monitoring solutions into a standalone model, supporting common standards in data manipulation and presentation. The MCAS project prototyped several services, which rely on common techniques for data and information display aggregation. In particular, we have chosen portlet [2] technology to compose the troubleshooting and metrics analysis dashboard, and XAware [3] to drive the data-integration model.

Section 2 introduces the problem of analyzing disjoint data sources and proposes data co-display and integration as possible solutions. These solutions are discussed further in the following two sections: section 3 discusses advantages and disadvantages of available technologies for data co-display; section 4 describes the problems in data integration. Section 5 delves into the description of the MCAS architecture. Sections 6 and 7 discuss respectively the current and future work on MCAS. We conclude with section 8.

* To whom any correspondence should be addressed.

2. Uniform analysis of uniform data

Typically, operational problems cannot be easily visualized using any one particular display method. As part of the decision-making process, experts employ a variety of tools, which report data patterns that pinpoint incidents of already known problems. This variety is built on technologies that often accept incompatible data inputs and produce outputs that cannot be used within the same context. These incompatibilities restrict the ability to perform analysis across data domains. A solution to overcoming these limitations consists of promoting intuitive ways to process and represent the underlying monitoring or diagnostic information. In short, this solution promotes a uniform mechanism for analyzing uniform data.
Phase 1 of the MCAS project has focused on a minimalist approach to describing data in XML and on building a foundation to support complex metrics analysis and presentation. In this phase, the project has focused on creating a toolkit of display and analysis tools for presentation and co-display of data. The idea behind the toolkit is to incorporate typical troubleshooting practices into components that can be hosted by a common container environment. The design and implementation of the delivered software are based on concepts of information display and data integration.

3. Data co-display

Data co-display is one of the simplest ways to cross-analyze data. In the MCAS project, data co-display is implemented by aggregating independently designed and managed frames in a single web browser window. Each frame renders a different aspect of the input data. Not only does this approach decouple development and testing of the display components, but it also allows users to choose and change the presentation layout, along with the configuration details of each component, in a way that best represents the state of the system. We have evaluated two technologies for content aggregation: web widgets and JSR 168 Java Portlets. Both web widgets and JSR 168 Java Portlets offer a framework for developing code that can coexist in the context of a single web page. Ultimately, such a framework allows developers to plug their code into service containers (iGoogle, Netvibes, MySpace). The function of the container is to augment all code with features that are common across all elements of the display. These features include widget setup and display position, configuration editing, help, and persistence of configuration. Web widget containers generate a front page using the developer's scripts together with the framework code required to support the common functions. Finally, the generated code is rendered by the browser.
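The container mechanism just described can be sketched as a toy in Python (all names here are hypothetical; real containers emit far richer markup and also handle configuration persistence and layout):

```python
# Hypothetical sketch of a widget container: it augments every widget's own
# render function with the chrome that is common to all widgets on the page.
def make_container(widgets):
    def render_page(config):
        frames = []
        for name, render in widgets.items():
            # Each widget renders only its own body from its own config...
            body = render(config.get(name, {}))
            # ...while the container supplies the shared framing.
            frames.append(f"<div class='widget' id='{name}'>{body}</div>")
        return "<html>" + "".join(frames) + "</html>"
    return render_page

# Two independently developed "widgets" composed into one front page.
page = make_container({
    "clock": lambda cfg: f"time in {cfg.get('tz', 'UTC')}",
    "note":  lambda cfg: "hello",
})({"clock": {"tz": "CET"}})
```

The key property the sketch preserves is that widgets never see each other: composition and common features live entirely in the container.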
The web widget approach is extremely scalable, as it is entirely client-based and does not require a server-side context. The significant drawback of this technology is its dependence on the provider's API, resulting in vendor lock-in for hosting and support. This problem, however, can be mitigated by thin sub-framework wrappers, which isolate project-wide widget behavior. In contrast to web widgets, the JSR 168 Java Portlet specification is entirely server-side. The specification sets forth a standard for developing Java servlets that generate independent, dynamic content. This content can be handled by the hosting environment for common management, configuration, and look-and-feel policies. Because the portlet specification is server-based, its major deficiencies are scalability and the inherent complexity of the code. We believe that these weaknesses, however, are tolerable compared to the vendor lock-in that is the principal disadvantage of web widgets. In addition, the specification clearly benefits from an open standard and a variety of implementations available as community-driven products (JBoss [4], Pluto [5], eXo [6]). In the MCAS project we have chosen to use the Java portlet specification, as implemented by the JBoss portal, as the primary means of composing and rendering the project's user interfaces.

4. Data integration

Our goal is to provide unified data analysis capabilities on disjoint, format-incompatible datasets. We approach this problem through the unification of the data itself. This unification can be accomplished via data format and semantic transformations. These transformations make the data compatible with display and analysis tools, enabling the analysis of the unified data. Uniformity of format and common semantics decouple the design of the analysis software from the details of the original input data.
This also allows the reuse of the transformed data in contexts not originally envisioned by the project, a major benefit in preserving the utility of the solution for perhaps several years. In general, the drawback of this approach is the high startup cost of developing transformations for several data formats while providing a usable model for describing the transformed data. In summary, the data integration layer must be able to interface to systems that work with diverse data sets. The question remains as to what format the unification (or data description language) must adhere to. In the MCAS project we have decided against attempting to set a specific format that fits all possible cases; rather, we have chosen to follow a solution that can capture common traits without unreasonably limiting future choices. In this context we have isolated two requirements: support for structured data and support for navigation of structured data. For the MCAS project, the first requirement is important for preserving the semantics of the input. The second requirement is critical for enabling a formalism for transforming that input. Among a variety of choices and available third-party implementations (key-value pairs, CORBA, ad hoc binary, XML, etc.), XML fits the task best. XML has naturally become the backbone supporting data unification, as well as the only means of describing the transformation workflows necessary for that unification.

5. Architecture

The MCAS workflow is organized among four players: the presentation layer, the content management system, the data integration layer, and the data sources. The presentation layer for the MCAS system is a portal or dashboard. This page has a unique address and typically offers the user a default view of the system. The content of the page is supported by the Content Management System (CMS).
The CMS is responsible for composing independent interface elements called "portlets" using: 1) the address of the user request; 2) static configuration parameters of the individual portlet; 3) run-time information provided by the data integration layer.

5.1. The Content Management System

The Content Management System (CMS) renders each portlet (P1, P2, P3, and P4, as shown in the figure) independently. Each portlet can be autonomously designed with some unique perspective on a particular system aspect. A collection of different portlets, describing different perspectives of the system, can be put together to form a dashboard. The dashboard view reflects the state of the entire system at a given time. The MCAS toolkit is based on high-level data presentation routines from Google, RRD [9], and lower-level JavaScript code developed in-house. The toolkit accepts data represented in a standard XML schema as its input. Each portlet is rendered using information provided by the data integration layer. The MCAS project uses JBoss as the Content Management System.

The data integration layer accesses a set of data sources and uses a collection of rules to transform and aggregate the retrieved content. This layer is responsible for providing content display to the CMS. The output is returned synchronously to the requesting party, which may be a portlet or another data integration rule set. Content display is expected to be available "immediately" following the user's request. Hence, the data integration layer must have real-time access to the data.

Figure 1: MCAS Architecture

In our model, the data integration layer defines all endpoints providing content to be displayed. These endpoints may be a cache of existing web pages or may invoke complex rules for transforming and aggregating data from other sources. The purpose of this layer is to respond to requests for data that is temporarily unavailable or in formats not compatible with the user interface.
In fact, the compatible formats will typically imply requirements on data format, data quality, and data availability. Ultimately, the data integration layer will re-use existing preconfigured resources to generate the response. In doing so, it may need to restrict the quantity and format of the data retrieved from the endpoints in order to generate a response within an acceptable amount of time. A data source is a primary provider or the source of the pre-processed information. It may come in various formats and support a variety of communication models. In the pilot stage of the project we focus on HTTP-accessible resources.

5.2. Data integration model

The data integration model uses XAware for data source access, inter-component data exchange, and data transformation scheduling. XAware has extensive support for structured content based on XML and a fairly rich set of features for content transformation. The product offers a well-defined model supported by a collection of meta-XML documents. These documents define how the data sources should be combined to transform the original data into a more usable format. This model is expressed through the following three entities: a driver, a component, and a document. An XAware driver is the meta-document which specifies how a data source can be accessed. It also specifies the data source address URL and other parameters required to retrieve content from the data source. An XAware component is a logical view of the data source. It is a meta-document which defines the XML schema of the data available from the output and input drivers. Its purpose is to bind together the XML schema of the data source with the transformations of the data. An XAware document is the ultimate product of the workflow. The document is built from the content provided by other documents or components. This model sets the structure of the output XML and defines the workflow of transformations for building that structure.
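To make the three roles concrete, here is a toy paraphrase in plain Python — not XAware's actual API; the URL, schema fields, and helper names are invented for illustration:

```python
# Toy paraphrase of the driver / component / document roles (not XAware code).
def driver(url, fetch):
    # Driver: "how a data source can be accessed" -- an address plus the
    # mechanism needed to retrieve content from it.
    return fetch(url)

def component(raw):
    # Component: a logical, schema-shaped view of the source; it binds the
    # raw content to a structured record (standing in for an XML schema).
    ts, value = raw.split("|")
    return {"timestamp": ts, "value": float(value)}

def document(parts):
    # Document: the ultimate product of the workflow, composed from the
    # content provided by components (or by other documents).
    return {"doc": parts}

fake_fetch = lambda url: "2009-03-21|0.91"   # stands in for an HTTP GET
out = document([component(driver("http://example.org/source", fake_fetch))])
```

The point of the separation is the same as in the paper's model: access details live only in the driver, schema binding only in the component, and output structure only in the document.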
Each document name is mapped to a URL endpoint accessible to the user through the XAware hosting environment. When a user contacts such a URL, the data integration layer triggers a series of transformations in strict accordance with the hierarchy of meta-object references comprising the overall document. The tree starts at the user contact point and ends with the driver. The resulting document is sent back in plain text over HTTP. All meta-documents describing the workflow are part of the hierarchy of the resulting document. This design places limitations on the flexibility of the output and on the options to dynamically influence the transformation sequence. In the initial phase of the project, these limitations are not critical. We have decided to invest in XAware features to transform informational content for presentation by the dashboard displays. However, given the weaknesses of the product, we are also investigating other platforms for data integration. One such alternative is the Mule Enterprise Service Bus (ESB).

5.3. Mule Enterprise Service Bus (ESB)

We have evaluated Mule ESB [7] as the initial prototype suite for the functions of the data integration layer. Attractive features of Mule ESB are its ability to access diverse data source transports, its support for message-based inter-component exchange, and its concurrency, synchronization control, and staged execution scheduling. One of the major features of the Mule ESB integration platform is the ability to set up data and execution flow in a transport/interface-agnostic way. In particular, Mule ESB offers: 1) code to translate, or to templatize the translation of, data formats; 2) options to manage synchronization, with choices ranging from fully synchronous to Staged Event-Driven Architecture (SEDA [8]) based solutions; 3) code that adapts out of the box to different transports (TCP, UDP, SMTP, FTP, JDBC, etc.).

5.4. Messaging

Message exchange is a clever way to decouple the contexts of different programs.
Messaging is a soft pattern which does not rely on a fixed database schema or file format. Rather, it is focused on data transport and synchronization issues. Consequently, within the Mule-message-enabled infrastructure there are options for using opaque payloads; the modeling of data access and aggregation is separated from the specifics of the type structure of the data sources. This is in contrast to the XAware model, in which transformation is assumed and all data exchanges have a predetermined schema. Messaging and data integration models have been used in an evaluation phase of the MCAS project to connect a set of message-enabled services into a workflow that refactors existing information portals. Information from a set of site efficiency sensors accessible through HTTP for the D0 experiment at Fermilab [10] has been assembled in a Content Management System. Figure 2 below depicts the data and execution flow of the implementation, which uses formal data transformations and the RRD processing engine to perform splitting, rescaling, and redrawing of the site efficiency data for the D0 experiment. In our Mule workflow implementation we used a SEDA-based message communication model. Mule messages communicate information and trigger component invocations. A generic Mule message consists of a message envelope and an enclosed opaque payload. A necessary condition for supporting opaque content in the transport message is the availability of an interface that allows adapting that content to a format supported by the Mule endpoint components (a business logic container). This functionality is a major source of flexibility, allowing developers to override Mule message endpoint interfaces and deploy customized transformations. Examples of such endpoint implementations are databases, mail servers, or RRD tool rendering engines. For the prototype evaluation phase we implemented an endpoint based on the RRD tool rendering engine.
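The envelope-plus-opaque-payload idea can be sketched minimally in Python (hypothetical names, not Mule's API): the transport only sees the envelope, and each endpoint brings its own adapter for the payload.

```python
# Minimal sketch of message-based decoupling (invented names, not Mule code).
class Message:
    def __init__(self, headers, payload):
        self.headers = headers   # envelope: routing/synchronization metadata
        self.payload = payload   # opaque to the transport layer

def rrd_endpoint(message, adapt):
    # The endpoint-supplied adapter interprets the opaque payload in the
    # format this endpoint expects; the transport never parses it.
    series = adapt(message.payload)
    return max(series)           # stand-in for "render with the RRD engine"

# Only the adapter knows the payload is a comma-separated series.
msg = Message({"to": "rrd"}, "1,5,3")
peak = rrd_endpoint(msg, lambda p: [int(x) for x in p.split(",")])
```

Swapping the adapter (or the endpoint) changes the interpretation of the payload without touching the transport, which is the flexibility the text attributes to overridable endpoint interfaces.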
The interface to this RRD tool component allows a user to formulate simple data manipulations and set the parameters of the resulting display. This module converts input data in the form of XML into a collection of RRD databases. The content of these databases is then used by the RRD tool to perform data manipulations using a command invoked from a template language similar to the example below.

```
ds(D0ProductionEfficiency)
ec=eff_code; ef=eff_fini;
RRD(CDEF:ec_adj=ec,0,100,LIMIT
    CDEF:ef_adj=ef,0,100,LIMIT
    LINE2:ec_adj#FF0000:eff_code(x100)
    LINE2:ef_adj#0000FF:eff_fini(x100))
imgsize(600,300)
```

This meta-code directs the workflow to access the D0ProductionEfficiency time series data source, split the data source content into two streams using the values of the data properties ("eff_code", "eff_fini"), and finally instantiate the actual RRD command while referencing the data factorized at the previous step. The data transformation engine is built by setting up a model to describe message-driven interactions between Mule ESB message endpoints. This particular schema is designed to execute a template-like language and has only one data source (the production efficiency endpoint). The embedded RRD template enables transformations over the split data streams. The result of the transformation is a new document with an image, which is sent to a portlet instance specifically configured to interact with this data integration model.

**Figure 2 An example of data integration workflow**

5.5. Architecture Summary

The Data Integration layer contains definitions of all URL endpoints that provide content for the display.
These endpoints may be proxies to existing web pages, or they may invoke rules for transforming and aggregating data retrieved from other sources. In all cases the data results in an XML representation of the resources accessed. The output of the proxies or rules is data with the semantics and format required by the portal Content Management System, which in this case is the JBoss portlet capability. The user interface is invoked in a web browser, displaying the portlets produced from the aggregated data. Though XAware as the data integration layer is strong in transformational capabilities, the static nature of its transformation sequence has led us to evaluate Mule as the integration component. Mule's focus on transport and synchronization issues allows these factors to be separated from schema and format. We implemented a workflow in Mule to collect D0 production efficiency data, transforming it into images to be displayed via portlets.

6. Current work

At this stage of the project, the MCAS team's focus is not on providing sophisticated means for the analysis of distributed metrics or diagnostics data. Instead, the project is focused on the development of easy-to-use and intuitive tools for displaying data already reported by existing information portals. Following that path, we have already developed a substantial collection of tools that fit the monitoring needs of Fermilab's USCMS facility operations. This software enhances the capabilities of existing experiments' portals by bringing in the expertise we have gained from analyzing the monitoring and troubleshooting use cases of users and operators of largely distributed software systems from several different experiments. The current capabilities of the software are described in the sections that follow. Fig. 3 shows a screen shot of a dashboard composed of Bar graph and Image display widgets.

6.1. Table view

The Table view portlet renders a data table model.
Each row of the table undergoes summarization, which determines the relation between columns of the model data and columns of the data displayed on the web page. The table view portlet supports custom sorting features, table size constraints, and user-assigned color coding of different weights. An example of the input data format is given below:

```xml
<table>
  <column name="c1">
    <row>value1</row>
  </column>
  <column name="c2">
    <row>value2</row>
  </column>
</table>
```

6.2. Bar graph

The Bar graph portlet displays a collection of indicators that visualize value pairs in relative proportions. These collections are often used to show the health or performance status of the system relative to predefined metrics. The idea behind these widgets is to condense information collected from a variety of "status" pages into a single compact document.

6.3. Time series

6.4. Image display

The image display portlet accepts an XML document that encodes a list of images as URL locations. The portlet generates HTML code that rescales those images such that they all fit inside a fixed-size portlet window. This portlet allows the user to select and combine image data generated by other services in order to increase the informational density of the dashboard page.

6.5. RRD data analysis and display tool

One of the goals of this project is to build a solution that offers a formal interface to the analysis of disjoint data sets. Perhaps the simplest approach to providing this feature is to re-use existing tools and their interfaces through a thin layer of templates and a known agreement on how the infrastructure should instantiate the actual interface invocation. Although we do not directly address data analysis in phase 1 of the project, we have prototyped an RRD-driven service for template-based transformation and display of XML time series.

7. Future work

One of the major obstacles to reusing the results of this work is the complexity of the underlying technologies.
To overcome this problem, we now plan to focus on understanding common data integration patterns in order to isolate reusable service components. Ideally, the roles of these components in a model may be changed by a simple rearrangement of the workflow or a reconfiguration of the parameters of the service.

8. Conclusions

In this project we have addressed the problem of building monitoring and analysis portals for systems that do not support common semantics for data describing system state. Our solution is based on technologies that allow data integration, transformation, and co-display. In the project we have focused on ease of refactoring and adaptation of data through a specially provisioned integration layer. We have used the XAware engine as the driver for that layer, leveraging its support for rules to implement the unification of semantics and to specify the format of the input. To address the problem of heuristic correlation of information, we have leveraged the data co-display idea by building HEP experiments' dashboards using JBoss-backed portlet technology. In the future we will focus on allowing more sophistication in how advanced data display tools can be used in conjunction with data generated by user-supplied transformation templates. The purpose of this work is to ensure a better fit of metric data analysis and correlation products to the monitoring and troubleshooting requirements of users and facility operators.

Acknowledgments

Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. This work was partially funded by the Center for Enabling Distributed Petascale Science (CEDPS) through the Scientific Discovery through Advanced Computing program (SciDAC2), Office of Science, U.S. Dept. of Energy.

References
Lessons from Deploying NLG Technology for Marine Weather Forecast Text Generation

Somayajulu G. Sripada, Ehud Reiter, Ian Davy and Kristian Nilssen

Abstract. SUMTIME-MOUSAM is a Natural Language Generation (NLG) system that produces textual weather forecasts for offshore oilrigs from Numerical Weather Prediction (NWP) data. It has been used for the past year by Weathernews (UK) Ltd for producing 150 draft forecasts per day, which are then post-edited by forecasters before being released to end users. In this paper, we describe how the system works, how it is used at Weathernews, and finally some lessons we learnt from building, installing and maintaining SUMTIME-MOUSAM. One important lesson has been that using NLG technology improves maintainability, although the biggest maintenance work actually involved changing data formats at the I/O interfaces. We also found our system being used by forecasters in unexpected ways for understanding and editing data. We conclude that the success of a technology owes as much to its functional superiority as to its suitability to the various stakeholders, such as developers and users.

1 INTRODUCTION

Modern weather service companies operate in a competitive market where the quality of their forecasts must show continuous improvement. Forecasters in these organizations predict weather under the guidance of weather data generated by Numerical Weather Prediction (NWP) models. In order to produce a weather forecast for a specific end user, they carry out two tasks:

1. To compile weather prediction information fulfilling the needs of the end user. This task requires them to use the NWP model along with other sources of weather data, such as satellite pictures, and their own forecasting experience.
2. To present the weather prediction information to the end user in a suitable medium, such as graphics or text.

The quality of the weather prediction is largely determined by the first task.
In particular, the knowledge that human forecasters bring to weather forecasting is crucial for higher-quality forecasts. From the quality perspective, forecasters are expected to spend more time on the first task than on the second. This is possible if the second task, presenting weather information to the end user, is automated. In collaboration with Weathernews (UK) Ltd, as part of the SUMTIME project, we have studied the task of presenting weather information textually to oil company staff supporting offshore oilrig operations in the North Sea. We have used a variety of knowledge acquisition (KA) techniques developed in the expert system community to understand how humans perform weather forecasting [1]. From the KA we have identified a number of requirements that the text generation solution should fulfill. A few of these are described below:

1. Consistency of Language Use: Individual forecaster variation was one of the initial observations we made while studying a corpus of human-written forecasts [2]. For example, forecasters differed significantly in their usage of time phrases such as 'in the evening' to mean either 1800 hours or 2100 hours. This could cause confusion to the end user about when exactly a predicted change occurs. Using language consistently is therefore an important requirement.
2. Sensitivity to End User: Another key observation was that the content of a forecast should depend upon the end user. Different oilrigs need different details of information in a weather forecast. An offshore oilrig in the North Sea might require different details from one in the Persian Gulf because of the differences in their structural designs.
3. Forecaster Control: It is important that the forecasters who use our system should be able to control its output without writing new code. The need for this might arise due to changes in end user requirements.
Because of their long working experience, forecasters understand the end user requirements and also understand how these affect the generation of forecast texts.

4. Data Analysis: Weather data used by forecasters consists of time series of various parameters such as wind speed and wind direction. Forecasters analyse the time series data to select important data points to include in the weather report. This data analysis needs to be integrated with text generation.

Using the knowledge gathered from our KA studies, we built SUMTIME-MOUSAM to generate textual marine weather forecasts for offshore oilrig staff [3]. The system has been in use at Weathernews for the past year producing draft forecasts, which are then post-edited by their forecasters before being sent to the end users. In section 2 we briefly describe SUMTIME-MOUSAM and explain how it is used at Weathernews. In section 3 we discuss the lessons we learnt from our experience of building, installing and maintaining SUMTIME-MOUSAM.

2 SUMTIME-MOUSAM

SUMTIME-MOUSAM follows the simple pipeline architecture for text generation [4], as shown in Figure 1. Input to SUMTIME-MOUSAM is obtained by sampling forecaster-edited data from the NWP model prediction at the required grid point. Table 1 shows a portion of the input to our system for 12 June 2002. The full data set includes approximately forty basic weather parameters predicted for 72 hours (3 days) from the issue time of the forecast. Table 2 shows the first-day forecast text generated by our system. The output forecast text is organized into various fields such as Wind, Wave, Weather etc. Each of the fields describes a few basic weather parameters. For example, the wind field of the forecast has been generated using the data shown in Table 1. Next, we briefly describe the major modules.

**Document Planning**: This stage is responsible for determining content and organizing (structuring) it coherently. We use Weathernews' recommended structure.
Content determination involves selecting 'important' or 'significant' data points from the underlying weather data to be included in the forecast text, using the bottom-up segmentation algorithm [5][6].

**Micro-planning**: This stage is responsible for lexical selection, aggregation and ellipsis. Here we have used the rules we have collected from our corpus analysis and other KA tasks.

**Realization**: Finally, this stage is responsible for producing grammatically acceptable output. We have developed a simple realiser that is tuned to the sublanguage of weather forecasts.

**Control Data**: This is an external data source that partially controls the text generation process in **SUMTIME-MOUSAM**. Forecasters can edit this data externally to tailor the output text. For example, error function data for controlling the segmentation process is specified here.

Weathernews manages their marine forecast production with the help of an internally developed tool called Marfors. Marfors calls **SumTime-Mousam** to produce a textual forecast from weather data (NWP model output or forecaster-edited data). Figure 2 shows the use of **SumTime-Mousam** at Weathernews. To start with, forecasters load the NWP data corresponding to a client's request into Marfors. They then use their meteorological knowledge to edit the NWP data and call **SumTime-Mousam** to generate an initial draft of the textual forecast for the required location. Forecasters use this initial draft largely to understand the underlying data set. This has been an unexpected use of our system; our system output was not intended for forecasters. Yet, we find the forecasters using our output to understand data and to use that understanding for editing data. The cycle of edit-data and generate-text is carried out for a number of iterations until the forecasters are satisfied with the weather data, which is shown as 'Edited data' in Figure 2.
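The bottom-up segmentation cited above for content determination works by starting from the finest piecewise-linear description of a weather time series and repeatedly merging the cheapest pair of adjacent segments until the approximation error would exceed a threshold. The following is a minimal illustrative sketch of that idea; the wind-speed data, the squared-error cost and the fixed threshold are our own simplifications, since in the real system the error functions are forecaster-configurable via the control data.

```python
def interp_error(points):
    """Sum of squared errors when the points are replaced by the straight
    line joining the first and last point of the segment."""
    (t0, v0), (t1, v1) = points[0], points[-1]
    err = 0.0
    for t, v in points:
        frac = (t - t0) / (t1 - t0) if t1 != t0 else 0.0
        err += (v - (v0 + frac * (v1 - v0))) ** 2
    return err

def bottom_up_segment(series, max_error):
    """Greedily merge adjacent segments until the cheapest merge would
    exceed max_error; return the retained (significant) points."""
    # finest segmentation: one segment per adjacent pair of points
    segments = [[series[i], series[i + 1]] for i in range(len(series) - 1)]
    while len(segments) > 1:
        # cost of merging each pair of adjacent segments
        costs = [interp_error(segments[i] + segments[i + 1][1:])
                 for i in range(len(segments) - 1)]
        i = costs.index(min(costs))
        if costs[i] > max_error:
            break
        segments[i] = segments[i] + segments[i + 1][1:]
        del segments[i + 1]
    # the significant points are the surviving segment boundaries
    return [seg[0] for seg in segments] + [segments[-1][-1]]

# hourly wind speeds: steady at first, then a marked rise
wind = [(0, 10), (3, 10), (6, 11), (9, 18), (12, 24), (15, 25)]
print(bottom_up_segment(wind, max_error=2.0))
# → [(0, 10), (6, 11), (12, 24), (15, 25)]
```

The surviving points, such as the onset of the rise at hour 6, are the candidates for mention in the forecast text; raising the error threshold yields fewer points and hence a shorter forecast.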
**SumTime-Mousam** is once again invoked to generate a final draft textual forecast, which is shown as 'Pre-edited text' in Figure 2. Forecasters use Marfors to post-edit the draft textual forecast to prepare the final forecast, which is shown as 'Post-edited text' in Figure 2. Figure 3 shows a screenshot of the Marfors editors used by forecasters at Weathernews to edit data and text.

In the past there have been efforts to use NLG technology for weather forecasting, for example ICWF [7], FOG [8] and MULTIMETEO [9]. For a more exhaustive list of weather forecast text generators please refer to the NLG system list published by John Bateman and Michael Zock [10]. Of particular significance is the system FOG (Forecast Generator) used by the Canadian Weather Agency. FOG produced public and marine weather forecasts in two languages, English and French. The main focus of the system has been on using Meaning Text Theory to generate bilingual output. Based on the descriptions of the system, it is not clear how it performs content selection. In comparison to FOG, where the focus is more on the linguistic theory for bilingual output, **SumTime-Mousam** focused more on analysis of weather data to determine 'important' information to be included in the weather report. MULTIMETEO is another weather forecast text generation system that follows an interactive approach to multi-lingual text generation. The system is equipped with a knowledge administration facility that allows forecasters to edit the output text generated in one language and uses that knowledge to produce forecasts in several other languages. Perhaps because of its commercial value, it is hard to find published material describing technical details of MULTIMETEO. Based on the available details, it appears that our control data is similar in concept to their knowledge administration.
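To make the control-data idea concrete: conceptually it is an external file of parameters (such as the segmentation error thresholds mentioned in section 2) that forecasters edit instead of code. The paper does not publish the actual file format, so the sketch below is entirely hypothetical; every key, value and profile name is invented for illustration, loosely echoing the 'fine', 'default' and 'coarse' variants the forecasters later created (section 3.3).

```python
# Hypothetical control data: SUMTIME-MOUSAM's real format is not published,
# and all keys and values below are invented for illustration only.
CONTROL_DATA = {
    # tighter thresholds -> more retained points -> more detailed text
    "fine":    {"wind_speed_max_error": 1.0},
    "default": {"wind_speed_max_error": 2.0},
    "coarse":  {"wind_speed_max_error": 4.0},
}

def segmentation_threshold(profile):
    """Look up the error threshold a segmenter would use for a profile."""
    return CONTROL_DATA[profile]["wind_speed_max_error"]

print(segmentation_threshold("default"))  # → 2.0
```

The point of such a file is that changing forecast verbosity becomes a data edit rather than a code change, which is exactly the forecaster-control requirement listed in the introduction.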
### 3 Lessons from SumTime-Mousam

Over the past year **SumTime-Mousam** has been used by forecasters at Weathernews to produce 150 forecasts per day. We have carried out a post-edit evaluation of our system in which we counted the number of edits (add, delete, replace operations) human forecasters performed while editing the 'pre-edited text' to produce the final 'post-edited text', as shown in Figure 2. For example, consider the following texts:

A. Pre-edit Text: SW 20-25 backing SSW 28-33 by midday, then gradually increasing 34-39 by midnight.

We first divide A and B into constituent phrases such as 'SW 20-25' and 'backing SSW 28-33 by midday'. Phrases from A are then aligned to phrases from B. Once the phrases are aligned, we count all the edits performed on each aligned phrase pair, such as 'then gradually increasing 34-39 by midnight' aligned to 'gradually increasing SSW 34-39'. For this example pair, we count a 'then' deletion, an 'SSW' addition and a 'midnight' deletion. More details of our evaluation work have been described in [11]. The results of our evaluation have been shown in the following table by classifying the mismatches. According to our evaluation, our rules for performing ellipsis need to be refined. Work is currently underway to learn better ellipsis rules.

In this section we present the lessons we learnt at different phases of SUMTIME-MOUSAM's lifecycle.

### 3.1 Lessons from the development phase

The very first version of SUMTIME-MOUSAM was developed based on a method suggested by one of the experts from Weathernews. This method used what can be called a template-based approach to text generation [12]. Essentially, a template-based approach is based on manipulation of character strings to produce text output. There is no explicit linguistic knowledge in these systems, and there is no modularization of text generation tasks. Code belonging to a deeper level task, such as selecting content, also deals with a surface level task such as punctuation.
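The phrase-level edit counting used in the post-edit evaluation above can be approximated at the word level with a standard diff. The sketch below uses Python's difflib on the aligned phrase pair quoted in the example; note that, unlike the paper's phrase-constituent scheme, it counts every word individually, so the removal of 'by midnight' registers as two word deletions rather than one.

```python
import difflib

def count_edits(pre_phrase, post_phrase):
    """Count word-level additions, deletions and replacements needed to
    turn a pre-edit phrase into its aligned post-edit phrase."""
    pre, post = pre_phrase.split(), post_phrase.split()
    adds = dels = reps = 0
    sm = difflib.SequenceMatcher(a=pre, b=post)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "insert":        # words added by the forecaster
            adds += j2 - j1
        elif op == "delete":      # words removed by the forecaster
            dels += i2 - i1
        elif op == "replace":     # words substituted
            reps += max(i2 - i1, j2 - j1)
    return adds, dels, reps

# the aligned phrase pair quoted in the example above
pre = "then gradually increasing 34-39 by midnight"
post = "gradually increasing SSW 34-39"
print(count_edits(pre, post))  # → (1, 3, 0)
```

Summing such counts over all aligned phrase pairs in a day's forecasts gives the kind of aggregate edit statistics reported in [11].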
Each separate output text is produced as a special case. Although this approach was simple to implement for the initial version, it was not easy to extend it to generate the full range of output texts. In particular, writing code for arranging words in the grammatical order and adding punctuation marks was a nightmare. Subsequent versions of SUMTIME-MOUSAM followed the more modularised approach described in the previous section. This allowed us to focus on each of the tasks of text generation independently. It also facilitated our knowledge acquisition activities.

### 3.2 Lessons from the installation phase

The main work at this phase was to tune the I/O interface to the Weathernews database. We should have anticipated this mismatch of I/O interfaces and taken corrective measures during the development phase. The solution we finally followed was to decouple I/O from the rest of the processing, so that any future changes will affect only the I/O interface classes, as long as the new classes pass the required data to the rest of the system. Although this is a routine software engineering issue, it is quite important for the success of our system.

### 3.3 Lessons from the maintenance phase

During the past year of its operation at Weathernews, forecasters raised a number of concerns about our system output, and we have carried out multiple maintenance activities on the system to fulfill these change requests. Through this work, we have made interesting observations about the maintainability of NLG technology. Maintenance of a text generator, as for any other software, is an important phase of its life cycle. The FOG system discussed in section 2 had to deal with a number of sub-language issues during its maintenance phase [13]. The developers of MULTIMETEO designed a knowledge administration station to allow forecasters to carry out routine maintenance operations on their system [14].
An important piece of feedback from the forecasters using our system is that we should focus our future work more on the simple fields we already do well, rather than on complex fields that we do only moderately well. In the context of building successful AI applications, Rob Milne [15] made a similar observation about focusing on 'narrow, vertical application areas' and recommended delivering 'complete solutions to users'.

1. **Database Issues:** As discussed earlier in sub-section 3.2, many changes were made to the system's database interface. In fact, these changes took more time than all the other changes carried out on our system. Because text generation systems often function as embedded components in a larger system, interfaces to other components need to be carefully planned.

2. **User Interface Issues:** One of the main features of SUMTIME-MOUSAM has been that forecasters (the users of the system) can control the output text by changing data in an external file (shown as Control data in Figure 1). After its installation, forecasters experimented with control data and generated three different versions of control data files, called 'fine', 'default' and 'coarse'. As their names indicate, these three files produce texts with different levels of detail: 'fine' produces a very detailed forecast, while 'default' produces a forecast with fewer details, and so on. Of these files, only 'default' gets used all the time, understandably, because it provides a balance between the other two. However, what is not clear is why forecasters do not use the control data more dynamically to control individual forecasts. It was hard to get feedback on this from the forecasters. Although control data was a feature requested by the forecasters, in practice they are not using it. Our current graphical user interface does not allow forecasters to edit control data while editing forecast text; the control data needs to be edited offline.
A better user interface will perhaps make forecasters use this feature more often.

3. **Additional Fields in the Weather Report:** As discussed in section 2, weather forecasts are organised into a number of fields, each describing data related to a subsystem of weather. We have been asked to add two new fields, viz. 'Swell Period' and 'Wind Wave Period' (shown in Table 2). These two fields partially describe the information contained in the field 'Wave period' (also shown in Table 2). This change is a typical maintenance request, asking for the system to be extended to produce additional text after its design and development. In our case, this extension did not force many changes to the system. This is because we already have all the stages of processing implemented in independent modules. We could simply call these modules to work on the required data without writing any new code that performs generation tasks such as content selection and lexical choice. We would like to contrast this with the amount of new code that would have been needed in the template-based systems we talked about earlier in sub-section 3.1. For every new output, those systems need additional code, written entirely from scratch, performing tasks such as content selection and lexical choice. NLG techniques enjoy the advantage that subsequent extensions build on the infrastructure developed initially.

4. **Improved NLG:** We have made many changes to our system based on the post-edits forecasters performed on our system output.

a. Our study of post-edits revealed that the most common editing operation performed by the forecasters is to delete a word or phrase generated by our system. In other words, our rules for ellipsis require modification. In our system all the ellipsis rules are processed in a module
called the micro-planner (see Figure 1), which contains rules such as:

- Suppress the direction phrase if it is the same as the previous direction phrase
- Suppress the speed phrase if it is the same as the previous speed phrase
- Suppress the entire wind phrase if both the speed phrase and the direction phrase are suppressed

Making changes to our system's ellipsis behaviour involves changing one of these rules, located in a single module. Template-based systems, on the other hand, would require changes to be made at multiple locations, one for each of the different outputs. From the maintenance perspective, once again we find NLG technology more beneficial than template technology.

b. One outstanding change request from forecasters requires our system to restrict the number of words contained in a statement. Human readers prefer a concise message carrying important weather information rather than a verbose statement. Such size constraints on text generation output have also been reported in [16]. Size constraints are hard to fulfil in the simple pipeline architecture we follow. Reiter [16] suggests that allowing multiple solutions to be passed down the pipeline, or adding an additional revision module at the end, are two alternative solutions to this problem. We plan to follow the multiple-solution approach in our system. Multiple solutions can be useful in another context as well. When we generate text from weather data, there are multiple ways in which we can describe the data. The current version of our system produces only one of these descriptions as its output. While post-editing our system output as described in section 2, we have noted some forecasters making a lot more edits than others because of differences in their individual preferences. If our system generated multiple descriptions, then forecasters could choose the required descriptions.

c.
Another outstanding change request from Weathernews is about extending the current system to produce multi-lingual output, particularly in Norwegian. Because of our approach to text generation, our processing is language independent and is therefore capable of generating multi-lingual output.

d. Certain input data sets contain patterns of special interest, and their description should differ from normal descriptions. For example, if the wind speed rises and falls many times within a forecast period, it should be described with the phrase 'variable wind' rather than a sequence of phrases describing wind 'rising' and 'falling'. We need additional data analysis techniques for detecting these patterns.

e. A related issue comes from an observation we made while studying humans writing forecast texts from weather data. Human forecasters analyse input weather data with a view to building what we call an 'overview' of the weather. Forecasters use this overview to select content for individual fields. Because all the fields derive content consistent with the overview, the forecast as a whole conveys a consistent view of the weather. Overview is an elusive concept and we are exploring ways to incorporate it into our processing.

4 CONCLUSION

Text generators, like any other software, operate in an ever-changing environment and therefore must be designed for maintainability. From our experience of using SUMTIME-MOUSAM for weather forecast generation we conclude that NLG technology (a) allows forecasters to control the text without writing code and (b) offers a robust design that allows easier maintenance. We also believe that the final success of the system depends upon its suitability to various stakeholders such as developers and users.

ACKNOWLEDGEMENTS

Many thanks to all the staff at Weathernews (UK) Ltd. for their active co-operation throughout this work; this work would not be possible without them!
This project is supported by the UK Engineering and Physical Sciences Research Council (EPSRC), under grant GR/M76881.
INTERACTIVE SPATIAL WEB-APPLICATIONS AS NEW MEANS OF SUPPORT FOR URBAN DECISION-MAKING PROCESSES

E. Gebetsroither-Geringer 1, R. Stollnberger 1, J. Peters-Anders 1

1 AIT, Austrian Institute of Technology, Giefinggasse 4, 1210 Vienna, Austria - (ernst.gebetsroither, romana.stollnberger, jan.peters-anders)@ait.ac.at

KEY WORDS: Geo-visualization, Interactive Web-Applications, Spatial Decision Support Systems, R/Shiny Framework

ABSTRACT:

Citizen participation and co-creation (the joint development of solutions by professionals and citizens) initiatives for urban planning processes have increased significantly during the last few years. This development has been strongly supported by the evolution of Information and Communication Technologies (ICT): it has never been easier to get information through your mobile devices wherever and whenever you want it. Public open spatial data is available in many cities around the world, and web-based applications use this data to provide tools and services for many different topics, such as traffic information or the communication of health-related information (e.g. ozone, particulate matter or pollen loads). This paper presents typical problems of such web-applications in terms of application design, implementation and usability evaluation, by describing three case study applications which have been developed recently. It tries to answer the question: how can this kind of geo-service be developed and used by scientists to enable public participation in data gathering and urban planning processes? All three applications have the common goal of providing interactive geo-visualization and analysis features which are tailored to support users in their urban planning processes. The innovation of these applications lies in their flexibility regarding the topics they can tackle and their capability to perform interactive analyses triggered by the user. The applications have been built with a strong focus on exploring the available data (e.g.
Open Government Data, OGD). Two of the applications have been implemented using the R-Shiny framework; the third application, the smarticipate platform, has been developed using ReactJS for the front-end, running a MongoDB in the background which is fed via a micro-service framework. In the latter application, the users can configure topics, i.e. the platform enables the user to create new services for different planning issues.

1. INTRODUCTION

1.1 Background

During the last few years the active involvement of citizens in urban planning processes has increased significantly. The idea behind urban co-creation is to create a connection between professionals (urban planners, decision-makers) and citizens, and to intervene, participate and engage with each other to transform the urban environment, regardless of the social or professional background of the participants (Dörk and Monteyne, 2011). Such participation was strongly facilitated through the evolution of ICT. Mueller et al. (2018) present Citizen Design Science as a new strategy for cities to integrate the citizens' ideas and wishes into the urban planning process. The idea is to combine crowdsourcing methods through modern ICT with active design tools. The development of web-based IT tools enables citizens to easily take part in participatory city planning processes (Khan et al., 2014). Most participatory planning processes happen at a very local level, e.g. on a neighbourhood or district level; thus, information on this level is necessary to engage with the people. Because of this, new participatory tools often need to incorporate spatial data in high resolution and decision-making features tailored to the users.
The combination of geographic information system (GIS) functionalities, such as spatial data management and cartographic display, with web-based Spatial Decision Support Systems (SDSS) providing flexible user interfaces and analytical modelling capabilities enables the participation of users in planning processes (Sugumaran and Sugumaran, 2007). GIS alone cannot solve the problems of planning processes, but integrating additional ICT tools seems to offer the digital infrastructure for developing decision-making tools (Voss et al., 2004). Additionally, the availability of spatial Open Government Data (OGD) has increased significantly during the last years. However, these data are most often used only by experts, e.g. within research projects, because other stakeholders often do not know how to use these datasets.

1.2 Problems

In the past, spatial OGD has often been used to create services to support individual spatial planning decisions. Most often these services (tools) have been developed as static web services, using HTML5/JavaScript programming and Web Map Servers to visualize different pre-processed results. The development of more appropriate services, which enable users to do their own analyses, is not very common. One problem in engaging with local stakeholders is that static visualizations of pre-processed results are often insufficient, as the stakeholders want more specific results, i.e. results tailored to their needs. Thus, there is a clear need to create this kind of service, and ideally the created services should be self-explanatory, "all-rounder" programs, i.e. programs which are easy to use and understand. An additional problem is that these services often lack a good business model, and thus the development is often done in funded research projects with time and budget constraints, which allow neither for intensive development nor for improvement cycles throughout the development process.
Another crucial topic all applications, tools or services must deal with is user engagement. Here it is important to provide the content in a way the users can understand and relate to their environment, meaning that engagement is much easier if it really addresses the stakeholders' interests. Design guidelines for web-applications supporting user engagement have existed for almost 20 years: Wroblewski and Rantanen presented twelve specific design guidelines as early as 2001 (Wroblewski and Rantanen, 2001), and Fowler and Stanwick followed a few years later (Fowler and Stanwick, 2004). Web technology innovations in the last few years allow software designers, with the help of feature-rich frameworks such as jQuery, AngularJS, Bootstrap, React and Node.js, to quickly develop responsive, mobile-friendly applications (Shahzad, 2017). Several challenges in developing web-applications must be tackled, e.g. delivering simplicity and intuitiveness, as internet users are not very patient and might move on otherwise. Another challenge is the look and feel of an app, to ensure a flawless user experience; this goes hand in hand with another criterion for success, the performance of the application and its scalability (Prat, 2018). The following paper presents an introduction to web-based applications, including use cases, target groups and software design issues as well as usability evaluation. To this end, we depict three case studies dealing with the problems described above and approaches for solutions, with a special focus on spatial applications.

2. WEB-BASED APPLICATIONS TO SUPPORT URBAN DECISION MAKING

2.1 Use cases

There are various types of goals web-based applications are addressing. Some aim to inform citizens about a particular problem, raise awareness or visualize different topics.
Other applications aim at identifying current problems in local communities, providing citizens an opportunity to discuss daily issues as well as providing a platform to voice their opinion and directly get in touch with responsible city stakeholders (Desouza and Bhagwatwar, 2012). In summary, the following main types can be distinguished. Applications that:

- inform, create awareness, increase transparency. This can be e.g. displaying information about the traffic situation or on health issues (ozone, fine dust or pollen loads), via e.g. maps showing pre-calculated results. Usually, these applications contain top-down created information, aiming at providing content related to the personal environment of the users, but often they lack the specific (spatial) data the users would need to take informed decisions.
- identify and report current problems. Here, problems like damaged infrastructure (e.g. pot-holes) or crammed waste bins are reported, most often to the municipal administration. Sometimes these applications are more elaborate, enabling a discussion about the problem and its possible solutions within a forum. This represents a bottom-up process, and user engagement is rather easy as the problems are linked to the personal environment of the users.
- support mid to long term planning processes. In order to enable interactive analyses supporting mid to long term decision-making (planning) processes, technologies which facilitate participation through interactive analysis and mapping tools are useful in two ways: on the one hand, urban planners and city authorities better understand the challenges citizens are facing in their neighbourhoods. On the other hand, the affected populations within the investigated urban area can, via the use of interactive elements, try out the effects of various scenarios for themselves and thus are motivated to change inappropriate circumstances as they wish.
The diversity of topics that can be addressed through web-based applications is quite high. Possible topics are:

- public transportation (real-time information on public transport),
- public services (report problems/infrastructure issues, dangerous road situations, pollution in the streets, etc.; example applications: "Sag's Wien" in Vienna (Stadt Wien, 2018) or Improve My City (URENIO Research, 2018)),
- public safety (e.g. create awareness about crime incidents with a map visualization of incidents),
- spatial planning/citizen participation or urban energy planning (e.g. the digital thematic city map "Wiener Umweltgut" (Vienna Environmental Goods) (Wiener Umweltschutzabteilung, 2018) provides access to environmentally relevant information, like e.g. the solar potential of roof surfaces, the waste heat potential of production sites or the position of protected landscape areas).

Designing and planning urban areas is a complex task, especially when addressing the variety of challenges such as energy and climate topics, transport, public safety and high crime rates, social inequality, population growth in cities and health issues. Only recently have more specific tools and services for designing and planning urban areas emerged ("DeCodingSpaces Toolbox," 2018; "UrbanFootprint," 2018; "UrbanSim Inc," 2018; Gebetsroither-Geringer and Loibl, 2014).

2.2 Target users and how to mobilize them

One of the greatest challenges concerning user engagement is encouraging citizens to get involved. The complexity of spatial planning problems requires the competency of experts in different fields, like urban planners and mobility/climate experts, as well as local stakeholders (city authorities, citizens), for defining appropriate development scenarios. Bringing relevant sources of information together must involve supportive policies and governance platforms which promote the development of applications that use open data to trigger change at the local level.
Governments can promote citizen empowerment and allow citizens to frame solutions for local problems by making the right set of tools available to them. By providing an assortment of incentives, e.g. prizes and competitions, the share of participants can be increased even further (Desouza and Bhagwatwar, 2012). Another possible way to mobilize (motivate) users is the gamification approach. The idea is to use game design elements in non-game contexts (Deterding et al., 2011). Gamification is an approach, validated in many settings, to increase motivation and willingness to participate in participative processes. For example, citizens can collect engagement points whenever their ideas for a new urban planning area get upvoted by fellow citizens (Van Ransbeek, 2016). Gamification has already been successfully used in various application areas to promote participation, for example in the context of civil courage (Coronado Escobar and Vasquez Urriago, 2014), civic participation (Thiel and Lehner, 2015), e-learning (Barata et al., 2013) and e-government (Al-Yafi and El-Masri, 2016). Also in the field of mobility, the application of gamification has produced positive results, e.g. in relation to the promotion of sustainable forms of mobility (Kazhamiakin et al., 2015).

2.3 Usability evaluation

Usability is considered to be one of the most important quality factors for either success or failure of web-applications (Fernandez et al., 2011; Myung and Tossy, 2015; Cayola and Macías, 2018). Nielsen (2012) clearly states that if a website is difficult to use or the information is hard to read, people leave, because "there's no such thing as a user reading a website manual".
The term usability was derived from the term ‘user friendly’ and is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (international standard ISO 9241, covering ergonomics of human-computer interaction) (Matera et al., 2006). Due to the complexity and variety of web-based applications that have evolved during the last years, many methods, techniques, and tools with which to address and evaluate web usability issues have been developed. In order to ensure user satisfaction, an evaluation should be conducted to assess the application’s functionality, to verify the effect of its interface on the user, and to identify any specific problems with the application. Depending on the phase in which evaluation is performed, there are two possibilities: formative evaluation (which takes place during design) or summative evaluation (performed after the product has been developed). The most common methods are user testing, where real users are studied; usability inspection, where specialists evaluate the interface; and web usage analysis, where the users’ behaviour is studied via access statistics (e.g. Google Analytics). User testing is quite resource-intensive, as it is a very time-consuming process and requires dedicated personnel. Therefore, usability inspection methods emerged, in which developers and usability experts predict usability problems that would otherwise be detected through user testing. In a heuristic evaluation, for example, experts analyse the application against a list of recognized usability principles – the heuristics. Cognitive walkthroughs, on the other hand, simulate the steps users will take in specific situations of use and why: an expert evaluator or a group of evaluators walks through a series of tasks to be performed on the interface and discusses usability issues in order to understand whether users would know what to do (Matera et al., 2006).

3.
Interactive Geo-visualization and Analyses

3.1 Application design

One of the key questions regarding the design of interactive web services is: for whom are they developed, with what objective, and for which purpose? For services (tools) aiming to be used by different stakeholders (urban planners, GIS experts, citizens, etc.) with different knowledge, this implies a huge amount of effort and direct interaction with the users during the development phase. Professional software development companies like Google, Microsoft or Apple devote a large share of their effort to usability testing and beta-testing of applications. Recently, a trend has emerged that more and more web services are being developed by data scientists instead of software development experts, using development frameworks like R/Shiny. A clear focus of this approach is to reuse pre-developed software parts, either of Shiny or R, as much as possible, so that the developers can concentrate on their data analyses, i.e. the tailored information for the user, which can be calculated interactively for the special needs of the users.

3.2 Implementation approach using the R/Shiny Framework

Shiny (R-Studio, 2018) is an R package (a so-called library) for building interactive web-applications, with a rapidly increasing user community providing code examples via a gallery of showcases. This has the advantage that applications can be built with relatively low effort. The applications can be embedded into R-Markdown documents, and Shiny applications can use CSS themes, HTML widgets and pure JavaScript. What makes Shiny so powerful is that it is part of R (R Foundation, 2018), and within the last decade R has become increasingly recognized for geo-computational tasks (Lovelace et al., 2018).
Another important aspect is that the R/Shiny combination can be used to build web-applications that run either on developer-owned servers or on a dedicated (cloud) server operated by R-Studio, the developer of Shiny.

3.3 Implementation using a Configuration Approach via a Middleware

In comparison, the smarticipate platform (1), described in section 4.3, has been developed using ReactJS for the frontend, running a MongoDB database in the background which is fed via a micro-service framework that handles the user management, data storage and communication channels of the platform. In contrast to the implementation approach described above, the smarticipate platform is an already developed generic platform which enables users to create topics and new services for participatory processes and different planning questions.

3.4 Usability improvement and evaluation

As already mentioned above, an application (i.e. a piece of software) developed by data scientists cannot be developed with the same economic effort as, e.g., business software or computer games, since the budget for programming is usually much lower than in commercial developments. Nevertheless, the usability should follow the standard development rules mentioned in section 1.2 and, e.g., use tooltips to explain the use of certain features. Online manuals, including annotated screenshots and video tutorials, can easily be created using tools like HelpNDoc (IBE Software, 2018).
For the smarticipate case, important usability improvements have been introduced by optimizing the platform for desktop and mobile use, as well as by programming the frontend in HTML5, which makes it platform-independent, since it runs in any browser on any operating system. R/Shiny does not support mobile use out of the box, i.e. the development of responsive UIs that adapt to the device in use.

3.5 Data Quality

Regarding the used datasets, the availability of OGD has increased significantly in the last few years. This has, however, also led to qualitative differences, such as data sets being incomplete, not up to date, or incorrect. To work with such data, it is usually necessary to check and validate them, or have them checked and validated by experts, before using them in web-applications. Data validation is also a crucial point when users can enter their own data (see also section 4.2.3).

3.6 Performance

According to (Sugumaran and Sugumaran, 2007), performance is one of the biggest limiting factors in the development of web-based SDSS. These limitations arise mostly from the use of large raster and vector data sets and the need to move large spatial objects between server and client via network connections. Regarding performance, the programming language R also leaves room for improvement (Martin, 2016), especially if no effort is spent on overcoming R’s performance bottlenecks; however, several techniques addressing these issues already exist (Lim and Tjhi, 2015). One of them is to connect external tools such as GDAL (GDAL/OGR contributors, 2018) or QGIS to R in order to increase the performance of R scripts and Shiny web-applications.

---
1 https://www.smarticipate.eu/platform/, last accessed 14.05.2018
Nevertheless, it remains an important issue of many, if not all, web-applications that users will not spend a long time waiting for results, especially if they do not get any feedback (progress bars, explanation dialogs, etc.) during the processing on the server.

4. Case Study Applications

The following case study applications have been developed within different projects and use the implementation approaches described above. Their main purpose is to show – as a proof of concept – how this kind of geo-service can be developed and used by data scientists to enable public participation in data gathering and urban planning processes.

4.1 PV-Potential Calculator

The PV-Potential Calculator is an interactive web-based service enabling geo-visualization and interactive analyses.

4.1.1 Background and Motivation

The PV-Potential Calculator has been developed within the Syn[En]ergy (2) project. Syn[En]ergy investigated, via an inter- and transdisciplinary approach, synergy and conflict potentials of photovoltaics in open urban areas with other use demands for these areas. The project developed a typology and practical solutions for selected areas with regard to requirements covering economic, urban planning and design, legal as well as social aspects. These requirements were then evaluated by stakeholders from enterprises, the city administration and citizens. Regarding the history of the tool, it is important to note that it had never been planned within the project in the first place; rather, the aim was to develop an application to demonstrate the photovoltaic potential of different urban open spaces like parking lots, playgrounds or parks around Vienna, based on the available open spatial data from different sources. Problems arose during the detailed data analyses, where the project was facing poor data quality regarding the exact geo-reference of the required data on urban open spaces.
Compared to standard solar cadastre maps, this application is thus more innovative, as it enables users to calculate potentials for their own areas of interest.

4.1.2 Application Design and Implementation

The UI – a standard dashboard layout – was created using the Shiny package straight from R. With this package, a simple but quite intuitive interface – following standard web design rules – can be created without huge development effort, and therefore more effort can be put into the functionality of the application, mainly using R as the programming framework. Figure 1 shows a screenshot of the PV-Potential Calculator. Four simple steps have been designed to enable the user to select an area for which the PV potential should be calculated. Figure 2 shows the result after the calculation for the area. Besides the calculation of specific values for a selected area, the economic potential is roughly evaluated.

![Figure 1. PV-Potential Calculator-Edit © AIT 2018](image1)

![Figure 2. PV-Potential Calculator-Results © AIT 2018](image2)

4.1.3 Difficulties and possible solutions

Figure 3 shows a message which is triggered by an incorrect usage, meaning that the selected area contains buildings, which would lead to a wrong analysis result. Even though this main incorrect usage (selecting areas with buildings) has been inhibited, it is clear that not all similar problems can be prevented in this kind of service. Thus, an introduction on how to use the service correctly is important. Annotated screenshots and short tutorial videos have been developed, since extensive documentation or long user manuals would not work in this context. Regarding the performance of the application, the most demanding step – the calculation of the PV-potential at a resolution of 1 m × 1 m for the entire area of Vienna – was already done within the Syn[En]ergy project.

---
2 http://synenergy.boku.ac.at/, last accessed 14.05.2018
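The paper does not give the aggregation formula, but the per-area summation over the pre-computed 1 m × 1 m raster and the rough economic evaluation could look like the following sketch. All function names, efficiency, cost, and price figures here are illustrative assumptions, not the project's actual parameters.

```python
# Hypothetical sketch: given pre-computed 1 m x 1 m irradiation cells
# (kWh/m^2/yr) inside a user-selected area, sum them and apply an
# assumed module efficiency and a simple payback estimate.

def pv_potential_kwh(cells, efficiency=0.17, performance_ratio=0.8):
    """Annual PV yield for the selected cells (each cell = 1 m^2)."""
    irradiation = sum(cells)  # kWh reaching the whole area per year
    return irradiation * efficiency * performance_ratio

def simple_payback_years(yield_kwh, area_m2, cost_per_m2=250.0,
                         price_per_kwh=0.18):
    """Rough economic evaluation: investment divided by yearly revenue."""
    investment = area_m2 * cost_per_m2
    revenue = yield_kwh * price_per_kwh
    return investment / revenue

# A 200 m^2 parking lot with ~1100 kWh/m^2/yr (a typical Vienna value):
cells = [1100.0] * 200
yield_kwh = pv_potential_kwh(cells)
years = simple_payback_years(yield_kwh, area_m2=len(cells))
```

In an interactive service, only the summation over the selected cells has to run per request, which keeps response times in the range of seconds mentioned below.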
For a small selected area, calculation times of a few seconds occur, and a progress bar provides feedback to the user during this process. If a huge area were selected, it would take very long to get the results, but in the context of this application it is very unlikely that such a selection represents a serious PV-installation scenario.

4.2 Waste Heat Explorer

This web-based service is an example of using interactive geo-visualization for data gathering and analyses.

4.2.1 Background and Motivation

The “Waste Heat Explorer” is part of the “Memphis” project, which aims at creating a methodology to evaluate and map the potential of waste heat from industry, the service sector or sewage water by using internationally available open data. The project addresses this issue through the analysis of low-grade and spatially distributed heat potentials. The Memphis project is still ongoing, but a prototype of the web application “Waste Heat Explorer” is already available. The motivation for its development was (i) to provide interested stakeholders with a possibility to analyse calculated waste heat potentials from industry on a map, (ii) to provide a web-based opportunity for users to add new possible waste heat sources (data gathering), and (iii) to evaluate the already calculated potentials with the help of local experts or members of the industry or service sector.

4.2.2 Application design and Implementation

Figure 4 shows a screenshot of the designed web application and depicts, in comparison to Figure 1, the similarity of the UI, which results from development with the same R/Shiny framework. Nevertheless, this web application provides a very different feature to gather data on possible sources of waste heat: in five very simple steps, the user can add a new location with company name, industrial sector affiliation, number of employees and address, together with a privacy status assignment.
The application uses the developed algorithm in the background to calculate the potential from the input information and shows the results afterwards (Figure 5). The users then have the opportunity to overwrite the pre-calculated results. Beyond the input of new sources, the application has further features: it records whenever a user overrules the pre-calculated results, and this information can be used to evaluate the current calculation method developed over several projects.

4.2.3 Difficulties and possible solutions

Smaller difficulties exist, such as verifying the address or making sure that all necessary content is filled in before the calculation starts, so that the calculation does not throw an error. These difficulties can be solved with rather small effort. A main problem all such user-driven data gathering applications have in common is how to validate the data entered by the users. Someone could, just for fun, indicate a source with wrong information, leading e.g. to a significant waste heat potential at a location where none exists. Solutions for this are not easy. Authorisation (e.g. with a login and personal identification) is a possible means to address this, but it would most probably decrease the willingness of people to participate in the data gathering. The “Waste Heat Explorer” can detect outliers, which can afterwards be verified further. Correction by the general public, as in Wikipedia, could be used as another way to find this kind of “fraud”. Serious games used for data collection can implement such control mechanisms rather easily, e.g. by awarding points to players who verify the data input of other players. However, the Waste Heat Explorer does not include these methods, as it is assumed that no strong incentives exist to enter wrong information. Regarding performance, this application is on the safe side, but the UI also lacks usability testing and should not be used on small devices.
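One simple way to flag suspicious user entries for later expert review is a robust z-score based on the median absolute deviation. This is an illustrative sketch, not the Waste Heat Explorer's actual detection method; all names and thresholds are assumptions.

```python
# Screen user-entered waste-heat values (e.g. kW) for outliers using a
# robust z-score: entries far from the median, measured in units of the
# median absolute deviation (MAD), are flagged for manual verification.

def flag_outliers(values, threshold=3.5):
    """Return indices of entries whose robust z-score exceeds threshold."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    devs = sorted(abs(v - median) for v in values)
    mad = devs[n // 2] if n % 2 else (devs[n // 2 - 1] + devs[n // 2]) / 2
    if mad == 0:  # all entries (nearly) identical: nothing to flag
        return []
    # 0.6745 scales the MAD to be comparable to a standard deviation
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - median) / mad) > threshold]

# A single implausibly large potential stands out against typical entries:
reported = [120.0, 95.0, 110.0, 130.0, 5000.0, 105.0]
suspicious = flag_outliers(reported)  # -> [4]
```

The flagged indices would then be routed to local experts rather than silently discarded, in line with the evaluation workflow described above.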
4.3 smarticipate

4.3.1 Background and Motivation

The aim of the smarticipate project is to develop a web-platform which can be used as a participatory citizen dialogue system, i.e. it should provide a means of transforming OGD into new forms of transparency during planning processes within a city. The platform should be flexible and generic enough to provide the basis for all kinds of different topics to be tackled within a city. In this way, citizens should be enabled to share their ideas in the decision-making process. Consequently, they get full access to the public open data of their cities and can give feedback on neighbourhood-related and citywide ideas. The platform allows governments, NGOs, businesses and citizens alike to configure their own topics. As a result, citizens are empowered to play an active role in the public domain.

4.3.2 Application design and Implementation

smarticipate is designed as a generic web-based application which lets the administrator of the platform configure so-called Topics. These Topics can be anything from bike rack planning questions, to finding places for urban gardening, to letting citizens decide on future uses of buildings in a refurbishment area. The Topics are configured in the platform’s back-end. In addition to the usual configuration features of a Topic (name, description), smarticipate provides unique functions: (i) the definition of (generic) objects to be placed on the Topic’s map (here: tree types; Figure 6 depicts a portion of the back-end’s Topic configuration interface); (ii) the upload of e-mail addresses of users who want to or shall be contacted/invited regarding the specific Topic in question (Figure 7, button on the left-hand side); (iii) the upload of so-called rules used to give direct feedback on locations of the (above-configured) placeable objects when clicking on the map; the rules are used for querying the above-mentioned OGD datasets
(Figure 7, button in the middle); (iv) additionally, an area of interest (AOI) for the Topic in question (e.g. a district or a neighbourhood within a city) and background maps (depicting the current situation of a Topic) can be configured (Figure 7, button on the left-hand side and at the bottom of the page). These background maps are loaded directly from the city’s geo-server and could, e.g., depict the current land use, legal restrictions or the locations of bus stations, lamp poles, etc., within the AOI. The links to these maps can easily be configured via Web Map Service (WMS) links to the city’s OGD server. The configured Topic is published on smarticipate’s front-end main page and can be opened by the citizens in either a desktop or a mobile browser. The configured objects can be chosen on the right-hand side of the interface (Figure 8) and can then be placed on the map as a proposal. When doing so, the system triggers a rule query against the underlying OGD dataset(s), which then leads to feedback telling the user whether placing the object at this location is possible and, if not, why. It is important to note that even if the feedback indicates that placing an object is not possible, the user can still propose the location for further discussion with other citizens or the city administration, in order to discuss a change in the legal or planning prerequisites for this neighbourhood. Another feature of the platform is its commenting and voting function: users can comment on other users’ proposals and vote for them in the application bar on the right-hand side of the interface, in this way fostering an active discussion on neighbourhood issues. In addition to the features described above, smarticipate also provides the possibility to depict 3D scenes of objects covered by the Topic (where CityGML (3), a data format for storing city-related data including 3D information, is available).
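The rule-query idea behind this feedback can be illustrated with a minimal sketch. The rule type, the dataset, the clearance value, and all names below are hypothetical, not smarticipate's actual rule format.

```python
# Sketch of rule-based placement feedback: each configured rule checks a
# proposed location against an OGD layer and returns human-readable text.

from math import hypot

def min_distance_rule(layer, min_dist, message):
    """Build a rule: the proposal must keep min_dist metres from every
    point in the OGD layer (points as (x, y) in a metric CRS)."""
    def rule(x, y):
        d = min(hypot(x - px, y - py) for px, py in layer)
        return (True, "") if d >= min_dist else (False, message % d)
    return rule

# Invented example layer: gas pipe access points from a city OGD dataset.
gas_pipes = [(10.0, 10.0), (50.0, 80.0)]
rules = [min_distance_rule(
    gas_pipes, 5.0,
    "Too close to a gas pipe (%.1f m); trees need 5 m clearance.")]

def feedback(x, y):
    """Run all rules for a clicked location and collect the complaints."""
    problems = [msg for rule in rules for ok, msg in [rule(x, y)] if not ok]
    return "Placement possible." if not problems else " ".join(problems)
```

Even when a rule fails, the platform described above still lets the citizen submit the proposal for discussion; the rule result is feedback, not a hard veto.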
Where such 3D data are available, it is also possible to discuss design issues of buildings, trees or any other kind of physical object in the AOI of the Topic (Figure 9).

4.3.3 Difficulties and possible solutions

Since smarticipate’s unique feature is the integration of OGD with feedback functionalities based on defined rules, it is always necessary to get domain experts’ advice from within the city administration. These experts need to provide the background information necessary to describe the rules for the discussion of a Topic. During the development phase of the platform, this proved to be an important yet quite complex task: planners and civil servants needed to understand the idea behind the feedback system in order to provide the necessary information. Often the rule information (coming in the form of, e.g., tabular matrices) was not sufficient to convey the complexity of the Topic to the citizens. Here, a lot of effort is needed to translate, e.g., legal planning rules into digestible feedback for the citizens. A possible solution to this problem has also been developed within the project: a methodology called smartathon. These gatherings of citizens with their administration, which were attended by around 100 people during the development phase of the platform, have been used to discuss the platform’s features and usability aspects. In the future, smartathons will also be used to discuss planning topics in a physical meeting, which can then be translated into rules for the system to be used by a broader public within the smarticipate platform.

---
3 https://www.citygml.org/, last accessed: 15.05.2018
---
This contribution has been peer-reviewed. The double-blind peer-review was conducted on the basis of the full paper. https://doi.org/10.5194/isprs-annals-IV-4-W7-59-2018 | © Authors 2018. CC BY 4.0 License.

5.
DISCUSSION

In recent years, more and more spatial web-applications have emerged that allow the user not only to look at static, pre-calculated results, but also to perform interactive analyses that deliver exactly the results that are of interest to the user. The presented case studies thus enhance the standard approach to geo-visualization. The reflection on the three different applications shows some of the main challenges in the development of interactive spatial web-applications. For example, the advantage of using the R/Shiny framework is the possibility to develop applications that are easy to understand and relatively easy to build with low coding effort. The interactive usage fosters urban co-creation, and citizens – as local experts – can easily develop different scenarios and learn from the (simulated) effects on the urban environment. Furthermore, because the results can be tailored more closely to the user’s needs – e.g., results for local planning processes – public engagement is easier to achieve. The two presented R/Shiny applications have in common that – compared to professional software or games – the development effort is low and user-experience testing is not very elaborate. Additionally, due to the use of predefined frameworks such as R/Shiny, drawbacks regarding performance and design have to be accepted. Nevertheless, the use of simple online help systems (tooltips, annotated screenshots, tutorial videos, etc.) can reduce these negative effects. On the other hand, the smarticipate portal’s front-end is programmed in ReactJS, a modern framework for programming device-aware web-sites that adapt easily to different screen sizes. The use of maps in the platform is greatly improved by this framework; still, even with this modern framework, the map depiction and the usage of markers, for example, are not optimal below the screen size of a tablet.
This again shows that geo-visualisation on mobile-phone screen sizes is a very challenging task which needs a lot of design considerations before the actual programming can take place.

6. CONCLUSION

This paper described how interactive spatial web-applications can be used to support local stakeholders and urban planners in spatial decision making by incorporating citizens’ ideas and OGD datasets. In the future, most likely more interactive spatial web-applications will be developed, as their potential to support participatory urban planning processes is very high – most of all because the development of these applications can now also be done by non-professional developers, like data scientists, who already have tools available to create useful applications for many different topics. Regarding the smarticipate platform, it can be expected that more and more generic platforms will emerge in the future which are created in a single big development effort but deliver generic configuration capabilities to reduce the complexity of programming even further. This will also allow lay people to create relevant topics for their neighbourhoods and enable them to participate more actively in the decision-making processes of their cities. In conclusion, it can be expected that participation will become more attractive for citizens, since the barriers to participating will be lowered by these applications, and with the next generations of younger, digitally more adept users, the acceptance of digital tools like the ones described here will further increase.

ACKNOWLEDGEMENTS

Syn[En]ergy acknowledges the funds in the frame of the 4th “Stadt der Zukunft” programme from the Austrian Ministry for Transport, Innovation and Technology, coordinated by the Austrian Research Promotion Agency (FFG), the national funding agency for industrial research and development in Austria.
Memphis acknowledges the funds of IEA DHC – the International Energy Agency Technology Cooperation Programme on District Heating and Cooling including Combined Heat and Power. The smarticipate project has received funding from the European Union’s Horizon 2020 research and innovation programme 2016–2019 under grant agreement No 693729.

REFERENCES
Thank you for the gracious welcome. Hello, everybody. Up in the cheap seats there, “Hi!” We’re going to talk about creating software, making stuff. And here are some typical makers of the earliest 20th century. They made stuff like this, but then in the late 20th century, we started to get a new class of workers and they made stuff like this that looks strange. That’s what it looked like back then. I actually was alive in those days. Now it looks more like this, but it hasn’t changed all that much. It’s still hard work; it’s just mental work rather than physical work. So what is the process? Well, we start off with a programmer. They get an idea, and then they start to work through that idea and very carefully write down all the steps of how to do it. “Under this condition, do this,” and so on, and then you have a program that can do something useful. And that’s traditional software development. But in recent years — and this goes back to the beginning of the computer industry but has really taken off in recent years — we get people like this, who’ve proposed a new way of making software. And this new way starts with a computer rather than with a human programmer, and it’s the computer that’s going to do most of the work. The computer is going to write the programs for us. How does it do that? Well, we still have a person with an idea on the side, but their idea is, “Here is a problem I want to solve, and the way I’m going to solve it is I’m going to feed the computer a bunch of examples, a bunch of data, and I’m going to tell the computer a little bit about how to process these examples, and then the computer is going to write the program for me from looking at those examples.” But normally, the computer doesn’t write the program in the same form that a human did. 
Instead we have different forms, and they look more like this: Sort of a black box that you can’t really understand very well, but at least I’ve shown the lid open a little bit, so that we can peek inside. This is a program. So the computer is writing a program for another computer to run. We also call it a model. It’s a model of the domain, a model of the examples that we see. Many of you are engineering students, so you are all familiar with this and some of you, way back in high school, you did this, you built these models. So you took a science class, you went out and made some measurements, and then, either using a ruler or using a calculator, you calculated the best fit to those data points — and that’s a model. Once we have the model, we can formulate it mathematically, and then we can throw away the data points because the model is a complete summary of everything we knew from the data. And what can we do with that? Well, if we have a question, “What would the value be for this point? I’ve never seen this point before,” then we can use a line or use a mathematical formula and we can get the answer. So, from some examples, we generalize to find the answer to an example we’ve never seen before. And that’s really all there is to machine learning, is coming up with a model; and the general model we had in mind was it’s going to be a line of some kind, gathering the data, fitting the data to say, “Here is the exact line that is the best fit for this data,” and then now we’ve got a model and now we can apply it to new cases. So all of machine learning is like that, and the complications are that the models are a lot more complicated than individual lines, the data is a lot more complicated than a couple of points on a graph, and the algorithms for how you compute what’s the best model, given the data, are also more complicated.
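The line-fitting example from the talk can be written down in a few lines of code: fit the least-squares line through some measurements, then use the model to predict a point the data never contained. The measurement values below are made up for illustration.

```python
# Fit y = a*x + b by ordinary least squares using the closed-form
# normal-equation solution for a single input variable.

def fit_line(points):
    """Return slope a and intercept b of the least-squares line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Noisy measurements roughly following y = 2x + 1:
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]
a, b = fit_line(data)
prediction = a * 10.0 + b  # generalize to an x we never measured
```

Once `a` and `b` are computed, the data points can be thrown away, exactly as the talk says: the two numbers are the model.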
Rather than having the programmer say how to do it step by step, we’re now telling the machine, ‘I can’t tell you how to do it; I may not even know how to do it.’” So, this is what the field is. What’s the promise for it? Well, there’s two things. First, it allows us to go fast. Instead of having the programmer sit down, and days and weeks and months go by developing the program, you throw some data at the machine and you can instantly get a response. So you can do things much faster than you could ever do before. And, secondly, as General Patton knew, we can now change how we approach problems. Patton said, “Don’t tell people how to do things. Tell them what to do and let them surprise you with the results.” So that’s what the shift is. Rather than having the programmer say how to do it step by step, we’re now telling the machine, “I can’t tell you how to do it; I may not even know how to do it.” So, say something like recognizing faces. I can recognize the faces of my friends and family, but I have no idea how I do it. It’s in my subconscious. I couldn’t, for the life of me, write down how to do it, but I do know how to train a computer to do that task by showing them examples. So we can do things that surprise us, things that we would never be able to do as a programmer, as well as just being able to move faster. So that’s the promise. Now I want to show you some examples of how this works. We showed you the graphing of the straight line, but let’s do something more complex. So I’m going to look at the task that we call word-sense disambiguation. And here’s a task: I give you a sentence like this and then I say, “What sense of [the word] bank is that? Is that bank one, the riverbank, or bank two, the money bank?” As a human, you can answer that pretty well. I want to teach a computer to do that, and the inputs that are going to be available are these dictionary definitions, exactly as they are, and a bunch of sample sentences that happen to contain the word. 
And from that, I want to build a program that I can now give a sentence it’s never seen before, and it will say “bank one” or “bank two.” Now, I think about how can I solve that problem. I say, “This seems pretty complicated. When I solve that problem, I’ve gotta know the dictionary definition of all these words. I’ve gotta know how the syntax of the English language works, with noun phrases and verb phrases and all that stuff, and I have to know some facts about the world. I have to know what kinds of things are money banks, and what kinds of things are riverbanks.” It seems like it would take me forever to do this, and, if I did, now I’ve only done a word-sense disambiguator for this word “bank.” And now if I want to do all the other ambiguous words in English, that looks like a lot of work; and in every other language as well. I’m giving up. I can’t do all that. I can’t write all that stuff down — but could I possibly train a computer to do all that? Let’s think about how we could do that. I can’t train the computer to know all these things about all the facts about what a bank is and how the English language works, so I’m going to come up with the simplest possible model I can think of, and see if that model happens to be good enough. So here’s the model: I take these dictionary definitions, I get out my scissors, and I cut them up into individual words and put each of those words into a bag. I do the same thing with the other definition. So now I have two bags of words, and here are the words that they contain. And, by the way, there is a technical name for this model. It’s technically called the “bag of words” model. 
And what this model says is, “Bag one contains all the stuff you need to know about sentences you could say with a riverbank, and bag two contains all the stuff you need to know about sentences you could say with a money bank.” And this model claims that the way an English sentence is constructed is not by somebody thinking in their head about all the meanings, and dictionary definitions, and the noun phrases and the verb phrases. The way we construct a sentence is we reach into the bag and we pull out words, and we put them next to each other until we’re tired of pulling out words. Now, that sounds like a terrible model of how English works, right? It doesn’t work like that. It’s much more complicated. A statistician, George Box, said, “Well, it’s not so bad. All models are wrong anyway, and some of them are useful, so don’t worry so much that that model is a terrible model. Ask, is it useful?” So is it? Let’s try. I’ve got these two models. These two bags are seeded only from the words I got from the dictionary definition. Now I’m going to take all those sentences. The sentences didn’t say which was bank one or bank two, but they’re still going to be useful to me. So even though the sentences didn’t have the right answer, I can still use them to improve my model. How do I do that? I take the first sentence and I look at each of the words, and some of the words don’t appear in either bag, and some of them appear in one bag. So the one in blue, the loans, occurs in bag number two, and some of them occur in both bags. The word “the” occurs pretty evenly across both bags. So I do that, and then I do a little bit of counting up and say, “Which bag is that sentence most likely to have come from?
What’s the probability that if I pick randomly from bag one, I would create that sentence, versus if I picked randomly from bag two, and I would create that sentence?” And the answer is it’s more likely to have come from bag two, because that’s the bag that contains the word “loans.” So now, I’ve made a prediction. In this case, the prediction happened to be right, but more importantly, I’m now ready to upgrade my model. I take all those words and I throw them into bag two. Now, bag two is a better model of what it means to talk about a money bank. I keep on doing that for thousands or millions of words, and pretty soon I’ve got big bags — but they’re more effective at this task of disambiguating words. So you start getting bags with counts like this of lots and lots of words, and you realize that everything I need to know about machine learning, I got on Sesame Street. All it is, is counting. We’re building this model by counting up how many times this word was relevant to sense one or sense two, and now I can apply it, and now I can take sentences I’ve never seen before and score them. Some of the times I’ll get the right answer. Occasionally, I’ll get the wrong answer — that’s okay. Let’s see how well I do. So here’s a graph. I’ve trained it on one million words of the sentences that have “bank” in it. There are four bars here because there are four slightly different algorithms for how you count and how many points you get for having the word or not, and so on. We get 80 to 85 percent correct. Now, in the old days, like ten years ago, if you were an academic, what you would do is, you would say, “Well, the best one of those got 85 percent, but I’m going to be very clever and I’m going to invent a new algorithm, a new way of scoring, and I’m going to try to get to 86 percent. If I do, then I can publish a paper about it.” And people play that game, and that’s why there’s four of them there.
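The counting scheme just described can be sketched in a few lines of Python. The seed definitions, the sample sentence, and the self-training step are all toy stand-ins for illustration, not the talk’s actual data or algorithm:

```python
from collections import Counter

# Two bags of words, seeded only from made-up dictionary definitions.
bags = {
    "bank1": Counter("sloping land beside a river".split()),                      # riverbank
    "bank2": Counter("institution that accepts deposits and lends money".split()),  # money bank
}

def score(sense, words):
    # How typical these words are of this sense -- it's all just counting.
    bag, total = bags[sense], sum(bags[sense].values())
    return sum(bag[w] / total for w in words)

def classify(sentence):
    # Pick the bag the sentence was most likely drawn from.
    words = sentence.lower().split()
    return max(bags, key=lambda s: score(s, words))

def learn(sentence):
    # Self-training: throw the sentence's words into the winning bag,
    # making that bag a better model of its sense.
    sense = classify(sentence)
    bags[sense].update(sentence.lower().split())
    return sense

sense = learn("the bank approved my loans and deposits")   # classified as "bank2"
```

Repeating the `learn` step over thousands of unlabeled sentences grows the bags exactly as described above; real systems also add smoothing so that unseen words do not score zero.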
Somebody did one, and three other people came along and said, “No, I can do better.” But what we soon realized is arguing over that and trying to come up with a better way is less effective than just saying, “Well, that was one million words of data. I can go back out on the web, and I can easily get up to ten million words,” and look — boy, that’s a big difference. And, “Ten million words, that’s not too much. I can get 100 million words. Or I can go up to a billion words.” I’m going up and up and the differences between these algorithms, we used to think it was so significant whether you used algorithm one or algorithm two. Those differences are small, but the difference from having more data tends to be big. Now it looks like we’re pretty much asymptoting out, so if I kept going, I’d probably get diminishing returns, and so the best I can do is probably somewhere around 95 or 97 percent correct — but that’s pretty good, considering the only work I did was to go out and gather up words and throw them into bags. And the phenomenon here of saying, “Here’s an application where if you have a small amount of data, tens of thousands of words, you don’t do very well, but if you have a lot, millions or billions, you do really well” — that’s this notion of this data threshold where you pass over from doing poorly to doing well. And you could say that companies like Google are built on looking for curves that look like this and saying, “Can I find problems for which I can do much better if I have a lot of data?” So that was one task. Now I want to move on to another task, or actually a whole field, with a collection of tasks, the field of computer vision. And why is that useful? Well, you can use it as one of the components of a self-driving car to figure out where the pedestrians and other cars are, so you can drive safely.
You can do things like this: This is an application of Google Translate where you point your phone at a sign in another language, and it translates the sign and puts the text in the right font right into the image on your phone. And, for me, as an amateur photographer, I spent a lot of time in the past organizing my photos, attaching keywords to them, and putting them into folders and reorganizing them when I felt I made a mistake, and then Google Photos came out with this ability to do that automatically. So I said, “Let’s try it. I’ll strip all the keywords off all my photos, I’ll throw them all in and see what happens.” This is where I get an advantage. You guys don’t get to show me any of your baby pictures or travel pictures, but I get to show them to you because it’s part of the presentation, right? And so you come out like this. It’s got categories, and the categories are pretty good. If you click into the categories, it’s done the right job, it’s put everything in the right category. And beyond the categories, I can search for things. So “birds” was a category. I can search for “pelicans,” and I get them. “Flowers” was a category. I can search for “clematis” and — oops — I only got four out of five. So the top right one, that’s not actually clematis. So it’s not perfect, but it’s doing a really good job. And beyond just doing binary classification of “Is this in the category, or is it not?” it’s also doing graded classification. So if you ask for a keyword, it will show you the best examples. So I ask for “nose,” and I get a bunch of really prominent noses, even though I have lots and lots of other pictures that happen to have noses in them. Okay, so that’s pretty transformative, that all this can be done, these millions of categories, all done automatically just by looking at examples and having some clever work of dealing with them. 
Now, I want to give you some idea of how this type of stuff is done, but there is a cliché that every time you show a math formula, you lose half the audience — and I’d have to show so many formulas that we’d be out of audience members. So I won’t do that, and instead I’ll use an analogy. Here’s the analogy: Suppose you run a company that sells these kind of tile displays, and you have a catalog where people can order — this looks like a nice scene of Tuscany, or something like that. You’ve got a catalog with a bunch of different tiles and people can go for them, but then you say, “You know what? This is the internet economy, so we should have something where people can upload their own picture and then we’ll make the tiles and ship them out to them.” It turns out that is really popular, but there is a problem, that making custom tiles one at a time is really expensive, so you’re not making a lot of profit on this. So you get the idea that what we should do is, we should have inventory of tiles, and when people upload their picture, we should go into that inventory, get out the tiles that could be pieced together to make their picture, and ship that out to them. Now the question is, what tiles, what little pieces should you put into inventory? That’s the question. How do you answer that? Well, one way is to say, “What pictures am I going to get in the future?” and that’s probably representative of pictures I’ve seen in the past.
Take pictures that people have already had, or wherever you can find them, get a bunch of pictures, and then there’s some fancy math that I’m going to omit here, which says, “Out of all those pictures, what are the most important pieces?” You probably want to have a piece that is all blue because there’s a lot of sky and lot of ocean and you need some blue for that. It doesn’t matter if it’s exactly the right shade, because we’re allowed to be a little bit off. You probably want some noses and eyes and faces because lots of pictures have people in them, and so on. I’m just thinking off the top of my head, but there is a mathematically precise answer to “What are the component pieces that best make up all the pictures in this collection?” So that was tried. Bruno Olshausen at Berkeley was one of the first to try it. He did it, and then he looked at the results, and it was very exciting to see. What do you see? This is like the primitive set of all things that can make up all pictures. And what he discovered is, pictures are made out of lines. So that’s kind of disappointing. It’s true, but we already knew that in kindergarten when they first handed us a pencil or a crayon. We knew you can make anything out of straight lines. And here there are some that are horizontal, some that are diagonal, some that are diagonal the other way. So this was a true result, but not a useful one. But we didn’t give up there. And the field as a whole said, “Well, maybe the problem is we’re only asking for components at the lowest level. What if we had three levels of inventory, and so the lines make up small pieces, and then those can make up bigger pieces, and those make up bigger pieces still?” And that was the transformative step. That happened around 2012 — there were some precursors before that — and it actually worked! 
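The “fancy math” being glossed over here can take several forms; Olshausen’s own work involved sparse coding, but the simplest stand-in for “what single piece best summarizes the patches?” is the first principal component. A pure-Python sketch on made-up four-pixel patches, using power iteration:

```python
def first_component(patches, iters=100):
    """Direction of greatest variance in the patches, found by power iteration."""
    n, d = len(patches), len(patches[0])
    # Center the data so we measure variation, not absolute brightness.
    means = [sum(p[j] for p in patches) / n for j in range(d)]
    X = [[p[j] - means[j] for j in range(d)] for p in patches]
    v = [1.0] * d
    for _ in range(iters):
        # One step of v <- normalize(C v), with C = (1/n) X^T X,
        # computed without ever forming the covariance matrix C.
        scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(scores[i] * X[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(wj * wj for wj in w) ** 0.5
        v = [wj / norm for wj in w]
    return v

# Toy patches that vary mostly along their first two pixels together --
# the recovered component is that shared "edge" direction.
patches = [[2, 2, 0, 0], [3, 3, 0, 0], [-2, -2, 0, 0], [1, 1, 0, 0]]
v = first_component(patches)
```

Removing each discovered component and repeating yields the next most important piece, and stacking such layers of pieces-of-pieces is the hierarchical step the talk describes.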
So when you do that, at the first level, you still get the lines, but at the next level, you get things like eyes and noses, and at the next level, you get things like faces. So this system automatically said, “A face is an important thing, and I’m going to allocate some of my inventory to that.” We’ve gone on from there. We build systems now, not with just three levels, but like with 20 or more levels, and there are lots of complications to it, but it all comes down to saying, “Find the most important pieces, combine them together, and now when you get something new, you can choose from that inventory to create a new one.” And what can you do with that? I want to show another example. We’ve shown a little bit of language stuff, and we’ve shown a little bit of computer vision stuff. One of the tasks is to put those two together. So here’s the task of caption creation. So given a picture, can you make a caption for it? So here’s a picture, and we ask a human to come up with a caption for it. They said, “Three different pizzas on top of a stove.” Then we trained the system by showing it lots and lots of pictures and corresponding captions and it learned a model of how to go from picture to caption. And then we showed it this picture, and it came up with two pizzas sitting on top of a stove-top oven. That’s pretty good, because you have to look really carefully to see that on the left there is a tomato, and it looks like a spinach or a pesto or something, so it’s doing a good job. I can go on like that, but it’s no fun to show the good jobs. So let’s show some of the mistakes, both because they’re fun and because it gives you some idea of where this type of system breaks down. So here’s a rather unusual picture, and the caption it came up with is, “A couple of giraffes standing next to each other.” So the system had no experience with horses in purple pajamas, but if you look at the pattern on the pajamas, it looks a little bit like the reticulated giraffe pattern. 
I don’t know why it came up with a couple rather than one, and that’s part of the problem with these systems — they make mistakes, and it’s hard to understand what the mistakes are from. Here’s another one, “A reflection of a dog in a side-view mirror.” Objects in mirror may be larger than they appear, and here dogs are just a lot more common than elephants. It didn’t have any experience with elephants in mirrors. Here’s one, “A man riding a skateboard.” I don’t know if you can see it from the back, but there are horizontal marks in the floor boards there that look maybe a little bit like the deck of a skateboard. There aren’t any wheels on the skateboard, but the system just somehow imagined that nobody, not even the King, could be in that position unless they were balancing on a skateboard. Okay, so this is the promise: We can do amazing things. It’s not quite perfect, but what are some of the challenges? We said we could go really fast, and that’s great, but sometimes when you go fast, things happen. What do we have to watch out for? Now, one of the fascinating things that’s come up in the last few years is this notion of adversaries. I can do pretty well on most queries. When I did the disambiguation, I got up to 97 percent, but what if there is somebody out there who is specifically giving me an example that's especially hard for my program? It understands what my program is doing, and it's trying to actively defeat it, rather than just observing nature. Here what I have is, schematically, the decision boundary of pictures, and all of the pictures on one side are labeled as pandas and all the pictures on the other side are labeled as gibbons. So it's done a pretty good job of that. You can see there are a couple of red and green that are across the wrong side, so it means those are ones that it got wrong, but mostly it gets them all right. 
But now, I can ask, "What if I took this picture of a panda and that dot represents this place in space where that picture occurs, and what if I just moved it, ever so slightly, until it went over the boundary and into gibbon territory?" I can do that because I have an understanding of my model, and I can do some math, and I can ask, "What's the minimum number of pixels that I have to just tweak a little bit in order to shift it over and make the computer give the wrong answer?" Or is it even the wrong answer — right? So here's the picture of the panda. Here's the computation of how much I have to move each of the pixels in each direction so some of the pixels get a little bit more red, some get a little bit more blue, and this .007 here means the result is mostly the original panda plus only 7/10 of a percent of the noise. It's a very small shift. And what do I get? Is the picture going to look like a half-panda, half-gibbon that is slightly more gibbon? What's it going to look like? What do you think? The answer is, it not only looks like a panda, it looks like the exact same picture — right? So I've made this change, which totally fooled my program but has absolutely no effect on fooling your visual system. Clearly, the program is doing something very different than you are, and by looking at the fuzziness of the noise we added in the middle, it seems that it has something to do with very high-frequency changes that your visual system just doesn't detect. So the aggregate accuracy, overall, might be very good, but it's making some really bizarre mistakes, and we want to understand that. And this really changed my way of thinking about this. Where I used to think that we had divided up the world pretty well, and maybe around the boundaries between two different types of images, we would make some small mistakes — but now I think it's not like that at all.
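The geometry of that move can be shown on a toy model. Everything here is hypothetical — a 20-pixel image and a linear "panda vs. gibbon" classifier standing in for a deep network — but the step is the same sign-of-the-gradient nudge that real attacks such as FGSM use:

```python
def predict(w, x):
    # Linear classifier: positive score means "panda", negative means "gibbon".
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "panda" if score > 0 else "gibbon"

def perturb(w, x, eps):
    # Shift every pixel by eps in the direction that lowers the panda score.
    # For a linear model, that worst-case direction is the sign of each weight.
    return [xi - eps * (1.0 if wi > 0 else -1.0) for wi, xi in zip(w, x)]

# Hypothetical weights, and an image that is only weakly "panda" to begin with.
w = [0.5 if i % 2 == 0 else -0.5 for i in range(20)]
x = [0.60 if i % 2 == 0 else 0.55 for i in range(20)]

adv = perturb(w, x, eps=0.05)   # no pixel moves by more than 0.05
```

Here `predict(w, x)` says "panda" while `predict(w, adv)` says "gibbon", even though every pixel moved by at most 0.05; in a million-pixel image the per-pixel shift needed is far smaller, which is why the perturbed panda looks identical to a person.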
Now I think there are these very narrow corridors that we get right, and in between them, we have no idea whatsoever. That there’s just zillions and zillions of possible pictures, and most of them, we have no idea. And the only reason we get a lot right statistically is because everybody takes almost exactly the same picture. So there’s a problem in debuggability and explainability. If I take a traditional program and it has a bug in it, there are techniques I can use and I can say, “Aha, I know that the bug must be in this module right here, because coming into it everything was good, and coming out of it, everything was bad, so, therefore, I have isolated the bug. It has to be here.” But with a machine learning system, it’s just not like that. There is no one place where you can say, “Everything was good flowing in, and it was bad flowing out.” Rather, everything affects everything now. So we can’t debug it, we can’t change it, and we can’t really ask it to explain itself. There’s a lot of work going on to make that better, but we haven’t got there yet. There are also a lot of issues around privacy, security and fairness. Privacy — if we’re going to base systems on data and data is valuable, then it would be nice if we could collect data from lots of people, but we don’t want that data to leak out. We want your data to be secure, and there are some nice systems for being able to train multiple systems, aggregate them together in a way that you can never get out individuals’ data, but that is a continuing research field.
Fairness, as well, is a big issue, where we have to say, “What is it that we really want, and are we achieving that properly?” So I’ve said we’ve changed the way we programmed from saying, “This is how we’re going to do something step by step,” to just saying, “Hey, here are some examples, and here’s what I’m telling you you’re trying to achieve.” But that puts a lot more emphasis on this decision for what is it that we’re trying to achieve. Let’s give you an example. Let’s say I’m doing a speech recognition system, and I achieve a certain level of accuracy, and I want to increase that. So I’m asking, “What’s my measure? Where am I at now?” and I say, “My measure is the total number of correct recognitions of words over all my users.” I’d say, “I’m at 95 percent, and I want to get to 96 percent.” Now, say I come up with an idea that says, “Hey, we’ve been trying to optimize across all of our users, but if we split the users up into groups, depending on the way they speak, then maybe I could optimize one of those groups at a time and I could increase those.” You’d say, “Great! That sounds like it’ll work.” Now, if I’m going to do that and I have a limited amount of engineering resources and there are five or ten different groups of speakers, who am I going to work on? Will I work on the large group with millions of people, or will I work on the small group with only thousands of people? Well, if my goal is to increase the overall score, then I’ve got to work on the big group, because then I’ll get a big return for a small amount of work. So that means, without me having any ill intentions, I’ve said my system is encouraging me to help the majority group get richer and the minority group get poorer. For speech recognition, this group could have to do with the shape of their larynx, or something like that, and in other applications, it could have to do with other demographic-type factors. 
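The pull toward the majority group falls straight out of the arithmetic. With made-up numbers, one point of accuracy for a million-person group moves the per-person average a hundred times more than the same point for a ten-thousand-person group:

```python
# Hypothetical user groups: (number of speakers, current accuracy).
groups = {"majority": (1_000_000, 0.96), "minority": (10_000, 0.80)}

def per_person(gs):
    # The original objective: accuracy averaged over people.
    total = sum(n for n, _ in gs.values())
    return sum(n * acc for n, acc in gs.values()) / total

def per_group(gs):
    # An alternative objective: accuracy averaged over groups, ignoring size.
    return sum(acc for _, acc in gs.values()) / len(gs)

base = per_person(groups)
# Spend the same engineering effort (+1 point of accuracy) on either group:
gain_big = per_person({**groups, "majority": (1_000_000, 0.97)}) - base
gain_small = per_person({**groups, "minority": (10_000, 0.81)}) - base
```

Under the per-person objective, `gain_big` is 100 times `gain_small`, so the objective quietly steers effort toward the majority; under `per_group`, the same two improvements count equally. Choosing between those objectives is exactly the fairness decision described above.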
If we’re okay with that, if we’re okay with saying, “The people who speak in this kind of voice will do better and we don’t have time to improve it for these small minority group speakers,” then that’s what we’ll get. If we think that’s not fair, then we have to change our objective. Then we have to say, “I thought I was trying to optimize over everybody, over the average, over all people, but instead, maybe I want to optimize over each group individually, maybe optimize over the score per group rather than the score per person.” And then I have to decide which groups are worthy of that kind of protection and which groups aren’t. So right-handers versus left-handers — does that count? People who are wearing blue shirts versus purple shirts — does that count? Or which groups are deserving of fair treatment, and which groups are just sort of contingent and we don’t have to worry about? So that’s a big question to deal with. And, finally, there is this problem about the sands of time, or change. We call it nonstationarity, which is just a fancy word, and it means that we train these systems with data that we have. But the world doesn’t stop — the world goes on, and new data is out there, and the world changes. And so a system that was trained to do one thing can degrade very easily over time as circumstances change, and now we have a whole new problem in saying, “How are we going to deal with that?” Are we going to update the system continuously? If we do that, are we going to lose old wisdom? Do we keep stuff from exactly one year ago because everything is different during the Christmas holiday and we don’t want to forget what we did last Christmas? In the winter, can we forget what we did in the summer? How do we deal with that, and how do we make systems that are secure, safe and verifiable, because we built the whole software engineering system on the idea of when do we release new systems? 
It used to be software companies would release only once a year; you got a new update. Now it’s much more frequent, because we can distribute over the web rather than by putting things in boxes and shipping them out, but we’re not yet to the point where we can reliably say, “We’re going to have a new update every second as new data comes in.” We don’t have the systems in place to do that kind of reliability. So these are the types of issues that we’re dealing with. I see a lot of promise. I also see a lot of challenges. It’s exciting to work on these challenges, and there are a lot of opportunities. A video of Dr. Peter Norvig’s lecture, “Creating Software with Machine Learning: Challenges and Promise,” which includes his slide presentation, is available at stevens.edu/lecture.
Second Examination
CS 225 Data Structures and Software Principles
Spring 2014
7-10p, Tuesday, April 8

Name:
NetID:
Lab Section (Day/Time):

- This is a closed book and closed notes exam. No electronic aids are allowed, either.
- You should have 5 problems total on 15 pages. The last sheet is scratch paper; you may detach it while taking the exam, but must turn it in with the exam when you leave. The first two questions should be answered on your scantron sheet. Please be sure that your netid is accurately entered on the scantron.
- Unless otherwise stated in a problem, assume the best possible design of a particular implementation is being used.
- Unless the problem specifically says otherwise, assume the code compiles, and thus any compiler error is an exam typo (though hopefully there are not any typos).
- We will be grading your code by first reading your comments to see if your plan is good, and then reading the code to make sure it does exactly what the comments promise.
- Please put your name at the top of each page.

<table> <thead> <tr> <th>Problem</th> <th>Points</th> <th>Score</th> <th>Grader</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>35</td> <td>scantron</td> <td></td> </tr> <tr> <td>2</td> <td>20</td> <td>scantron</td> <td></td> </tr> <tr> <td>3</td> <td>10</td> <td></td> <td></td> </tr> <tr> <td>4</td> <td>20</td> <td></td> <td></td> </tr> <tr> <td>5</td> <td>15</td> <td></td> <td></td> </tr> <tr> <td>Total</td> <td>100</td> <td></td> <td></td> </tr> </tbody> </table>

1. [Miscellaneous – 35 points].

**MC1 (2.5pts)** Suppose you implement a queue using a singly linked list with head and tail pointers so that the front of the queue is at the tail of the list, and the rear of the queue is at the head of the list. What is the best possible worst-case running time for enqueue and dequeue in this situation? (As a reminder, enqueue occurs at the rear of the queue.)

(a) O(1) for both functions.
(b) O(1) for enqueue and O(n) for dequeue.
(c) O(n) for enqueue and O(1) for dequeue. (d) O(n) for both functions. (e) None of these is the correct response. **MC2 (2.5pts)** Think of an algorithm that uses a Stack to efficiently check for unbalanced brackets. What is the maximum number of characters that will appear on the stack at any time when the algorithm analyzes the string (]\() [()])? (a) 3 (b) 4 (c) 5 (d) 6 (e) None of these is correct. **MC3 (2.5pts)** Consider a sequence of push and pop operations used to push the integers 0 through 9 on a stack. The numbers will be pushed in order, however the pop operations can be interleaved with the push operations, and can occur any time there is at least one item on the stack. When an item is popped, it is printed to the terminal. Which of the following could NOT be the output from such a sequence of operations? (a) 0 1 2 3 4 5 6 7 8 9 (b) 4 3 2 1 0 5 6 7 8 9 (c) 5 6 7 8 9 0 1 2 3 4 (d) 4 3 2 1 0 9 8 7 6 5 (e) All of these output sequences are possible. MC4 (2.5pts) Consider an array based implementation of a stack, and suppose that it is initially empty. Upon \( n \) push operations the array will be resized in such a way that the running time per push is \( O(1) \) per operation, on average. How many times is the array resized over the \( n \) pushes, using this scheme? (a) \( O(1) \) (b) \( O(\log n) \) (c) \( O(n) \) (d) \( O(n \log n) \) (e) \( O(n^2) \) MC5 (2.5pts) Fill in the blanks so that the following sentence is true: If you have a complete tree with 17 nodes, the height \( (h) \) of the tree is _____ and there are _____ nodes on level \( h \). (a) First blank is 4, second is 1. (b) First blank is 5, second is 2. (c) First blank is 8, second is 2. (d) First blank is 8, second is 9. (e) None of the above options makes the sentence true. MC6 (2.5pts) Consider a level order traversal of the following binary tree. Which node is the last node enqueued before the node containing y is dequeued? (a) The node containing c. (b) The node containing o. 
(c) The node containing m. (d) The node containing s. (e) None of these is the correct answer. MC7 (2.5pts) How many data structures in this list can be used to implement a Dictionary so that all of its functions have strictly better than $O(n)$ running time (worst case)? linked list stack queue binary search tree AVL tree (a) 1 (b) 2 (c) 3 (d) 4 (e) 5 MC8 (2.5pts) Suppose that we have numbers between 1 and 1000 in a binary search tree and we want to search for the number 363. Which of the following sequences cannot be the sequence of nodes visited in the search? (a) 2, 252, 401, 398, 330, 344, 397, 363 (b) 924, 220, 911, 244, 898, 258, 362, 363 (c) 2, 399, 387, 219, 266, 382, 381, 278, 363 (d) 925, 202, 911, 240, 912, 245, 363 (e) 935, 278, 347, 621, 399, 392, 358, 363 MC9 (2.5pts) Consider the nearly balanced Binary Search Tree in the figure below. [figure: nearly balanced binary search tree, not reproduced] Perform the appropriate rotation about R to restore the height balance of the tree. What is the level order traversal of the tree after it has been balanced? (a) R E X C M S Y A H P F (b) R M X E P S Y C H A F (c) M E R C H P X A F S Y (d) E C M A H P F R X S Y (e) None of these is the correct level order traversal. MC10 (2.5pts) Consider the BTree in the figure below. [figure: BTree diagram, partially recovered: keys 12 31 at the root; keys 20 25, 33 37, 42 50 in the children] How many disk seeks are required during the execution of `Find(42)`? Please assume that none of the data exists in memory when the function call is made. (a) 1 (b) 2 (c) 4 (d) 5 (e) The number of disk seeks cannot be determined because we do not know the order of the tree. MC11 (2.5pts) In general, for an order $m$ BTree containing $n$ keys, the number of disk seeks is __________. (a) $O(1)$ (b) $O(\log n)$ (c) $O(n)$ (d) $O(n \log n)$ (e) None of these is accurate because they ignore the order of the tree. MC12 (2.5pts) Which of the following trees is a Huffman Tree for the following string of characters?
```
b a b a c a d a c a b a b
```

[figure: candidate Huffman tree diagrams for options (a)–(d) not reproduced]

(a) (b) (c) (d) (e) None of these. MC13 (2.5pts) Suppose we would like to build a dictionary that maps a set of student names (short strings) to a study group identifier. Which of the following would work as a key function for our dictionary? Hint: the ordering of the students in the original set should not matter. (a) Concatenate the names. (b) Sort the students’ names and then sum the values of the characters in their names. (c) Sort each name by character, then form a concatenation of all the sorted names. (d) Sort and then concatenate the first letters of the students’ names. (e) None of the above is correct. MC14 (2.5pts) Suppose a hash table has size 10, and that the search keys are strings consisting of 3 lower case letters. We want to hash 7 unknown values from this keyspace. In the hash function, when we refer to the alphabet positions of the letters, we mean: “a” = 1, “b” = 2, …, “z” = 26. \[ h(k) = (\text{product of the alphabet positions of } k\text{'s letters})^4 \mod 10 \] Which of these ideal hash function characteristics are violated by this hash function? (i) A good hash function distributes the keys uniformly over the array. (ii) A good hash function is deterministic. (iii) A good hash function is computed in constant time. (a) Only (i) is violated. (b) Only (ii) is violated. (c) Only (iii) is violated. (d) At least two of (i), (ii) and (iii) are violated. (e) None of these characteristics are violated–our hash function is a good one! 2. **[Efficiency – 20 points].** Each item below is a description of a data structure, its implementation, and an operation on the structure. In each case, choose the appropriate running time from the list below. The variable \( n \) represents the number of items (keys, data, or key/data pairs) in the structure.
In answering this question you should assume the best possible implementation given the constraints, and also assume that every array is sufficiently large to handle all items (unless otherwise stated). (a) \( O(1) \) (b) \( O(\log n) \) (c) \( O(n) \) (d) \( O(n \log n) \) (e) \( O(n^2) \) (MC 15) ___ The slower of *Enqueue* or *Dequeue* for a *Queue* implemented with an array. (MC 16) ___ Find the maximum key in a Binary Tree (not necessarily BST). (MC 17) ___ Find the *In Order Predecessor* of a given key in a Binary Tree (if it exists). (MC 18) ___ Find the *In Order Predecessor* of a given key in an AVL Tree (if it exists). (MC 19) ___ Perform *rightLeftRotate* around a given node in an AVL Tree. (MC 20) ___ Determine if a given Binary Search Tree is height balanced. (MC 21) ___ Build a binary search tree (not AVL) with keys that are the numbers between 0 and \( n \), in that order, by repeated insertions into the tree. (MC 22) ___ Remove the right subtree from the root of an AVL tree, and restore the height balance of the structure. 3. [MP4ish – 10 points]. (a) (4 points) Suppose we execute slight modifications of MP4 functions \texttt{BFSfillSolid}, and \texttt{DFSfillSolid} on the hexagonal grid above, beginning at the “Start” cell, and changing white pixels to red. If the functions are executed simultaneously, which function changes the cell marked “End” to red, first? Assume that we start the algorithm by adding the “Start” cell to the ordering structure, and that we add the six neighboring cells to the structure clockwise beginning on the top. As a reminder, the fill should change the color when a cell is \textit{removed} from the ordering structure. \texttt{BFSfillSolid} \hspace{1cm} \texttt{DFSfillSolid} (b) (2 points) What ordering structure did you use in your answer to part (a)? \texttt{Queue} \hspace{1cm} \texttt{Stack} (c) (4 points) Suppose we want to fill some part of an arbitrary grid containing \(n\) cells. 
What is the worst-case running time of \texttt{BFSfillSolid} if we start from an arbitrary location? \(O(1)\) \hspace{0.5cm} \(O(\log n)\) \hspace{0.5cm} \(O(n)\) \hspace{0.5cm} \(O(n \log n)\) \hspace{0.5cm} \(O(n^2)\) 4. [Quadtrees – 20 points]. For this question, consider the following partial class definition for the Quadtree class, which uses a quadtree to represent a square PNG image as in MP5. ```cpp class Quadtree { public: // ctors and dtor and all of the public methods from MP5, including: void buildTree(PNG const & source, int resolution); RGBApixel getPixel(int x, int y) const; PNG decompress() const; void prune(int tolerance); ... // a NEW function for you to implement void prunish(int tolerance, double percent); private: class QuadtreeNode { QuadtreeNode* nwChild; // pointer to northwest child QuadtreeNode* neChild; // pointer to northeast child QuadtreeNode* swChild; // pointer to southwest child QuadtreeNode* seChild; // pointer to southeast child RGBApixel element; // the pixel stored as this node's "data" }; QuadtreeNode* root; // pointer to root of quadtree, NULL if tree is empty int resolution; // init to be the resolution of the quadtree NEW int distance(RGBApixel const & a, RGBApixel const & b); // returns sq dist between colors void clear(QuadtreeNode * & cRoot); // free memory and set cRoot to null // a couple of private helpers are omitted here. }; ``` You may assume that the quadtree is perfect and that it has been built from an image that has size $2^k \times 2^k$. As in MP5, the element field of each leaf of the quadtree stores the color of a square block of the underlying PNG image; for this question, you may assume, if you like, that each non-leaf node contains the component-wise average of the colors of its children. You may not use any methods or member data of the Quadtree or QuadtreeNode classes which are not explicitly listed in the partial class declaration above. 
You may assume that each child pointer in each leaf of the Quadtree is NULL. (a) (4 points) Write a **private** member function `int Quadtree::tallyNear(RGBApixel const & target, QuadtreeNode const * curNode, int tolerance)`, which calculates the number of leaves in the tree rooted at `curNode` with element less than or equal to `tolerance` distance from `target`. You may assume that you are working on a perfect (unpruned), non-empty Quadtree. Write the method as it would appear in the quadtree.cpp file for the `Quadtree` class. We have included a skeleton for your code below—just fill in the blanks to complete it. ```cpp int Quadtree::tallyNear(RGBApixel const & target, QuadtreeNode const * curNode, int tolerance) { // function not called with curNode == NULL; if (curNode->__________ == __________) { // check for leaf RGBApixel current = curNode->element; if (distance(current, target) __________ tolerance) return __________; else return 0; } // otherwise...recurse! int devTotal = _____________________ return __________; } ``` (b) (6 points) Our next task is to write a private member function declared as `void Quadtree::prunish(QuadtreeNode * curNode, int tolerance, int res, double percent)` whose functionality is very similar to the `prune` function you wrote for MP5. Rather than prune a subtree if ALL leaves fall within a tolerance of the current node’s pixel value, `prunish` will prune if at least `percent` of them do. Parameter `res` is intended to represent the number of pixels on one side of the square represented by the subtree rooted at `curNode`. All the constraints on pruning from the `prune` function apply here, as well. That is, you should prune as high up in the tree as you can, and once a subtree is pruned, its ancestors should not be re-evaluated for pruning. As before, we’ve given you most of the code below. Just fill in the blanks on the next page. 
void Quadtree::prunish(QuadtreeNode * curNode, int tolerance, int res, double percent) { if (curNode == NULL) return; // count the number of leaves within tolerance distance of curNode's element int nearNodes = _________________; // (1 point) double percentNear = _______________; // (1 point) // prune conditions if ( __________________________) { // (2 points) clear(curNode->neChild); clear(curNode->nwChild); clear(curNode->seChild); clear(curNode->swChild); return; } // can’t prune here :( so recurse! _________________ // (2 points) _________________ _________________ _________________ _________________ return; } (c) (2 points) Next, write the public member function void Quadtree::prunish(int tolerance, double percent) that prunes from the Quadtree any subtree with more than percent leaves within tolerance color distance of the subtree’s root. void Quadtree::prunish(int tolerance, double percent) { _________________ return; } (d) In this part of the problem we will derive an expression for the maximum number of nodes in a Quadtree of height \( h \), and prove that our solution is correct. Let \( N(h) \) denote the maximum number of nodes in a Quadtree of height \( h \). i. (3 points) Give a recurrence for \( N(h) \). (Don’t forget appropriate base case(s).) We solved the recurrence and found a closed form solution for \( N(h) \) to be: \[ N(h) = \frac{4^{h+1} - 1}{3}, \quad h \geq -1 \] ii. (3 points) Prove that our solution to your recurrence from part (i) is correct by induction: Consider a maximally sized Quadtree of arbitrary height \( h \). - If \( h = -1 \) then the expression above gives: ________ which is the maximum number of nodes in a tree of height -1 (briefly explain). - otherwise, if \( h > -1 \) then by an inductive hypothesis that says: we have \( N(____) = \) __________ nodes. so that \( N(h) = \) _______________ = ________________, which was what we wanted to prove. iii.
(2 points) Use your result from part (d) to give a lower bound for the height of a quadtree containing \( n \) nodes. 5. [Stacks and Queues – 15 points]. In this problem you will write a function reverseOdd that takes a queue of integers as a parameter, and that modifies that queue, reversing the order of the odd integers in the queue while leaving the even integers in place. For example, given this queue (back to front): ``` < 14 13 17 8 4 10 11 4 15 18 19 > ``` calling the function would change it to: ``` < 14 19 15 8 4 10 11 4 17 18 13 > ``` We have given you the Stack and Queue interfaces below. You may also assume the existence of a helper function isOdd() that returns true for odd integers and false for even integers.

```cpp
template <typename T> class Stack {
public:
    // ctors and dtor and all of the public methods, including:
    T pop();
    void push(T data);
    bool isEmpty();
private: ...
};

template <typename T> class Queue {
public:
    // ctors and dtor and all of the public methods, including:
    T dequeue();
    void enqueue(T data);
    bool isEmpty();
private: ...
};
```

(a) (3 points) Write a function that returns the size of a queue.

```cpp
int Qsize(Queue<int> & q) {

}
```

(b) (8 points) Write the function `reverseOdd` as described above. We have provided you with a skeleton. Just fill in the blanks!

```cpp
void reverseOdd(Queue<int> & input) {
    Stack<int> s;
    int n = Qsize(input);
    int counter = 0;
    while (__________) {
        int temp = input.dequeue();
        if __________
            __________;
        __________;
        counter++;
    }
    __________;
    __________;
    counter = 0;
    while (__________) {
        int temp = input.dequeue();
        if __________
            __________;
        __________;
        counter++;
    }
}
```

(c) (2 points) Suppose the queue contains $O(n)$ even integers, and $O(\log n)$ odd integers. What is the worst case total running time of the algorithm? Give the tightest bound you can. (d) (2 points) Suppose the queue contains $O(n)$ even integers, and $O(\log n)$ odd integers. How much memory does the algorithm use? Give the tightest bound you can. scratch paper
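For readers reviewing this problem afterwards, here is one possible way to realize the behavior described in parts (a) and (b), sketched with the C++ standard containers (`std::queue`/`std::stack`) instead of the exam's restricted `Queue`/`Stack` interfaces. This is a study sketch, not the official solution key, and `Qsize` deliberately avoids `std::queue::size()` to mirror the exam interface, which only exposes enqueue, dequeue, and isEmpty.

```cpp
#include <queue>
#include <stack>

// Part (a) sketch: compute the size by moving every element through an
// auxiliary queue while counting, then moving everything back so the
// original order is preserved.
int Qsize(std::queue<int>& q) {
    std::queue<int> aux;
    int n = 0;
    while (!q.empty()) { aux.push(q.front()); q.pop(); ++n; }
    while (!aux.empty()) { q.push(aux.front()); aux.pop(); }  // restore order
    return n;
}

// Part (b) sketch: reverse the order of the odd integers while leaving
// the even integers in place.
void reverseOdd(std::queue<int>& input) {
    int n = Qsize(input);
    std::stack<int> odds;

    // Pass 1: cycle all n elements through the queue once, copying each
    // odd value onto a stack (re-enqueueing everything keeps the queue
    // itself unchanged after the pass).
    for (int i = 0; i < n; ++i) {
        int temp = input.front(); input.pop();
        if (temp % 2 != 0) odds.push(temp);
        input.push(temp);
    }

    // Pass 2: cycle through again; each odd "slot" is refilled by popping
    // the stack, which yields the odd values in reverse order. Even values
    // pass through untouched.
    for (int i = 0; i < n; ++i) {
        int temp = input.front(); input.pop();
        if (temp % 2 != 0) { input.push(odds.top()); odds.pop(); }
        else input.push(temp);
    }
}
```

On the example queue from the problem (front to back: 19 18 15 4 11 10 4 8 17 13 14), this produces 13 18 17 4 11 10 4 8 15 19 14, matching the expected output.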
The univie-ling-thesis class
Jürgen Spitzmüller
Version 2.0, 2022/10/02

Abstract

The univie-ling-thesis class provides a \LaTeX\ class suitable for Bachelor's, Master's, Diploma and Doctoral theses in (Applied) Linguistics at the Department of Linguistics, University of Vienna. The class implements some standards required for those theses (such as a suitable title page) and pre-loads some packages that are considered useful in the context of Applied Linguistics. The class might also be used for General and Historical Linguistics as well as for other fields of study at the University of Vienna. In this case, however, some settings might have to be adjusted. This manual documents the class as well as its configuration possibilities.

Contents

1 Aims and scope
2 Requirements of univie-ling-thesis
3 Fonts
4 Class Options
 4.1 Font selection
 4.2 Polyglossia
 4.3 Package loading
 4.4 Draft mode
 4.5 Further options
5 General settings
 5.1 Author-related data
 5.2 Thesis-related data
6 Declaration
7 Semantic markup
8 Linguistic examples and glosses
9 Bibliography
 9.1 Default bibliography style (Unified Style for Linguistics)
 9.2 Using APA/DGPs style
 9.3 Using a different style
10 Further instructions

1 Aims and scope

The univie-ling-thesis class has been written mainly with my own field in mind: Applied Linguistics. Therefore, the defaults are closely tied to the requirements in this field. This particularly concerns the preloaded bibliography style, which conforms to the standards proposed by the Viennese Applied Linguistics staff (see sec. 9). Furthermore, some frequently used packages (such as covington) are pre-loaded. As documented later, however, you can disable this (and other) defaults, use a bibliography style of your choice, and load alternative packages. The design matches as closely as necessary the standards set up by the university.
This particularly concerns the title page, which follows the recommendations specified by the StudienServiceCenter.\footnote{See \url{http://ssc-philkultur.univie.ac.at/studium/masterstudien/abgabe-der-masterarbeit} for master’s theses, \url{http://ssc-philkultur.univie.ac.at/studium/doktoratsstudium-neu-792-xxx/dissertation} for doctoral theses <25.01.2017>.}

2 Requirements of univie-ling-thesis

The following class and packages are required and loaded by univie-ling-thesis:

- \texttt{scrreprt}: KOMA-Script report class (base class).
- \texttt{array}: Tabular extensions.
- \texttt{csquotes}: Context-sensitive quotations.
- \texttt{geometry}: Page layout settings.
- \texttt{graphicx}: Graphics support.
- \texttt{l3keys}: Key-value interface for class options.
- \texttt{scrlayer-scrpage}: Page header settings.
- \texttt{setspace}: Line spacing adjustments.
- \texttt{translator}: Localization machinery.
- \texttt{url}: Support for typesetting URLs.

The following packages are required for specific features and loaded by default. However, the loading can be omitted individually and generally (see sec. 4):

- \texttt{mathpazo}: Default serif font (\textit{Palatino}).
- \texttt{urw-arial} or \texttt{helvet}: Default sans serif font (\textit{Arial} or \textit{Helvetica}).
- \texttt{sourcecodepro}: Default monospaced font (\textit{Source Code Pro}).
- \texttt{biblatex}: Contemporary bibliography support.
- \texttt{caption}: Caption layout adjustments.
- \texttt{covington}: Support for linguistic examples/glosses.
- \texttt{fontenc}: Set the font encoding for PostScript fonts. Loaded with option \texttt{T1}.
- \texttt{microtype}: Micro-typographic adjustments.
- \texttt{prettyref}: Verbose cross-references.
- \texttt{varioref}: Context-sensitive cross-references.

The following packages are required for optional features (not used by default):

- \texttt{biblatex-apa}: APA style for biblatex.
- \texttt{draftwatermark}: Create a draft mark.
- \texttt{fontspec}: Load OpenType fonts (with LuaTeX or XeTeX).
- \texttt{polyglossia}: Multi-language and script support.
- \texttt{pdfa}: Create PDF/A compliant files.

3 Fonts

The class uses, by default, PostScript (a. k. a. Type 1) fonts and thus requires classic (PDF)\LaTeX. Optionally, however, you can also use OpenType fonts via the \texttt{fontspec} package and the XeTeX or LuaTeX engine instead. In order to do this, use the class option \texttt{fonts=otf} (see sec. 4 for details). In both cases, the class uses by default \textit{Palatino} as a serif font, \textit{Arial} (or, alternatively, \textit{Helvetica}) as a sans serif font, and \textit{Source Code Pro} as a monospaced (typewriter) font. Note that \textit{Arial} (PostScript) font support is not included in most \LaTeX\ distributions by default, due to license reasons. You can install it easily via the \texttt{getnonfreefonts} script.\footnote{https://www.tug.org/fonts/getnonfreefonts <25.01.2017>} If \textit{Arial} is not installed, however, \textit{Helvetica} (which is part of the \LaTeX\ core packages) is used as a fallback. This is usually a suitable alternative, since \textit{Arial} and \textit{Helvetica} differ only in subtle details. If you use \texttt{fonts=otf}, you just have to make sure that you have the fonts \textit{Arial}, \textit{Palatino} and \textit{Source Code Pro} installed on your operating system (with exactly these names, i. e., fonts named \textit{Arial Unicode MS} or \textit{Linotype Palatino} will not be recognized!). As to font selection: \textit{Arial} is used in the official title page template\footnote{See https://ssc-philkultur.univie.ac.at/en/degree-programmes/doctoral-programme/doctoral-thesis/}, so I used that (with the mentioned fallback, \textit{Helvetica}). A serif font recommendation is not given. The corporate design manual of the university suggests \textit{Source Serif Pro}, but this font is too heavy for a thesis (and we cannot use the suggested sans, \textit{Source Sans Pro}, anyway).
Something of a standard for academic papers and theses is \textit{Times New Roman}, but I think it runs too small for the text block size we have (as the name suggests, this font was developed for the narrow columns of newspapers), and it is also pretty overused. The selected font, \textit{Palatino}, in contrast, suits wider text lines well, it is well supported in \LaTeX, and it fits with Helvetica. The monospaced font, Source Code Pro, is a good companion to this pair. If you (or your instructor) prefer Times New Roman over Palatino nonetheless, add \texttt{\usepackage{mathptmx}} to your preamble if you use PostScript fonts, or \texttt{\setmainfont{Times New Roman}} if you use \texttt{fonts=otf}, respectively. If you like neither Palatino nor Times: a recommendable serif font (and actually the former ‘professional’ house font of the University of Vienna) is Minion Pro, supported by the excellent FontPro package.\footnote{https://github.com/sebschub/FontPro <25.01.2017>.} However, some effort is needed to install the package and fonts. Please refer to the package’s documentation in case you are interested. A more accessible alternative, with a style similar to Minion Pro, is the Crimson font, which is available in modern \LaTeX\ distributions via the cochineal package. Another good option is the font this manual is typeset in, Linux Libertine (via the libertine package). Note that by default, with PostScript fonts, univie-ling-thesis also loads the fontenc package with T1 font encoding, but this can be customized (see sec. 4 for details). If you want (or need) to load all fonts and font encodings manually, you can switch off all automatic loading of fonts and font encodings with the class option \texttt{fonts=none} (see sec. 4).

4 Class Options

The univie-ling-thesis class provides a range of key=value type options to control the font handling, package loading and some specific behavior.
These are documented in this section.

4.1 Font selection

As elaborated above, the package supports PostScript fonts (via \LaTeX\ and PDF\LaTeX) as well as OpenType fonts (via XeTeX and LuaTeX). PostScript is the traditional \LaTeX\ font format. Specific \LaTeX\ packages and metrics files are needed to use the fonts (but all fonts needed to use this class should be included in your \LaTeX\ distribution and thus ready to use). OpenType fonts, by contrast, are taken directly from the operating system. They usually provide a wider range of glyphs, which might be a crucial factor for a linguistic thesis. However, they can only be used by the newer \TeX\ engines XeTeX and LuaTeX. The class provides the following option to set the font handling:

\texttt{fonts=ps|otf|none}: if \texttt{ps} is selected, PostScript fonts are used (this is the default and the correct choice if you use \LaTeX\ or PDF\LaTeX); if \texttt{otf} is selected, OpenType fonts are used, the class loads the fontspec package, and sets \textit{Palatino} as main font and \textit{Arial} as sans serif font (this is the correct choice if you use XeTeX or LuaTeX; make sure you have the respective fonts installed on your system); if \texttt{none} is selected, finally, the class will not load any font package at all, and neither fontenc (this choice is useful if you want to control the font handling completely yourself).

With PostScript fonts, univie-ling-thesis also loads the fontenc package with T1 font encoding, which is suitable for most Western European (and some Eastern European) writing systems. In order to load different, or more, encodings, the class option \texttt{fontenc=<encoding(s)>} can be used (e.g., \texttt{fontenc={T1,X2}}). With \texttt{fontenc=none}, the loading of the fontenc package can be prevented. The package is also not loaded with \texttt{fonts=none}.
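Put together, a minimal document setup using these options might look as follows. This is a sketch; the X2 encoding is only an illustrative addition for Cyrillic support and is not required by the class:

```latex
% Classic (PDF)LaTeX with PostScript fonts and an extra font encoding:
\documentclass[fonts=ps,fontenc={T1,X2}]{univie-ling-thesis}

% Alternatively, for XeTeX or LuaTeX with OpenType system fonts:
% \documentclass[fonts=otf]{univie-ling-thesis}

\begin{document}
% ...
\end{document}
```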
### 4.2 Polyglossia

If you need polyglossia rather than babel for language support, please do not use the package yourself, but rather use the package option `polyglossia=true`. This assures the correct loading order. It also presets `fonts=otf`.

### 4.3 Package loading

Most of the extra features provided by the class can be switched off. This might be useful if you do not need the respective feature anyway, and crucial if you need an alternative package that conflicts with one of the preloaded packages. All following options are `true` by default. They can be switched off one-by-one via the value `false`, or altogether, by means of the special option `all=false`. You can also switch selected packages on/off again after this option (e.g., `all=false,microtype=true` will switch off all packages except microtype).

- **apa=true|false**: If `true`, the biblatex-apa style is used when biblatex is loaded. By default, the included univie-ling style is loaded, instead. See sec. 9 for details.

- **biblatex=true|false**: If `false`, biblatex is not loaded. This is useful if you prefer Bib\TeX\ over biblatex, but also if you neither want to use the preloaded univie-ling style nor the alternative biblatex-apa style (i.e., if you want to load biblatex manually with different options). See sec. 9 for details.

- **caption=true|false**: If `false`, the caption package is not loaded. This affects the caption layout.

- **covington=true|false**: If `false`, covington is not loaded. Covington is used for numbered examples.

- **microtype=true|false**: If `false`, microtype is not loaded.

- **ref=true|false**: If `false`, both prettyref and varioref are not loaded and the string (re)definitions of the class (concerning verbose cross references) are omitted.

4.4 Draft mode

The option `draftmark=true|false|firstpage` allows you to mark your document as a draft, which is indicated by a watermark (including the current date).
This might be useful when sharing preliminary versions with your supervisor. With `draftmark=true`, this mark is printed on top of each page. With `draftmark=firstpage`, the draft mark appears on the title page only.

4.5 Further options

`fdegree=true|false`: Prefer feminine forms for the targeted degree on the title page (such as Magistra, Doktorin). Default: `false`.

`pdfa=true|false`: Generate PDF/A compliant output (see sec. 10.2 for details). Default: `false`.

The class builds on `scrreprt` (KOMA report), which provides many more options to tweak the appearance of your document. You can use all these options via the `\KOMAoptions` macro. Please refer to the KOMA-Script manual [4] for details.

5 General settings

In this section, it is explained how you can enter some general settings, in particular the information that must be given on the title page. The title page, following the university guidelines, is automatically set up by the `\maketitle` command, given that you have specified the following data in the preamble.

5.1 Author-related data

`\author{<name>}`: Name of the thesis author. Separate multiple authors by `\and`.

`\studienkennzahl{<code>}`: The degree programme code (`Studienkennzahl`) as it appears on the student record sheet, e.g. A 792 327.

`\studienrichtung{<field of study>}`: Your degree programme (`Studienrichtung`) or field of study (`Dissertationsgebiet`) as it appears on the student record sheet, e.g. Sprachwissenschaft.

5.2 Thesis-related data

`\thesistype{<type>}`: The type of your thesis. Possible values include `magister` (Magisterarbeit), `diplom` (Diploma Thesis), `bachelor` (Bachelor’s Thesis), `master` (Master’s Thesis) and `diss` (Doctoral Thesis).

`\title{<title>}`: Title of the thesis.

`\subtitle{<subtitle>}`: Subtitle of the thesis.

`\volume{<current>}{<total>}`: If your thesis consists of multiple volumes, specify the current volume as well as the total number of volumes.
- `\supervisor{<name>}`: Title and name of your (main) supervisor.
- `\cosupervisor{<name>}`: Title and name of your co-supervisor.

The suitable degree (Angestrebter akademischer Grad in German) is automatically set by the `\thesistype` command, but you can override it with the optional command `\degree{<custom degree>}`. Note that feminine forms of degrees, where appropriate, are used if you use the class option `fdegree=true` (see sec. 4).

## 6 Declaration

It is possible to automatically generate a page with a declaration where you declare and sign that you have followed research ethics/anti-plagiarism rules (Selbständigkeitserklärung) by means of the command `\makedeclaration`. Such a declaration is needed for BA theses.

## 7 Semantic markup

The class defines some basic semantic markup common in linguistics:

- `\Expression{<text>}`: To mark expressions (object language). Typeset in *italics*.
- `\Concept{<text>}`: To mark concepts. Typeset in SMALL CAPITALS.
- `\Meaning{<text>}`: To mark meaning. Typeset in 'single quotation marks'.

You can redefine each of these commands, if needed, like this:

```latex
\renewcommand\Expression[1]{\textit{#1}}
\renewcommand\Concept[1]{\textsc{#1}}
\renewcommand\Meaning[1]{\enquote{#1}}
```

## 8 Linguistic examples and glosses

The class automatically loads the covington package, which provides macros for examples and glosses. Please refer to the covington manual [1] for details.

## 9 Bibliography

### 9.1 Default bibliography style (Unified Style for Linguistics)

By default, the univie-ling-thesis class loads a bibliography style which matches the conventions that are recommended by the Applied Linguistics staff of the department (see http://www.spitzmueller.org/docs/Zitierkonventionen.pdf). These conventions draw on the *Unified Style Sheet for Linguistics* of the LSA (*Linguistic Society of America*), a style that is also quite common in General Linguistics nowadays.
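As a minimal sketch of using the preloaded default style (the file name `references.bib` and the citation key `sapir1921` are invented placeholders, and the title data commands from sec. 5 are omitted for brevity):

```latex
% Sketch only: the class preloads biblatex with the univie-ling style,
% so no \usepackage[...]{biblatex} is needed.
\documentclass{univie-ling-thesis}
\addbibresource{references.bib} % placeholder bibliography file

\begin{document}
\maketitle
As \textcite{sapir1921} argues \dots{} a view that has been
discussed widely \parencite[see][12]{sapir1921}.
\printbibliography
\end{document}
```

Run biber (not BibTeX) between LaTeX runs to resolve the citations.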
In order to conform to this style, the univie-ling-thesis class uses the biblatex package with the univie-ling style that is included in the univie-ling-thesis package. If you are in Applied Linguistics, using the default style is highly recommended. The style recommended until 2017, namely APA/DGPs, is also still supported, but its use is no longer encouraged; see sec. 9.2 for details. If you want or need to use a different style, please refer to sec. 9.3 for instructions.

### 9.2 Using APA/DGPs style

Until 2017, rather than the Unified Style, the Applied Linguistics staff recommended conventions that drew on the citation style guide of the APA (*American Psychological Association*) and its adaptation for German by the DGPs (*Deutsche Gesellschaft für Psychologie*). For backwards compatibility reasons, this style is still supported (though not recommended). You can enable it with the class option `apa=true`. If you want to use APA/DGPs style, consider the following caveats.

- For full conformance with the APA/DGPs conventions (particularly with regard to the rather tricky handling of "and" vs. "&" inside and outside of parentheses), it is mandatory that you adequately use the respective biblatex(-apa) citation commands: use `\textcite` for all inline citations and `\parencite` for all parenthesized citations (instead of manually wrapping `\cite` in parentheses). If you cannot avoid manually set parentheses that contain citations, use `\nptextcite` (a biblatex-apa-specific command) inside them. For quotations, it is recommended to use the quotation macros/environments provided by the csquotes package (which is preloaded by univie-ling-thesis anyway); the univie-ling-thesis class assures that citations are correct if you use the optional arguments of those commands/macros in order to insert references.
- The biblatex-apa style automatically lowercases English titles.
This conforms to the APA (and DGPs) conventions, which favour "sentence casing" over "title casing". English titles, from biblatex's point of view, are titles of bibliographic entries that are either coded as `english` via the `LangID` entry field or that have no `LangID` coding but appear in an English document (i.e., a document with main language English). Consequently, if the document's main language is English, all non-English entries need to be linguistically coded (via `LangID`) in order to prevent erroneous lowercasing, since biblatex assumes that non-identified entries use the main language (hence, such a classification is also important for correct hyphenation of the entries). Note that up to biblatex 3.3, the document language was not taken into account by the lowercasing automatism, and all non-classified entries were treated like English entries (and thus lowercased), notwithstanding the main language; therefore, every entry needed to be coded. Even though this misbehaviour is fixed as of biblatex 3.4, it is still advisable to systematically set the proper `LangID`, since this is a prerequisite for a correct multilingual bibliography.

- The lowercasing automatism described above cannot deal properly with manual punctuation inside titles. Hence, a title such as *Main title. A subtitle* will come out as *Main title. a subtitle*. There are several ways to avoid that. The most proper one is to use the `title` and `subtitle` fields rather than adding everything to `title`. Alternatively, everything that is nested inside braces will not get lowercased, i.e. `Main title. {A} subtitle` will produce the correct result. This trick is also needed for names and other elements that should not get lowercased (`Introduction to {Germanic} linguistics`).
However, please do not configure your BibTeX editor to generally embrace titles (a feature provided by many editors), since this will prevent biblatex-apa from lowercasing at places where it should be done.

- The biblatex-apa style requires that you use biber as a bibliography processor instead of BibTeX (the program). See [3] for details.

### 9.3 Using a different style

If you do not want, or are not supposed, to use either the default Unified or the APA/DGPs style, you can disable automatic biblatex loading via the class option `biblatex=false` (see sec. 4.3). In this case, you will need to load your own style manually by entering the respective commands. One case where you need to do that is if you prefer classic BibTeX over biblatex. If you want to follow the Applied Linguistics conventions but prefer classic BibTeX over biblatex, a BibTeX style file `unified.bst` that implements the *Unified Style Sheet for Linguistics* is available on the Internet (http://celxj.org/downloads/unified.bst). Note, though, that this style file does not have specific support for German, so it is only really suitable if you write in English. Thus, if you want to follow the Applied Linguistics conventions, it is strongly recommended that you use biblatex with the preloaded univie-ling style.

## 10 Further instructions

### 10.1 Commands and environments

Since the class draws on `scrreprt`, you can use all commands and environments provided by KOMA report in order to structure and typeset your document. Please refer to the comprehensive KOMA-Script manual [4] for information. Please also refer to the template files included in the package for some further usage instructions and hints.

### 10.2 Generating PDF/A

In order to submit your thesis to the *Hochschulschriften-Server*, it must follow the PDF/A-1 or PDF/A-2 standard (a specific form of PDF aimed at long-term archiving).
(See http://e-theses.univie.ac.at/pdf-erstellung.html for details.) With PDFLaTeX or LuaTeX, you can achieve PDF/A-1b or PDF/A-2b compliant files by means of the pdfx package. (XeTeX also works with recent versions of the pdfx package if you use the command line options `-shell-escape -output-driver="xdvipdfmx -z 0"` of xelatex; please refer to the pdfx manual for details.) If you do not use specific color profiles in your document, and provided that all your graphics follow the requirements of the PDF/A standard (all fonts embedded, no transparency groups, etc.), producing a PDF/A-1b compliant file is straightforward:

1. Use the class option `pdfa=true`.
2. Create a text file called `<name>.xmpdata` (where `<name>` is to be replaced by the name of your [master] TeX file or the produced PDF file, respectively), which contains some metadata of your document (author's name, title, keywords, etc.). This file needs to be stored next to your TeX file(s). Example `.xmpdata` files are included in the univie-ling-thesis bundle (and used in the accompanying templates); you can use them as a model and adapt them to your needs.

Note that pdfx does not verify whether the produced output really conforms to the standard. You need to use a PDF/A verification tool to ensure that. If you own Adobe Acrobat Pro, you can use its preflight tool for this task. Alternatively, a number of free validation programs and online services are available on the Internet. For detailed instructions, please refer to the pdfx package manual [6]. Furthermore, a step-by-step guide to PDF/A generation with PDFLaTeX and pdfx is available at http://www.mathstat.dal.ca/~selinger/pdfa/.

### 10.3 LyX layouts and templates

A layout for LyX (see https://www.lyx.org) can be retrieved from https://github.com/jspitz/univie-ling/raw/master/lyx/layouts/univie-ling-thesis.layout. Templates are provided as well:

- English template:
- German template:

## 11 Release History

- 2022/10/02 (v. 2.0)
  - Use l3keys rather than xkeyval for key-value option handling.
  - Fix some varioref definitions.
  - Use translator instead of translations for localization.
  - Various small class cleanups.
- 2022/09/08 (v. 1.20)
  - Load varioref `\AtBeginDocument`.
- 2022/06/18 (v. 1.19)
  - Add option `fontenc`.
  - Fix translation of English example string.
  - Add monospaced font.
- 2022/05/11 (v. 1.18)
  - Fix error with subtitles.
- 2022/02/05 (v. 1.17)
  - Fix option `apa`.
  - Omit quotation marks when title is empty.
- 2021/11/03 (v. 1.16)
  - Add `\makedeclaration` (for BA theses). See sec. 6.
  - Adjust font size and margins of BA thesis to dept. standards.
  - Add option `draftmark`. See sec. 4.4.
  - Fix grouping in `\maketitle`.
- 2021/10/19 (v. 1.15): No change to this class.
- 2021/09/01 (v. 1.14): No change to this class.
- 2020/11/11 (v. 1.13): No change to this class.
- 2020/06/25 (v. 1.12): No change to this class.
- 2020/05/05 (v. 1.11)
  - New option `polyglossia`.
  - New option `pdfa`.
- 2020/05/01 (v. 1.10): No change to this class.
- 2019/01/21 (v. 1.9): No change to this class.
- 2019/01/15 (v. 1.8): Fix title abbreviations (MA, BA).
- 2018/11/07 (v. 1.7): No change to this class.
- 2018/11/04 (v. 1.6): Remove `subexamples` environment as this is now provided by covington.
- 2018/09/03 (v. 1.5): Introduce `subexamples` environment.
- 2018/04/26 (v. 1.4): Fix full date issue in biblatex bibliography style.
- 2018/03/02 (v. 1.3): No change to this class.
- 2018/02/13 (v. 1.2): No change to this class.
- 2018/02/11 (v. 1.1): No change to this class.
- 2018/02/08 (v. 1.0)
  - Switch default bibliography style (from APA to Unified).
  - Initial release to CTAN.
- 2016/05/05 (v. 0.9)
  - Fix comma after *et al.* with biblatex-apa.
- 2016/04/30 (v. 0.8)
  - Set proper citation command for csquotes' integrated environments.
  - Improve templates.
- 2016/04/28 (v. 0.7)
  - Fix leading of thesis type on title page.
- 2016/03/23 (v. 0.6)
  - Fix the output of German multi-name citations (DGPs guidelines).
  - Extend documentation of bibliographic features.
- 2016/02/08 (v. 0.5)
  - Fix the PDF logo for PDF/A output.
  - Extend documentation of PDF/A production.
- 2016/01/25 (v. 0.4): First (documented) release.
  - Title page layout has been adapted to the new (bilingual) standards.
  - Add `bachelor` thesis type.
  - Possibility to disable some pre-loaded packages.
  - Use key=value option format.

## References
{"Source-Url": "https://ctan.math.washington.edu/tex-archive/macros/latex/contrib/univie-ling/doc/univie-ling-thesis.pdf", "len_cl100k_base": 6694, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 32259, "total-output-tokens": 7925, "length": "2e12", "weborganizer": {"__label__adult": 0.0010528564453125, "__label__art_design": 0.00904083251953125, "__label__crime_law": 0.001064300537109375, "__label__education_jobs": 0.3193359375, "__label__entertainment": 0.0008568763732910156, "__label__fashion_beauty": 0.0007810592651367188, "__label__finance_business": 0.0010395050048828125, "__label__food_dining": 0.0006060600280761719, "__label__games": 0.0025005340576171875, "__label__hardware": 0.0010843276977539062, "__label__health": 0.0007710456848144531, "__label__history": 0.0027332305908203125, "__label__home_hobbies": 0.0005674362182617188, "__label__industrial": 0.0006704330444335938, "__label__literature": 0.03143310546875, "__label__politics": 0.0009870529174804688, "__label__religion": 0.0018444061279296875, "__label__science_tech": 0.024383544921875, "__label__social_life": 0.0012359619140625, "__label__software": 0.19677734375, "__label__software_dev": 0.399169921875, "__label__sports_fitness": 0.00063323974609375, "__label__transportation": 0.0007534027099609375, "__label__travel": 0.000705718994140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27045, 0.06234]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27045, 0.20913]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27045, 0.80074]], "google_gemma-3-12b-it_contains_pii": [[0, 1309, false], [1309, 3342, null], [3342, 6253, null], [6253, 8861, null], [8861, 11499, null], [11499, 13622, null], [13622, 15331, null], [15331, 18499, null], [18499, 21309, null], [21309, 23846, null], [23846, 24987, null], [24987, 26166, 
null], [26166, 27045, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1309, true], [1309, 3342, null], [3342, 6253, null], [6253, 8861, null], [8861, 11499, null], [11499, 13622, null], [13622, 15331, null], [15331, 18499, null], [18499, 21309, null], [21309, 23846, null], [23846, 24987, null], [24987, 26166, null], [26166, 27045, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27045, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27045, null]], "pdf_page_numbers": [[0, 1309, 1], [1309, 3342, 2], [3342, 6253, 3], [6253, 8861, 4], [8861, 11499, 5], [11499, 13622, 6], [13622, 15331, 7], [15331, 18499, 8], [18499, 21309, 9], [21309, 23846, 10], [23846, 24987, 11], [24987, 26166, 12], [26166, 27045, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27045, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
53f056c238a710a0360c224ac5d9115c1f16150f
An Exploration of Tool Support for Categorical Coding

Anjo Anjewierden and Hannie Gijlers, Department of Instructional Technology, Faculty of Behavioural Sciences, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
Email: a.a.anjewierden@utwente.nl, a.h.gijlers@utwente.nl

**Abstract:** In this paper we explore tool support for categorical coding of verbal and chat data. We consider tool support for manual coding and for automatic coding by learning algorithms, and derive a socio-technical approach in which human coders and learning algorithms together address the coding task. Given that a literature study suggests researchers devise, adapt and refine a wide variety of coding schemes, a categorization support system should handle and accommodate user-defined coding schemes. Based on these ideas a prototype of the ChatIC tool was developed and evaluated with three coding schemes. For two of the coding schemes a sufficient inter-rater agreement between a human coder and the learning algorithms was reached.

**Introduction**

This paper explores the opportunities of tool support for categorical coding of verbal data and messages obtained from collaborative chats. Our main motivation is of a practical nature. More and more interaction protocols obtained from computer-mediated communication (CMC) become available, from various (on-line) sources and on various domains. Behavioral scientists who want to study these data often devise and apply their own coding schemes, and given the lack of special-purpose coding tools they are often faced with the time-consuming task of manual coding with general tools like Excel. Recently, several papers have made the learning research community aware of the possibility of applying text analysis technology to semi-automatically code verbal data (e.g. Dönmez et al., 2005). These papers study the coding problem from a language technology perspective, mainly focusing on the machine learning algorithms used.
We propose a slightly broader view by considering that going from manual coding to the application of fully automated coding technology is perhaps too big a step to take in one go. The principal point is that a behavioral scientist will generally define a coding scheme that is deemed suitable for the data and study at hand, and it is not at all clear that the categorical distinctions made in an arbitrary coding scheme can be picked up by existing language technology. The approach used by automatic coding tools is to first manually code part of the data (this is called the training phase) and then to determine the inter-rater reliability between the human and the algorithm (the test phase). The training stage is necessary as all algorithms learn from a set of coded data. For automatic coding to be feasible, it has been suggested to develop techniques that predict when the training set is large enough and it can be determined statistically that the remainder of the data can be coded automatically (Forsyth et al., 2007). In the subsequent sections we look at categorization in practice, review existing tool support for categorization, and finally describe ChatIC, our socio-technical tool for categorization support.

**Categorization in Practice**

Analysis of interaction protocols is one of the main methods researchers use to gain understanding of the processes that contribute to meaningful and effective collaborative learning (Naidu & Jarvela, 2006). The analysis of chat data often starts with the choice of, or development of, a coding scheme (Strijbos & Stahl, 2007). Next to the choice of analysis scheme, the segmentation of the discourse is an important aspect of the categorization process. Segmentation refers to the process that divides the text into units.
Coding schemes frequently use utterances as their unit of analysis, which can be defined as individual message units that can be distinguished from other units by a pause, comma, or stop (van Boxtel, van der Linden, & Kanselaar, 2000). One can argue that in chat communication utterances are partly defined by the learners themselves. However, learners' chat messages might contain inadequate punctuation, or even distinct messages within one chat line. Categorization on the utterance level is not sufficient to capture learners' knowledge-construction process or cognitive processing (van Boxtel, van der Linden, & Kanselaar, 2000; Weinberger & Fischer, 2006). In order to capture these processes a coarser-grained unit of analysis is needed. Collaborative learning is a multi-faceted activity: learners engage in communicative activities as well as domain- and task-related activities, and also have to regulate and monitor their progress. Differences in the learning context and the questions guiding the research have resulted in the development of a wide variety of coding schemes (De Wever, Schellens, Valcke, & Van Keer, 2006). Coding schemes may focus on one aspect of the collaborative learning process, combine different aspects of the collaborative process in one coding scheme, or distinguish between different dimensions that each focus on a specific aspect of the collaborative learning process (Strijbos & Stahl, 2007). Jonassen and Kwon (2001) compared the problem solving behavior of learners working in a face-to-face setting with the problem solving behavior of learners communicating through a CMC tool. They used a coding scheme based on the functional category system, developed by Poole and Holmes (2005), to study interaction in problem solving contexts and distinguish categories that are related to phases in the problem solving process. A similar learning-process-oriented approach was followed by Saab et al.
(2005), who developed a coding scheme that included a number of categories that were based on inquiry learning processes. The math moves dimension in the analytical framework developed by Strijbos and Stahl (2007) is an example of a domain-specific dimension, which distinguishes mathematical procedures like counting, using geometric expressions, and using algebraic functions. Krystyniak and Heikkinen (2007) developed a coding scheme for the analysis of verbal interactions of learners completing an inquiry assignment in a chemistry laboratory. Their coding scheme distinguishes domain- and task-specific categories like the use of chemistry concepts and laboratory equipment. The widely used, and often adapted, coding scheme for the classification of interaction developed by Bales (1970) is an example of a coding scheme focusing on communicative functions. The main categories include positive and mixed actions, attempted answers, questions, and negative and mixed actions. Each main category consists of a number of sub-categories. Kumpulainen and Mutanen (1999) included a coding scheme for language functions in their analytical framework. The language functions dimension defined by Kumpulainen and Mutanen includes categories like reasoning and evaluation that are not available in the analytical framework of Bales. Arvaja (2007) further adjusted the coding scheme of Kumpulainen and Mutanen and included a code for exemplification. The coding scheme as adjusted by Arvaja is specifically suited to study communicative activities in a knowledge construction context. Recently, more and more coding schemes use a multi-dimensional approach (Saab, Van Joolingen, & Van Hout-Wolters, 2005; Weinberger & Fischer, 2006) that incorporates communicative functions as well as more cognitive, task- or domain-related categories. This overview of coding practices shows that within the behavioural sciences a variety of coding schemes for interaction is used.
Researchers adjust each other's coding schemes based on their own research questions, and combine schemes or dimensions from different coding schemes in their analytical framework (Strijbos & Stahl, 2007).

**Categorization Support**

Based on the problems identified in the introduction, we argue that a categorization support system has to address at least the following: handle a wide range of data, support coding schemes defined by a researcher, and make the user aware of the opportunity of automatic coding when the combination of data and coding scheme allows this. In an environment dedicated to categorical coding the user should expect functions like a simple interactive coding interface, browsing previous codes, and basic statistical information, as well as more advanced functions, for example a query like "show me messages containing this term or syntactic pattern" and the ability to code all matching messages using a single directive. Parallel to active support for manual coding, an automated component can learn from the codes assigned through the interactive interface. It can suggest a category for the next message, and provide indicators, for example the kappa between itself and the coder. Such an environment is typical of what is commonly referred to as a socio-technical tool: user and system together solve the task of categorical coding. The environment sketched above deviates from machine learning approaches in a fundamental way, as it removes the distinction between the separate stages of training and testing. The user is continuously and unobtrusively informed about what the system learns. In order to determine whether the environment is realistic, we scanned the literature for papers on tool support for categorical coding with the objective of finding ideas and techniques that could be usefully integrated. MEPA (Erkens, 2002), developed at Utrecht University, is a tool for the analysis of sequences after categorical coding has been applied.
MEPA does not address the coding problem in general, although it provides some automatic support for a particular coding scheme and semi-automatic support for segmentation through a large set of handcrafted rules (Erkens et al., 2005). We consider segmentation a pre-processing step, and MEPA might play a role there. Segmentation, despite its importance, is not further discussed in the remainder of this paper. The goal of text categorization is to classify a document and assign it a theme or topic (Manning & Schütze, 1999). Superficially, text categorization (or document classification) and categorization of chats appear to be similar tasks: given a document, assign it a label or code. Automatic text categorization is successfully applied to self-contained documents, for example newspaper articles, web pages and weblog posts, with a high degree of accuracy. Self-contained documents can be described as being on a particular topic, often evidenced by the nouns they contain, and because of their length (several hundred words or more) machine learning algorithms have sufficient information to learn to classify them correctly. This contrasts with messages from chats or verbal data. These messages are short (often just a few words) and topic classification is less relevant. Categorization algorithms view a document as a set of features, and classification is performed on the features rather than the original text. The most common features are word or character n-grams (a sequence of n consecutive words or characters), and patterns involving information about the syntactic structure (e.g. a personal pronoun followed by a verb). Based on the features, a statistical or machine learning method (e.g. Naive Bayes, Markov models) is applied to derive the probability that a certain document (reduced to a set of features) belongs to one of the possible categories. Algorithms derive the relevance of the features from a manually coded training set.
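As an illustration of this framework (a minimal sketch of the general idea, not the implementation of any of the tools discussed here; category names and messages are invented), a multinomial Naive Bayes classifier over single-word features fits in a few lines:

```python
from collections import Counter, defaultdict
import math

class NaiveBayesCoder:
    """Tiny multinomial Naive Bayes over bag-of-words features.

    Illustrative only: real coding tools add n-gram and POS features,
    smoothing variants and cross-validation on top of this idea.
    """

    def __init__(self):
        self.class_counts = Counter()             # messages per category
        self.word_counts = defaultdict(Counter)   # word counts per category
        self.vocab = set()

    def train(self, message, category):
        words = message.lower().split()
        self.class_counts[category] += 1
        self.word_counts[category].update(words)
        self.vocab.update(words)

    def classify(self, message):
        words = message.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for cat, n in self.class_counts.items():
            lp = math.log(n / total)  # log prior of the category
            cat_total = sum(self.word_counts[cat].values())
            for w in words:
                # add-one (Laplace) smoothing handles unseen words
                lp += math.log((self.word_counts[cat][w] + 1)
                               / (cat_total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = cat, lp
        return best

coder = NaiveBayesCoder()
coder.train("the temperature increases", "domain")
coder.train("the temperature drops quickly", "domain")
coder.train("shall we try the next exercise", "regulatory")
coder.train("let us try another exercise", "regulatory")
print(coder.classify("the temperature decreases"))  # → domain
```

The decisive point for an interactive environment is that the counters can be updated incrementally after every keystroke, so the suggested code is always up to date.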
This general framework, including the underlying algorithms, has also been implemented for the categorization of verbal data (CodeLearner; Forsyth et al., 2007; Ainsworth et al., 2007), educational chat messages (Anjewierden, Kollöffel & Hulshof, 2007) and content-rich sentences (TagHelper; Dönmez et al., 2005). TagHelper (Rosé, 2007), developed at Carnegie Mellon, is the most visible tool for semi-automatic coding support. The website provides a download, extensive documentation and an on-line course. As input, TagHelper takes a training set in Excel. The tool implements a large number of algorithms for text categorization and has flexible mechanisms to select which features to use, e.g. words, bigrams, and part-of-speech (POS) bigrams. We conducted a number of experiments using TagHelper with our coded data. On one of our test sets, experimenting with algorithms and settings, TagHelper obtained a reasonable kappa of 0.58 on Dutch chats (TagHelper only supports English and German). TagHelper reports kappa and, for each feature, the probability that it belongs to any given category. Both the kappa and the feature probability are useful in the context of our environment. TagHelper is founded on traditional machine learning practices, including extensive cross-validation. Running it can take an extremely long time. An important lesson we learned from the experiments with TagHelper is that the environment should prefer learning algorithms that can be computed incrementally, in order to obtain interactive performance. CodeLearner is a tool under development at the University of Nottingham. The objectives and approach are very similar to those of TagHelper. In Forsyth et al.
(2007) a technique is developed such "that [CodeLearner] be able to predict from a relatively small training set what its accuracy is likely to be on a much larger testing set." Given that CodeLearner is also an off-line tool, and uses time-consuming cross-validation for the prediction of future coding accuracy, an interactive environment can only incorporate the idea of predicting future accuracy and not the implementation.

**ChatIC**

In this section we describe our implementation of a dedicated socio-technical categorical coding environment called ChatIC. Figure 1 provides an overview of the user interface. The UI is modeled around the idea of a cockpit or Intensive Care (IC) unit. On the left is a window with the messages, with small indicators in front of each individual message. On the right are two windows with aggregation indicators that provide an overview of the overall state of the coding process. Coding itself is very simple. The next message to be coded is highlighted (middle of the left window) and the user types a single character. After entering the code, all indicators visible on the screen are updated, and the next uncoded message is highlighted. ChatIC takes two inputs: a specification of the coding scheme and the messages to be coded. The latter is a file (Excel or XML) with information about the sender and the text of the message, and optionally a time-stamp, recipient and channel over which the message was conveyed. Output is a file with the coded messages that can be read by, for example, TagHelper. The coding scheme is also specified in either Excel or XML and lists the possible categories (or classes), the keystroke and a mnemonic for each category. In Figure 1, the lower right window shows the coding scheme, in this case derived from Bales (1970), and the special keyboard shortcuts. For each message there are five columns (left window of Figure 1). The first column is a colored (gray to blue) icon that indicates the state.
The possible states are coded (a checkmark icon), uncoded (bullet), message with same features as seen before (plus), ambiguous (squiggle) and ignore (cross). The color of the icon reflects a statistic about how certain the tool is that the coding is correct; the statistic is translated to a color ranging from gray (low confidence) to bright blue (high confidence). The second column is an alert indicator. If ChatIC would have assigned the same code, the alert indicator is empty; if it disagrees, the indicator is red. There are two examples of the alert indicator in Figure 1. The third column shows the assigned code. For coded messages this is the code assigned by the user; for uncoded messages it is the code suggested by the underlying algorithm. The fourth column displays the sender and the last column the text of the message.

**Turbo Coding**

Taking advantage of the functionality of the underlying infrastructure, a text analysis toolkit (Anjewierden et al., 2008), there are a number of "turbo" coding functions. One is based on entering a message and then listing all messages in the data set that contain the same features. For example, entering "temperature increases" shows all messages that contain the features "temperature" and "increase". The user can then code all resulting messages with a single action. Another turbo coding function is the application of a pattern search mechanism based on the natural language syntax of the messages. The POS pattern "<pp> <vb>" finds all messages that contain a personal pronoun (pp) immediately followed by a verb (vb), e.g. "I think the answer is 4". These functions can speed up coding significantly if the user is able to relate her coding scheme to such natural language patterns.

**Experience with Coding Schemes**

We evaluated the relation between the coding scheme and the accuracy of automatic coding. For this evaluation we selected three coding schemes.
The first coding scheme we selected is derived from the communicative functions of Bales. It was selected because coding schemes based on communicative functions are commonly used. Figure 1 shows both the coding scheme and the resulting kappa table, achieving a more than reasonable $\kappa=0.80$ using single words as features.

The second coding scheme makes a distinction between messages about the domain (“when the blinds are down the temperature decreases”), regulatory messages (“shall we try the next exercise”), technical messages (“could you move the window, I cannot read it”), and off-task or social messages (“well done partner”). Our experience is that humans mainly look at the terms to correctly apply this coding scheme. Learning algorithms should pick up the terms related to one of the four categories relatively easily.

The third coding scheme refines the domain-oriented messages into two categories: observations and interpretations. An observation is when the learner sees something in the learning environment and says: “the temperature increases”. An interpretation occurs when the learner draws a conclusion or presents a hypothesis: “when the blinds are down the temperature decreases”. This coding scheme should present a challenge for algorithms, because many of the terms (e.g. temperature) are as likely to occur in an observation as in an interpretation. In general, humans use the syntactic structure to make the distinction; for example, “if ... then” is a strong pattern for an interpretation.

The messages for the second and third coding schemes were obtained by transcribing face-to-face interaction between 16 groups of three learners trying to solve a learning task. A student, who had experience with coding similar protocols using Excel, coded all 22,536 messages manually with the first version of ChatIC. She reported that the user interface is far superior to Excel, and spontaneously stated that after a while the tool predicted how to code the next message.
Figure 2 shows the inter-rater agreement as a kappa table between the human coder (rows) and a Naive Bayes classifier with single words as features (columns).

Figure 2. Kappa table for the second coding scheme.

The messages classified as domain were thereafter manually coded, following the idea of a multi-dimensional coding scheme, as observation, interpretation, or other domain. Figure 3a shows the kappa table when we take the binary classification observation vs. non-observation using the same algorithm as above. Although $\kappa=0.42$ might be reasonable in some contexts, the large number of false positives (482) compared to the number of true positives (266) indicates the algorithm does not pick up the distinction between observations and other domain-oriented messages. The results for interpretations are similar ($\kappa=0.27$).

In order to test our hypothesis that interpretations can be recognized using syntactic patterns (if ... then ...), we implemented a feature extractor which defines a feature for each occurrence of [A, B] when word A occurs before word B in a message. The most frequent of these word collocations were selected. Figure 3b displays the kappa table; in this case, interestingly, the algorithm misses only 30 of the 542 interpretations found by the user, although it also incorrectly classifies a large number of messages as interpretations. The overall conclusion is that this coding scheme is too subtle for machine learning algorithms to classify correctly.

**Advanced Indicators**

The development of ChatIC as an interactive socio-technical tool provided two major challenges: defining useful indicators and developing algorithms to compute them in real-time.
One of the most used indicators for behavioral researchers is Cohen's kappa, and the underlying algorithms have been programmed such that they can compute kappa for Naive Bayes (unigram, bigram, POS-bigrams, and collocations) in about 0.2 seconds for our largest test set (22,500 messages). Cross-validation and extensive test/train splits are not possible in real-time, and neither are the techniques proposed by CodeLearner to predict future accuracy.

ChatIC is interactive, and the position on the screen the user watches most is the next message to be coded. Given that ChatIC predicts the category of the next message, a graph with the moving average of the accuracy of this prediction is a useful indicator of future automatic coding accuracy. We have designated this the peek-one indicator; an example of the resulting graph is shown in Figure 4 (this indicator replaces the kappa table when the user selects the Peek-one tab in the top right window of Figure 1). The peek-one indicator displays the accuracy on the y-axis (0...1). The solid horizontal line is the estimate, which is derived from the kappa table using Pearson's chi-square. On the far right of Figure 4 the peek-one indicator is about 0.7, which suggests that ChatIC correctly predicts 7 out of 10 codings. Although peek-one is not as rigorous as the approach of CodeLearner, the idea of a leveling off of automated coding accuracy is still helpful.

In this particular case, the indicator reveals an insight into the underlying data. The vertical lines in the figure demarcate the start and the end of protocols. Reading from left to right, the peek-one indicator goes up slowly to approximately 0.55 and, once the second protocol is taken into account, drops to about 0.40. After a while it goes up again, reaching its maximum peek-one accuracy of about 0.75.
The explanation, provided by the coder, is that the first protocol is derived from management discussions about a certain task, while the second and third protocols are from field workers on the same task. Apparently, managers and field workers use different terminology, and it takes the learning algorithm a little while to adapt. The significance in the context of ChatIC as a socio-technical tool is that a researcher gets enough information to detect and, possibly, explain the behavior of the tool in relation to the data.

The peek-one statistic for the second coding scheme is shown in Figure 5. It illustrates that the algorithm quickly reaches an accuracy of between 0.8 and 0.9 and does not improve with more messages being coded. The combination of a high enough kappa and a stable peek-one can be a reason for the researcher to stop coding and perhaps decide to automatically code the remainder of the messages.

**Trust and Interactive Feature Feedback**

An important aspect of a socio-technical tool is trust: the user should feel confident about how the system behaves. In ChatIC this is translated at the macro-level through the kappa and peek-one indicators. The user can see how these indicators change over time and whether the performance improves. In addition, the kappa table is clickable. If the user selects a cell, for example the cell with manual=domain and algorithm=regulative, ChatIC lists all corresponding messages. At the micro-level trust is supported by predicting the category for uncoded messages and through the alert indicators for discrepancies.

<table>
<thead>
<tr>
<th>DOM A</th>
<th>door zonlicht wordt het toch niet warmer</th>
</tr>
</thead>
<tbody>
<tr>
<td>DOM C</td>
<td>ja maar als het koud is als het winter is en je hebt het dicht dan wordt het warmer</td>
</tr>
</tbody>
</table>

Figure 6. Intra-message feature highlighting.

We are experimenting with a very detailed indicator of the relation between the features and coding. Figure 6 gives an example.
All words in the message are highlighted as either bold, normal, or gray. If a word is displayed as bold, the conditional probability of the word belonging to the assigned category is higher than the prior probability for the same category, and there is no other category for which this holds. If there are multiple categories for which the condition holds, the word is in the normal typeface, and if the condition does not hold it is gray. The idea is to emphasize the words that cause the algorithm to assign the code. The first line in Figure 6, “because of sunlight it does not become warmer”, for example, suggests a semantic relation between the terms sunlight and warmer if we ignore the function words. We hope that techniques like this can be used to identify semantic patterns which go beyond n-gram statistics.

**Conclusions**

In this paper we have explored software tool support for categorical coding. Currently, tool support is minimal. TagHelper is about the only tool that is readily available and provides a useful, albeit time-consuming, service to behavioral scientists. We developed the concept of viewing categorical coding as a socio-technical system in which the human researcher and learning algorithms together address the task of categorical coding. After developing this concept we implemented a first prototype of the idea as a tool called ChatIC, and evaluated it on coding schemes in common use by behavioral scientists. For some coding schemes, the inter-rater agreement between the human coder and learning algorithms was sufficient to consider the application of automatic coding; for another coding scheme the algorithm performed poorly. An important aspect of ChatIC is the notion of single-message and aggregated indicators. We illustrated the code next message, feature highlight, kappa table, and peek-one indicators.
It is our feeling that a tool like ChatIC is of potential use to everybody involved in categorical coding, and an open source version is available through http://edm.gw.utwente.nl. One possible scenario is to use ChatIC for initial coding and TagHelper, or other traditional machine learning tools, for off-line cross-validation and experimentation with a variety of learning algorithms and settings if (part of) the data has to be categorized automatically.

**Acknowledgments**

The authors thank Guido Bruinsma who generously provided a data set, Carolyn Rosé who provided information about TagHelper, Natalia Szargiej for her assistance during the coding process, and three anonymous reviewers for their helpful comments.
LINGO 8.0 TUTORIAL

Created by: Kris Thornburg and Anne Hummel

Table of Contents

- Introduction to LINGO 8.0
- Creating a LINGO Model
- Solving a LINGO Model
- Using Sets in LINGO
- The LINGO Data Section
- Variable Types in LINGO
- Navigating the LINGO Interface
- LINGO Operators and Functions
- Common LINGO Error Messages
- LINGO Programming Examples

Introduction to LINGO 8.0

LINGO is a software tool designed to efficiently build and solve linear, nonlinear, and integer optimization models. LINGO 8.0 includes several new features, including:

- A new global solver to confirm that the solution found is the global optimum,
- Multistart capability to solve problems more quickly,
- Quadratic recognition and solver to identify quadratic programming (QP) problems,
- A faster and more robust Dual Simplex solver,
- An improved integer solver to enhance performance in solving many types of problems,
- Linearization capability to transform common nonsmooth functions to a series of linear functions,
- Infeasible and unbounded analytical tools to help identify model definition problems,
- A decomposition feature to identify if a model contains independent submodels,
- A threadsafe DLL for various classes of models, and
- More fun than ever before!

Creating a LINGO Model

An optimization model consists of three parts:

- **Objective function** – This is a single formula that describes exactly what the model should optimize.
A general manufacturing example of an objective function would be to minimize the cycle time for a given product.

- **Variables** – These are the quantities that can be changed to produce the optimal value of the objective function. For example, when driving a car, the duration of the trip \( t \) and the speed at which it is taken \( v \) determine the distance \( d \) that can be traveled.

- **Constraints** – These are formulas that define the limits on the values of the variables. If an ice cream store is determining how many flavors it should offer, only a positive number of flavors is feasible. This constraint could be expressed as \( \text{Flavors} \geq 0 \).

A sample model for cookie production by two bakers at a bakery is given by:

```
! A cookie store can produce drop cookies and decorated cookies,
  which sell for $1 and $1.50 apiece, respectively. The two bakers
  each work 8 hours per day and can produce up to 400 drop cookies
  and 200 decorated cookies. It takes 1 minute to produce each drop
  cookie and 3 minutes to produce each decorated cookie. What
  combination of cookies produced will maximize the bakers' profit?;

MAX = 1 * Drop + 1.5 * Deco;
Drop <= 400;
Deco <= 200;
1/60 * Drop + 3/60 * Deco <= 16;
```

Several other items must be noted about this model:

- Comments in the model are initiated with an exclamation point (!) and appear in green text.
- LINGO specified operators and functions appear in blue text.
- All other text is shown in black.
- Each LINGO statement must end in a semi-colon (;).
- Variable names are not case-sensitive and must begin with a letter (A-Z). Other characters in the variable name may be letters, numbers (0-9), or the underscore character (_). Variable names can be up to 32 characters in length.
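The driving example above can also be written as a complete LINGO model. The following sketch is not part of the original tutorial; the 120-mile distance and 65 mph speed limit are invented values for illustration:

```
! Sketch: minimize trip duration T, where distance D equals
  speed V times time T, the trip covers at least 120 miles,
  and the speed limit is 65 mph;
MIN = T;
D = V * T;
D >= 120;
V <= 65;
```

Note that because D = V * T multiplies two variables, this model is nonlinear, and LINGO would classify it as an NLP rather than an LP.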
Solving a LINGO Model

Once the LINGO model has been entered into the LINGO Model window, the model can be solved by clicking the *Solve* button on the toolbar, by selecting LINGO | Solve from the menus, or by using the ctrl+s keyboard shortcut. LINGO will notify you of any errors it has encountered. The best way to get information about these errors is to consult the *Error Messages* section in the software’s proprietary tutorial. If no errors are found, then the LINGO Solver Status window appears.

This window provides information on the number of nonlinear, integer, and total variables in the model; the nonlinear and total number of constraints used in the model; and the number of nonlinear and total nonzero variable coefficients used. The Solver Status box in this window details the model classification (LP, QP, ILP, IQP, NLP, etc.), the state of the current solution (local or global optimum, feasible or infeasible, etc.), the value of the objective function, the infeasibility of the model (the amount by which constraints are violated), and the number of iterations required to solve the model. The Extended Solver Status box details similar information for the more advanced branch-and-bound, global, and multistart solvers.

By closing this window, you can then view the Solution Report window. This window shows the values of each variable that will produce the optimal value of the objective function. The reduced cost for any variable that is included in the optimal solution is always zero. For variables not included in the optimal solution, the reduced cost shows how much the value of the objective function would decrease (for a MAX problem) or increase (for a MIN problem) if one unit of that variable were to be included in the solution. For example, if the reduced cost of a certain variable was 5, then the optimal value of the MAX problem would decrease by 5 units if 1 unit of the variable were to be added.
The Slack or Surplus column in the Solution Report shows how tight the constraint is. If a constraint is completely satisfied as an equality, then slack/surplus is zero. If slack/surplus is positive, then this tells you how many more units of the variable could be added to the optimal solution before the constraint becomes an equality. If slack/surplus is negative, then the constraint has been violated. The Dual Price column describes the amount to which the value of the objective function would improve if the constraining value is increased by one unit. Using Sets in LINGO LINGO allows you to group many instances of the same variable into sets. For example, if a model involved 27 delivery trucks, then these 27 trucks could be described more simply as a single set. Sets may also include attributes for each member, such as the hauling capacity for each delivery truck. Sets may be either primitive or derived. A primitive set is one that contains distinct members. A derived set, however, contains other sets as its members. To use sets in a model, the special section called the SETS section must be defined before any of the set members are used in the model’s constraints. This section begins with the tag SETS: and ends with the tag ENDSETS. A primitive set could be defined as follows: ``` SETS: Trucks/TR1..TR27/:Capacity; ENDSETS ``` This set is given the setname “Trucks” and contains 27 members, identified by TR1 – TR27. The attributes for each member are called “Capacity.” The derived set is defined similarly, but must also include the parent set list. An example of a derived set could be: ``` SETS: Product/X Y/; Machine/L M/; Make(Product Machine)/X L, X M, Y M/; ENDSETS ``` This set declaration defines two primitive sets, Product and Machine, and one derived set called Make. The Make set is derived from the parent sets Product and Machine. Members are specified as shown. Notice that a fourth Product-Machine combination, Y L, could be theoretically possible. 
This example does not allow for such a combination. If all combinations of the parent sets are possible, then no explicit member list need be defined. An attribute list for the derived set can also be included in the same way as for a primitive set.

Several set looping functions are also available for use in LINGO. These functions are as follows:

- @FOR – generates constraints over members of a set.
- @SUM – sums an expression over all members of the set.
- @MIN – computes the minimum of an expression over all members of the set.
- @MAX – computes the maximum of an expression over all members of the set.

Each of the above looping functions has a similar form of syntax, and the looping functions can even be nested. Examples of expressions using each type of looping function are as follows:

- This @FOR statement sets the hauling capacity for all 27 delivery trucks in the Trucks set to at most 3000 pounds:

```
@FOR(Trucks(T): Capacity(T) <= 3000);
```

- This @SUM statement calculates the total hauling capacity from the individual trucks:

```
TOTAL_HAUL = @SUM(Trucks(J): Capacity(J));
```

- These @MIN and @MAX statements find the extreme hauling capacity levels from the individual delivery trucks:

```
MIN_HAUL = @MIN(Trucks(J): Capacity(J));
MAX_HAUL = @MAX(Trucks(J): Capacity(J));
```

The LINGO Data Section

LINGO provides a separate section called the DATA section in which values can be defined for different variables. Set members can be initialized in this section, attributes of the sets can be defined, or scalar variable parameters can be assigned values as well. The DATA section is defined after the SETS section in the model. The section begins with the tag DATA: and ends with the tag ENDDATA.
Statements within the DATA section follow the syntax:

object_list = value_list;

The object list contains the names of the attributes or of the set whose values are being initialized. The value list assigns the values to the specified members of the object list. The following examples show two ways to use the DATA section in LINGO. In each example, the X and Y attributes of SET1 are being initialized to [1, 2, 3] and [4, 5, 6], respectively. The first example defines values for each attribute separately:

```
SETS:
 SET1 /A, B, C/: X, Y;
ENDSETS
DATA:
 X = 1, 2, 3;
 Y = 4, 5, 6;
ENDDATA
```

The next example shows how one statement can be used to assign values to the two attributes simultaneously. Each row assigns different values to the X, Y pair:

```
SETS:
 SET1 /A, B, C/: X, Y;
ENDSETS
DATA:
 X, Y = 1, 4,
        2, 5,
        3, 6;
ENDDATA
```

When parameters or attributes are defined in the DATA section of a model, a feature called What-if Analysis can be used to examine the effects of varying the value of the parameter or attribute. For example, if the inflation rate is most likely going to fall between 2% and 6%, the parameter can be defined as follows in the DATA section:

```
DATA:
 INFLATION_RATE = ?;
ENDDATA
```

When LINGO encounters the ? in the DATA section, it will prompt the user to enter a value for the parameter. The user can then enter values between 2% and 6%, and LINGO will solve the model using that “what-if” value.

All the elements of an attribute can be initialized to a single value using the DATA section as well. The following example shows how to assign the value of 20 to all seven members of the NEEDS attribute and 100 to all seven members of the COST attribute:

```
SETS:
 DAYS / MO, TU, WE, TH, FR, SA, SU/: NEEDS, COST;
ENDSETS
DATA:
 NEEDS, COST = 20, 100;
ENDDATA
```

Data values can also be omitted from the DATA section of a LINGO model to indicate that LINGO is free to determine the values of those attributes itself.
The following example shows that the first two values of the attribute CAPACITY have been initialized to 34, but the last three values have been left for LINGO to determine:

```
SETS:
 YEARS /1..5/: CAPACITY;
ENDSETS
DATA:
 CAPACITY = 34, 34, , , ;
ENDDATA
```

Variable Types in LINGO

All variables in a LINGO model are considered to be non-negative and continuous unless otherwise specified. LINGO’s four variable domain functions can be used to override the default domain for given variables. These variable domain functions are:

- @GIN – any non-negative integer value
- @BIN – a binary value (i.e., 0 or 1)
- @FREE – any real value, positive or negative
- @BND – any value within the specified bounds

Similar syntax is used for the @GIN, @BIN, and @FREE variable domain functions. The general form for the declaration of a variable X using any of these functions is

```
@FUNCTION(X);
```

The @BND function has a slightly modified syntax, which includes the upper and lower bounds for the acceptable variable values. The general form for the declaration of a variable X between a lower bound and an upper bound is given by

```
@BND(lowerBound, X, upperBound);
```

Navigating the LINGO Interface

Operations in LINGO can be carried out using commands from the menus, toolbars, or shortcut keys. There are five menus in the main LINGO window: the File, Edit, LINGO, Window, and Help menus. The following list details the commands in the File menu.
Shortcut keys are included in parentheses when available:

- **New (F2)**: Open a new model window
- **Open (F3)**: Open a saved file
- **Save (F4)**: Save a current model
- **Save As (F5)**: Save a current model to a new filename
- **Close (F6)**: Close the current model
- **Print (F7)**: Prints the current window’s content
- **Print Setup (F8)**: Configures printer preferences
- **Print Preview (Shift+F8)**: Displays the window content as it would look if printed
- **Log Output (F9)**: Opens a log file to log output to the command window
- **Take Commands (F11)**: Runs a command script contained in a file
- **Import LINDO File (F12)**: Converts a LINDO file into a LINGO model
- **Export File**:
- **License**:
- **Database User Info**:
- **Exit (F10)**: Closes LINGO

The Edit menu contains the following commands:

- **Undo (ctrl+z)**: Undoes the last action
- **Redo (ctrl+y)**: Redoes the last undo command
- **Cut (ctrl+x)**: Copies and deletes highlighted text
- **Copy (ctrl+c)**: Copies highlighted text to the clipboard
- **Paste (ctrl+v)**: Pastes the clipboard’s contents into the document
- **Paste Special**:
- **Select All (ctrl+a)**: Selects all of the contents of the current window
- **Find (ctrl+f)**: Searches the document for a specific text string
- **Find Next (ctrl+n)**: Searches the document for the next occurrence of a specific text string
- **Replace (ctrl+h)**: Replaces a specific text string with a new string
- **Go To Line (ctrl+t)**: Moves the cursor to a certain line number
- **Match Parenthesis (ctrl+p)**: Finds the selected parenthesis’s mate
- **Paste Function**: Pastes a syntax template for a specific LINGO @function
- **Select Font (ctrl+j)**: Configures the font for a selected portion of text
- **Insert New Object**: Puts an OLE object in the document
- **Links**: Creates links to external objects
- **Object Properties**: Defines the properties of an embedded object

The LINGO menu contains the following commands:

- **Solve (ctrl+s)**: Solves the model
- **Solution (ctrl+o)**: Makes a solution report window for the current model
- **Range (ctrl+r)**: Creates a range analysis report for the current window
- **Options (ctrl+i)**: Sets system options
- **Generate**: Generates the algebraic form of the model
- **Picture (ctrl+k)**: Displays a graphic of the model’s matrix
- **Debug (ctrl+d)**: Identifies errors in infeasible and unbounded models
- **Model Statistics (ctrl+e)**: Reports the technical details of the model
- **Look (ctrl+l)**: Creates a formulation report for the current window

The Window menu contains the following commands:

- **Command Window (ctrl+1)**: Opens a window for command-line operation of LINGO
- **Status Window (ctrl+2)**: Opens the solver's status window
- **Send to Back (ctrl+b)**: Places the current window behind all other open windows
- **Close All (ctrl+3)**: Closes all open windows
- **Tile (ctrl+4)**: Places open windows in a tiled arrangement
- **Cascade (ctrl+5)**: Places all open windows in a cascaded arrangement
- **Arrange Icons (ctrl+6)**: Aligns icon windows at the bottom of the main LINGO window

The Help menu contains the following commands:

- **Help Topics**: Opens LINGO’s manual
- **Register**: Registers your version of LINGO online
- **AutoUpdate**: Provides prompts to download software updates
- **About LINGO**: Displays software information

A single toolbar located at the top of the main LINGO window contains many of the same commands as listed above. These commands can be accessed simply by using the mouse to click on the icon representing them. The following pictures detail which icons correspond to which commands.

LINGO Operators and Functions

LINGO provides a vast array of operators and functions, making it a useful problem-solving tool. A selection of the primary operators and functions is given below. There are three types of operators that LINGO uses: arithmetic, logical, and relational operators.
The arithmetic operators are as follows:

- Exponentiation: ^
- Multiplication: *
- Division: /
- Addition: +
- Subtraction: -

The logical operators are used in set looping functions to define true/false conditions:

- #LT#: TRUE if the left argument is strictly less than the right argument, else FALSE
- #LE#: TRUE if the left argument is less-than-or-equal-to the right argument, else FALSE
- #GT#: TRUE if the left argument is strictly greater than the right argument, else FALSE
- #GE#: TRUE if the left argument is greater-than-or-equal-to the right argument, else FALSE
- #EQ#: TRUE if both arguments are equal, else FALSE
- #NE#: TRUE if both arguments are not equal, else FALSE
- #AND#: TRUE only if both arguments are TRUE, else FALSE
- #OR#: FALSE only if both arguments are FALSE, else TRUE
- #NOT#: TRUE if the argument immediately to the right is FALSE, else FALSE

The relational operators are used when defining the constraints for a model. They are as follows:

- The expression is equal: =
- The left side of the expression is less than or equal to the right side: <=
- The left side of the expression is greater than or equal to the right side: >=

The following list contains a sampling of mathematical functions that can be used in LINGO:

- @ABS(X) – returns the absolute value of X
- @SIGN(X) – returns -1 if X is negative and +1 if X is positive
- @EXP(X) – calculates e^X
- @LOG(X) – calculates the natural log of X
- @SIN(X) – returns the sine of X, where X is the angle in radians
- @COS(X) – returns the cosine of X
- @TAN(X) – returns the tangent of X

LINGO also contains a plethora of financial, probability, and import/export functions. These are commonly used in more advanced models, which are beyond the intended scope of this tutorial.

Common LINGO Error Messages

LINGO provides a variety of error messages useful for debugging a developing model.
The most common errors include the following:

- **7: Unable to open file: filename** – Retype the filename correctly
- **11: Invalid input: A syntax error has occurred** – Check the line LINGO flags for missing semicolons, etc.
- **12: Unmatched parenthesis** – Close the parenthesis set
- **15: No relational operator found** – Make sure all constraints contain =, <=, or >=
- **44: Unterminated condition** – Put a colon at the end of each conditional statement in a set operator
- **50: Improper use of the @FOR() function** – @FOR() functions cannot be nested inside other set operators
- **68: Multiple objective functions in model** – Only one is allowed, please
- **71: Improper use of a variable domain function (e.g., @GIN, @BIN, @FREE, @BND)** – Check the syntax
- **81: No feasible solution found** – Check the model's consistency and constraints
- **82: Unbounded solution** – Add constraints
- **102: Unrecognized variable name: variable name** – Check spelling
- **108: The model's dimensions exceed the capacity of this version** – Upgrade to the full version or use Excel
- **164: Invalid LINGO name** – Create a name that conforms to LINGO's naming conventions

LINGO Programming Examples

A common programming model is the knapsack problem, which deals with maximizing the utility of a limited loading capacity. This example shows how to properly set up a knapsack problem:

```
SETS:
  ITEMS / ANT_REPEL, BEER, BLANKET, BRATWURST,
          BROWNIES, FRISBEE, SALAD, WATERMELON/:
    INCLUDE, WEIGHT, RATING;
ENDSETS

DATA:
  WEIGHT RATING =
     1   2
     3   9
     4   3
     3   8
     3  10
     1   6
     5   4
    10  10;
  KNAPSACK_CAPACITY = 15;
ENDDATA

MAX = @SUM( ITEMS: RATING * INCLUDE);
@SUM( ITEMS: WEIGHT * INCLUDE) <= KNAPSACK_CAPACITY;
@FOR( ITEMS: @BIN( INCLUDE));
```

Another common programming model is the transportation problem. Transportation problems deal with transporting goods from one location to another at minimal cost. This example shows how to model a simple transportation problem:

```
MODEL:
! A 6 Warehouse 8 Vendor Transportation Problem;
SETS:
  WAREHOUSES / WH1 WH2 WH3 WH4 WH5 WH6/ : CAPACITY;
  VENDORS    / V1 V2 V3 V4 V5 V6 V7 V8/ : DEMAND;
  LINKS( WAREHOUSES, VENDORS): COST, VOLUME;
ENDSETS

! The objective;
MIN = @SUM( LINKS( I, J): COST( I, J) * VOLUME( I, J));

! The demand constraints;
@FOR( VENDORS( J):
  @SUM( WAREHOUSES( I): VOLUME( I, J)) = DEMAND( J));

! The capacity constraints;
@FOR( WAREHOUSES( I):
  @SUM( VENDORS( J): VOLUME( I, J)) <= CAPACITY( I));

! Here is the data;
DATA:
  CAPACITY = 60 55 51 43 41 52;
  DEMAND   = 35 37 22 32 41 32 43 38;
  COST = 6 2 6 7 4 2 5 9
         4 9 5 3 8 5 8 2
         5 2 1 9 7 4 3 3
         7 6 7 3 9 2 7 1
         2 3 9 5 7 2 6 5
         5 5 2 2 8 1 4 3;
ENDDATA
END
```
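The knapsack model above maximizes total rating subject to the 15-unit weight limit. As a sanity check of what the LINGO solver should report, a short brute-force search (written in Python, outside the LINGO toolchain; the variable names are ours) over the same eight items confirms the optimum:

```python
from itertools import combinations

# Item data (weight, rating) copied from the LINGO knapsack example above.
items = {
    "ANT_REPEL": (1, 2), "BEER": (3, 9), "BLANKET": (4, 3),
    "BRATWURST": (3, 8), "BROWNIES": (3, 10), "FRISBEE": (1, 6),
    "SALAD": (5, 4), "WATERMELON": (10, 10),
}
CAPACITY = 15

# Enumerate every subset of items and keep the best feasible one.
best_rating, best_pack = 0, ()
names = list(items)
for r in range(len(names) + 1):
    for pack in combinations(names, r):
        weight = sum(items[n][0] for n in pack)
        rating = sum(items[n][1] for n in pack)
        if weight <= CAPACITY and rating > best_rating:
            best_rating, best_pack = rating, pack

print(best_rating)  # 38: everything except SALAD and WATERMELON fits exactly
```

With only eight binary variables this exhaustive search is instant; for larger instances, the @BIN integer programming formulation in LINGO is the practical route.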
Investigating Type Declaration Mismatches in Python

Pascarella, Luca; Keshav Ram, Achyudh; Nadeem, Azqa; Bisesser, Dinesh; Knyazev, Norman; Bacchelli, Alberto

DOI: 10.1109/MALTESQUE.2018.8368458
Publication date: 2018
Document Version: Peer reviewed version
Published in: 2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)

Citation (APA)
Important note: To cite this publication, please use the final published version (if applicable). Please check the document version above.

Copyright: Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Takedown policy: Please contact us and provide details if you believe this document breaches copyrights. We will remove access to the work immediately and investigate your claim.

Investigating Type Declaration Mismatches in Python

Luca Pascarella, Achyudh Ram
Delft University of Technology, The Netherlands
{L.Pascarella, A.R.Keshavram-1}@tudelft.nl

Azqa Nadeem, Dinesh Bisesser, Norman Knyazev
Delft University of Technology, The Netherlands
{A.Nadeem, S.P.D.Bisesser, N.Knyazev}@student.tudelft.nl

Alberto Bacchelli
University of Zurich, Switzerland
bacchelli@ifi.uzh.ch

Abstract—Past research provided evidence that developers making code changes sometimes fail to update the related documentation, thus creating inconsistencies that may contribute to faults and crashes. In dynamically typed languages, such as Python, an inconsistency in the documentation may lead to a mismatch in type declarations that is only visible at runtime. With our study, we investigate how often the documentation is inconsistent in a sample of 239 methods from five Python open-source software projects.
Our results highlight that more than 20% of the comments are either partially defined or entirely missing and that almost 1% of the methods in the analyzed projects contain type inconsistencies. Based on these results, we create a tool, PyID, to detect type mismatches in Python documentation early, and we evaluate its performance against our oracle.

I. INTRODUCTION

Creating a proper software system is a big challenge [3], [16]. To support, quicken, and ease development, software engineers often rely on the work of external developers who write software, such as libraries and remote services, also known as Application Programming Interfaces (APIs). These APIs often come with documentation as an aid to understanding how to use them properly [21]. Several researchers have conducted interviews, surveys, and experiments to determine how much of this documentation is enough [28]. Recently, de Souza et al. investigated [4] the impact of the agile product development method on software documentation, confirming that source code and annexed code comments are the most important artifacts used by developers in maintenance processes. Forward and Lethbridge [13] conducted a survey discovering that even dated documentation may be relevant; nevertheless, relying on out-of-date documentation can be dangerous for developers. In fact, developers are sometimes found to be dangerously careless when it comes to keeping this documentation updated [13], [8]. This behavior leads to poor or unaligned documentation, which may create delays in software development or, even worse, faults in software artifacts [20], [25]. The problems created by unaligned documentation become even more significant both (1) for dynamically typed languages, such as Python, where code comments provide valuable information regarding method specification for both internal and external developers [29], and (2) for API documentation where the source code is not available (e.g., web APIs). Shi et al.
explored the co-evolution of the API and related documentation of big libraries, finding that the code of two nearby releases may evolve dramatically, thus also requiring a crucial evolution of the annexed documentation, which underlines the relevance of the topic. Zhou et al. proposed one of the most recent investigations of the frequent inconsistencies between source code and API documentation [31]. They proposed an automated approach based on program comprehension and natural language processing to address inconsistencies in methods' parameter constraints and exception-throwing declarations. Such a solution becomes particularly useful when integrated into IDEs, as it creates timely alerts asking developers to handle mismatches between the types declared in the documentation and the types referenced in the source code. Despite the innovative technique proposed by Zhou et al., their model is limited to statically typed languages, such as Java. Nevertheless, developing code in dynamically typed languages makes the code even more prone to hidden vulnerabilities stemming from code-comment inconsistencies [26]. In fact, a type mismatch may trigger an error far from where it initially occurred or, even worse, it may never trigger a runtime error while failing silently, with serious consequences. In the work we present in this paper, we take a first step toward investigating and supporting the alignment between documentation and source code in dynamically typed languages. In particular, we investigated the alignment between methods and comments in five popular OSS Python projects. We started with an empirical analysis of how careful open-source software (OSS) developers are about maintaining aligned documentation. For this purpose, we manually inspected the alignment between method bodies and docstrings of 239 methods from the aforementioned OSS Python projects.
Our results show that the Python developers of the five OSS systems we sampled do care about documentation. In fact, even though developers left the documentation of more than 20% of the analyzed public methods incomplete or entirely missing, less than 1% of the analyzed methods contain mismatches between declared and used types. This finding empirically underlines how important documentation is deemed to be by developers of dynamically typed languages. Since even 1% of unaligned methods may become problematic, we designed PyID, an OSS tool based on machine learning aimed at helping developers detect type mismatches in documentation early.

In a well-documented project, different kinds of source code comments can support developers performing different tasks, such as understanding a method's behavior, being aware of the authors' rationale, and finding additional references [18]. Source code comments can even be used to automatically generate external documentation, as is often the case for external APIs, which rely on documentation generated from comments using automatic tools such as Sphinx [31]. However, comments may not always be present, complete, or updated to support developers in their tasks, thus hindering the fluency of task execution. In particular, suboptimal comments (and, consequently, the automatically generated documentation) may become very problematic when the source code is not visible (e.g., for certain web APIs), because they may lead a developer to write code that is prone to hidden issues. In this paper, we focus on misaligned documentation and on type declaration mismatches between code and comments.

A. Type mismatch in statically typed languages

In a statically typed language (e.g., Java, C, C++), the type of a variable is usually explicitly defined by the author and checked statically at compile time.
Even though less restrictive statically typed languages (e.g., Scala) offer a type inference mechanism to deduce types from variable assignments, a compile-time check may still prevent erroneous operations. In such a scenario, the author is forced to respect the variable type during math operations, comparisons, assignments, etc. However, this should not be considered a limitation, because almost every language offers a workaround to explicitly force otherwise disallowed operations (generally known as a cast). For example, an expert developer may force the assignment of a floating-point value to an integer variable, knowing the consequences of such an operation. The practical benefit for a developer programming in a statically typed language is that the compiler takes care of type checking, issuing appropriate warnings in case of problems; thus, classes of bugs may be caught in advance, during the compilation phase. In the case of statically typed languages, an automatic tool, such as the one proposed by Zhou et al. [31], is able to conduct advanced static analysis to detect type mismatches between code and documentation.

B. Type mismatch in dynamically typed languages

In a dynamically typed language (e.g., Python), variable types are usually not explicitly declared in the code but are evaluated at runtime, offering developers the possibility to produce compact, flexible code. However, this advantage becomes a source of mistakes when, relying only on the documentation, a developer reuses third-party code in the form of a library or API. In that scenario, a type mismatch in the API documentation may create an unexpected condition in the deployed software, creating potential faults. Listing 1 shows an ad hoc method and an associated docstring that explains the required parameters. A docstring is a special kind of comment in the Python language, usually used to document methods, classes, or, in general, Python code.
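To make the silent-failure risk described above concrete, consider a minimal sketch (the function and names are hypothetical, not taken from the paper) where the documentation claims a boolean parameter but the code compares against a string:

```python
def greet(excited):
    """Return a greeting.

    The docstring (hypothetically) claims `excited` is a boolean,
    but the implementation compares it against the string 'True'.
    """
    if excited == 'True':
        return 'Hello!'
    return 'Hello.'

# A caller trusting the documentation passes a real boolean:
# True == 'True' is False, so the "wrong" branch runs with no
# error raised anywhere -- the mismatch fails silently.
print(greet(True))
print(greet('True'))
```

No exception is ever thrown; the only symptom is incorrect behavior far from the method itself, which is exactly the class of bug the paper targets.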
Similar to Javadoc [7], the docstring is used to generate API documentation for Python code. The main difference between the two is that the former relies on the strict Doc Comments format to generate the API documentation, whereas the latter supports many formats, e.g., Numpydoc, Epytext, or reStructuredText, which look inherently different from each other (this may create confusion when multiple formats are chosen for the same project). The example in Listing 1 collects issues that may be present in a real scenario. In particular, the method requires three parameters: param1, param2, and param3. All parameters are string types; however, only the first has a correct description, while the second is missing and the third is declared as a boolean but used as a string. An API derived from such an example may lead to wrong usage, thus creating an unexpected crash. The method nonisomorphic_tree of the project NETWORKX represents a realistic example of the confusion that can be induced in an external developer: the parameter create is used as a string, but this information is not clear from reading only the natural-language documentation. Even though documentation misalignment is a critical issue for dynamically typed languages, recent studies have so far only focused on investigating type mismatches in statically typed languages [31]. Overall, with our work, we specifically focus on an empirical evaluation of how much developers care about creating aligned documentation. Our aim is to get a comprehensive view of developer practices in open-source Python projects by focusing on mismatches between method code and documentation. Moreover, we create a tool to help automatically find certain cases of misaligned documentation.
Besides improving our scientific understanding of this type of artifact, it is our hope that our work can also be beneficial, for example, to developers who want automated support to recognize misalignments and improve their documentation quality [13].

Listing 1. Example of a Python method.

```python
def method(param1, param2='', param3='True'):
    """Returns a concatenated string of given elements.

    Parameters
    ----------
    param1 : string
        first parameter to concatenate
    param2 : string
        add a dot termination to the concatenated string
    param3 : boolean
        "False" inclusions according to the right case

    Returns
    -------
    R : Concatenated object
    """
    temp = param1 + param2
    if param3 == 'True':
        temp = temp + ' '
    return temp
```

²https://docs.oracle.com/javase/1.5.0/docs/tooldocs/solaris/javadoc.html
³https://google.github.io/ProtocolBuffer

III. METHODOLOGY

A. Research Questions

The goal of our work is to understand and quantify misaligned software documentation in a dynamically typed language, i.e., Python. In addition, we investigate the performance of an automatic tool aimed at preventing type declaration mismatches in Python. Although the importance of correctly updated documentation is widely recognized, time pressure and strict deadlines may lead developers to sometimes forget to update documentation. As argued in the previous sections, this behavior may be particularly problematic for dynamically typed languages, as it may lead to hidden issues discovered only at runtime. We start with an exploratory study aimed at understanding how frequently the documentation and the corresponding source code are not aligned in popular Python projects; this leads to our first research question:

**RQ1.** How often are inconsistencies present in OSS Python documentation?
The results for RQ1 support the claim that having aligned documentation is important for Python developers: even though developers left the documentation of more than 20% of the analyzed public methods incomplete or entirely missing, less than 1% of the analyzed methods contain mismatches between declared and used types. However, these misalignments are still present and may conceal a runtime crash; this leads to our second research question:

**RQ2.** How effective is an automated tool in discovering mismatches in type definitions between comments and source code?

B. Selection of subject systems

To conduct our analysis, we focused on a single dynamically typed programming language (i.e., Python, one of the most popular dynamically typed languages) and on OSS projects whose source code is publicly available. In the selection phase of a representative subset of five open-source projects (selected from 150,000 active Python repositories hosted by GitHub), we introduced two constraints to filter out undesired projects: the first discards every project that does not use a single documentation format, and the second keeps only projects that adopt the Numpydoc technical documentation format, based on reStructuredText syntax elements. To perform this selection, we combined the results of a manual inspection of the most popular Python projects hosted by GitHub with the list of open-source projects that adopt the Numpydoc style according to the Sphinx website. With this process, we selected five heterogeneous software systems: Scikit-Learn, Scikit-Image, Matplotlib, NetworkX, and NeuPy. Subsequently, we downloaded the latest snapshot of each and used a source-line counting tool to retrieve basic statistics such as the number of code lines, number of comments, and number of methods.
Table I

<table>
<thead>
<tr>
<th>Project</th>
<th>Commits</th>
<th>Contributors</th>
<th>Code</th>
<th>Comments</th>
<th>Methods</th>
<th>Samples</th>
</tr>
</thead>
<tbody>
<tr>
<td>SciKit–Learn</td>
<td>22257</td>
<td>954</td>
<td>104843</td>
<td>62667</td>
<td>3391</td>
<td>96</td>
</tr>
<tr>
<td>SciKit–Image</td>
<td>9467</td>
<td>233</td>
<td>40591</td>
<td>24229</td>
<td>1563</td>
<td>44</td>
</tr>
<tr>
<td>Matplotlib</td>
<td>22759</td>
<td>637</td>
<td>120820</td>
<td>54598</td>
<td>1758</td>
<td>50</td>
</tr>
<tr>
<td>NetworkX</td>
<td>5366</td>
<td>193</td>
<td>51620</td>
<td>40591</td>
<td>1563</td>
<td>44</td>
</tr>
<tr>
<td>NeuPy</td>
<td>713</td>
<td>4</td>
<td>19162</td>
<td>8612</td>
<td>168</td>
<td>5</td>
</tr>
<tr>
<td>Overall</td>
<td>61K</td>
<td>2K</td>
<td>339K</td>
<td>191K</td>
<td>9K</td>
<td>239</td>
</tr>
</tbody>
</table>

C. Documentation definitions

To answer the first research question, we need to recognize different categories of comments that may express the correctness of declared types. In dynamically typed languages, the documentation of a method contains a high-level description and may or may not contain the type declarations of the required parameters. In Python, Numpydoc (a widely adopted docstring format style) encourages developers to explicitly declare parameter types, aimed at reducing usage confusion. In such a scenario, an ad hoc reStructuredText parser can be used to catch the declarations of variable types, if present. Consequently, we may run into comments of three types:

- **Complete** – A docstring that describes all the parameters of a method.
- **Partial** – A docstring that describes at least one parameter (but not all) of a method.
- **Missing** – A docstring that does not describe any parameter of a method.
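A toy sketch of how such a classification could work (this is not the authors' parser; the function and heuristic are ours): count the parameters documented in a Numpydoc-style `Parameters` section and compare them against the function signature:

```python
import inspect
import re

def classify_docstring(func):
    """Label a function's docstring as Complete, Partial, or Missing,
    based on which signature parameters appear as 'name : type' lines
    in the docstring (a deliberately simplified heuristic)."""
    params = list(inspect.signature(func).parameters)
    doc = inspect.getdoc(func) or ""
    # Numpydoc documents each parameter on a line like "name : type".
    documented = set(re.findall(r"^\s*(\w+)\s*:", doc, re.MULTILINE))
    covered = [p for p in params if p in documented]
    if len(covered) == len(params):
        return "Complete"
    if covered:
        return "Partial"
    return "Missing"

def example(a, b):
    """Add two numbers.

    Parameters
    ----------
    a : int
        first addend
    """
    return a + b

print(classify_docstring(example))  # "Partial": b is undocumented
```

A real parser must also handle the multiple docstring formats mentioned earlier; as the paper notes, a comment in a non-Numpydoc format is treated as Missing because the parser cannot recognize it.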
In the first two cases (Complete and Partial), a docstring that follows the Numpydoc format style may or may not have type declaration inconsistencies; we distinguish between two alternative cases:

- **Valid** – A Numpydoc docstring, with or without minor variations from the Numpydoc specification (such as missing or additional white-spaces), which represents complete and exhaustive documentation.
- **Inconsistent** – Any type inconsistency in describing the data type of a parameter in a docstring. This may range from completely incorrect data types (e.g., a comment stating that param is an int while it is used as a str in the code) to describing the data type in language that may be ambiguous to a reader (e.g., stating that param is a tensor or a color raises questions about whether these are strings, sequences, or objects).

(A comment can be classified as Missing either when the developers have really not stated the parameters of the method, or when the comment is written in a format other than Numpydoc, in which case the docstring parser fails and assumes that the comment is absent.)

D. A dataset of categorized methods

Sampling approach. We used random sampling to produce a statistically significant set of code comments from each of the five subject OSS projects. To calculate the size of such a statistically significant sample set, we use simple random sampling without replacement, according to the formula:

\[ n = \frac{N \cdot \hat{p}\hat{q} \cdot (z_{\alpha/2})^2}{(N - 1) E^2 + \hat{p}\hat{q} \cdot (z_{\alpha/2})^2} \]

where \( \hat{p} \) is a value between 0 and 1 that represents the proportion of methods with valid docstrings, \( \hat{q} = 1 - \hat{p} \) is the proportion of methods not containing such documentation, \( N \) is the size of the population, \( z_{\alpha/2} \) corresponds to the confidence level (which we took as 95%), and \( E \) is the allowed error (a 5% margin). Combining all the methods with associated docstrings for the given projects, we get \( N = 8,443 \) methods.
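Plugging the stated values into the sample-size formula reproduces the reported sample size (a short numerical check; rounding up to the next whole method is our assumption):

```python
import math

def sample_size(N, p, E=0.05, z=1.96):
    """Sample size for simple random sampling without replacement
    from a finite population of N, with proportion p, margin E,
    and z-score z for the chosen confidence level."""
    q = 1 - p
    n = N * p * q * z**2 / ((N - 1) * E**2 + p * q * z**2)
    return math.ceil(n)

print(sample_size(8443, 0.8))  # 239, matching the paper
```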
Out of \( N \), 80\% \( (\hat{p} = 0.8) \) of the docstrings were valid, meaning that they could be processed further, while 20\% \( (\hat{q} = 0.2) \) were invalid docstrings. The remaining parameters were set to \( E = 0.05 \) and \( z_{\alpha/2} = 1.96 \). The calculated value of \( n \) is 239.

Manual classification. Once the sample of methods with internal documentation was selected, each of them had to be manually classified according to our definitions. For each method in the sample set, we proceeded with a manual inspection by (1) counting the number of parameters in both comment and code, and (2) inspecting the code to find the data type associated with each argument of the method. Four authors of this paper conducted a multiphase iterative content analysis session [14] by manually inspecting the sample set of 239 Python methods. In the first iteration, a small number of 5 files was randomly picked to conduct a preliminary analysis aimed at understanding how to correctly proceed with the classification of the docstring documentation. In the second step, three authors of this paper independently annotated all sampled methods, and in the third phase, a fourth author verified the correctness of the classification. This third step was necessary to verify agreement; it highlighted only negligible differences.

E. Automated detector of type mismatches

For the second research question, we investigate to what extent and with what accuracy an automatic tool can recognize parameter type mismatches in Python methods. To accomplish this task, we adopted a combination of classification techniques (e.g., based on deep learning approaches [10]) that led to the design of an open-source tool.

Python Inconsistency Detector. The dynamically typed nature of Python makes type checking a difficult problem [11]. To this aim, we used MyPy [15], a Python library that statically checks Python data types.
MyPy builds an AST-like structure and verifies whether the intended types (expressed in its specific format) match the actual types in the source code. We built an open-source tool named PyID as a chain of scripts to determine whether a given code-comment pair is aligned. To this aim, we followed these logical steps:

1. A parser reads the docstring of each method and collects the data type of each parameter;
2. A script identifies the methods with complete docstrings, filtering out undesired docstrings;
3. For complete cases, a script generates and inserts MyPy-supported comments at the beginning of each method, stating the name and data type of each parameter;
4. The updated source code with MyPy-supported comments is then fed to MyPy to run the static type checking;
5. Finally, the output of MyPy is collected and checked for any detected type inconsistencies.

MyPy comment generator. The output of the docstring parser is a list of tuples, one per method. Each tuple contains the source file's path, the method name, the line number of the method declaration, the list of parameter types as read from the docstring, and a tag stating whether the comment is complete, partial, or missing. To infer the data type of each parameter (mostly described in natural language), we used an approximate sentence matching technique, otherwise known as fuzzy sentence matching [30]. This is a machine learning information retrieval approach used to assign a label to a natural language sentence by evaluating the meaning of recurrent terms. The technique relies on the NLTK Python library [2] and uses several word transformations, such as:

- **tokens:** divide text into words, numbers, or other "discrete" units of text.
- **stems:** words that have had their "inflected" pieces removed based on simple rules, approximating their core meaning.
- **lemmas:** words that have had their "inflected" pieces removed based on complex databases, providing accurate, context-aware meaning.
- **stopwords:** low-information, repetitive, grammatical, or auxiliary words that are removed from a corpus before performing approximate matching.

After this pre-processing step is completed and the data type is derived from the documentation, an additional script transforms the Python source file into a format compatible with MyPy's input by adding the detected type next to each method.

Classification evaluation. To evaluate the effectiveness of our automated technique at classifying code comments into our taxonomy, we measured three well-known Information Retrieval (IR) metrics for the quality of results [22], namely precision, recall, and F-measure.

IV. RESULTS

Table II

<table>
<thead>
<tr>
<th>Library</th>
<th>Complete</th>
<th>Partial</th>
<th>Missing</th>
<th>Other</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scikit-Learn</td>
<td>87</td>
<td>3</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>Scikit-Image</td>
<td>43</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Matplotlib</td>
<td>30</td>
<td>8</td>
<td>5</td>
<td>7</td>
</tr>
<tr>
<td>NetworkX</td>
<td>40</td>
<td>3</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>NeuPy</td>
<td>4</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Overall</td>
<td>204</td>
<td>18</td>
<td>7</td>
<td>10</td>
</tr>
</tbody>
</table>

**RQ1 - Inconsistencies in OSS Python documentation**

The first research question investigates the frequency of inconsistencies in type declarations in the sampled Python projects. Table II contains the results of the manual inspection, reporting the absolute number of Complete, Partial, Missing, and Other methods. We observe that the developers tend to produce software with well-formatted documentation; indeed, only one analyzed project, MATPLOTLIB, lacks documentation for 5 methods.
In addition, only two projects, SCIKIT-LEARN and MATPLOTLIB, have a generic format inconsistency in a method's documentation. These observations suggest that, contrary to previous research [13], the open-source developers of the selected Python projects follow good practices when creating and updating their documentation. Going more in-depth into the results by measuring the number of partial comments among the Complete and Partial methods, we found that less than 1% of the analyzed methods are classified as Inconsistent. In addition, for four projects the number of partial comments is proportional to the number of missing comments, ranging between 5–20%, with the exception of MATPLOTLIB. SCIKIT-IMAGE, together with NEUPY, exhibited a high number of complete comments, at approximately 75%. Overall, the high number of comments classified as Missing may be attributed to developers intentionally writing comments that do not adhere to the Numpydoc standard, while the docstring may still provide all required information. The non-negligible frequency of Partial comments may be important for developers to prevent misunderstanding.

**Result 1:** The manual inspection of 239 public methods highlights that, overall, code and comments in the sampled methods are well aligned, with less than 1% of the methods' documentation being Inconsistent and less than 20% being either Partial or Missing.

Table III. Performance evaluation of PyID

|              | Precision | Recall | F-measure |
|--------------|-----------|--------|-----------|
| Scikit-Learn | 0.38      | 0.71   | 0.50      |
| Scikit-Image | 0.80      | 0.80   | 0.80      |
| Matplotlib   | 0.75      | 0.60   | 0.67      |
| NetworkX     | 0.67      | 0.67   | 0.67      |
| NeuPy        | 1.00      | 1.00   | 1.00      |
| Overall      | 0.58      | 0.71   | 0.64      |

**RQ2 - Automatically detecting type mismatches**

PyID has two main modules whose detection performance can be measured: the docstring parser and the inconsistency detector.
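The F-measure reported in Table III is the harmonic mean of precision and recall; a quick check against two of the rows:

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall (the F1 score).
    return 2 * precision * recall / (precision + recall)

print(f"{f_measure(0.38, 0.71):.2f}")  # 0.50  (Scikit-Learn row)
print(f"{f_measure(0.58, 0.71):.2f}")  # 0.64  (Overall row)
```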
In order to evaluate the performance of PyID, we ran it on the methods sampled from the dataset as described in Section **III-D**. Due to space limitations, we only report the average precision and recall achieved by the first part of PyID, which are both 99%. These values are achieved because PyID uses both an AST and a less stringent regular expression to parse the docstring; indeed, for SCIKIT-IMAGE, NETWORKX, and NEUPY, the docstring parser successfully parses all Complete, Partial, and Missing comments, achieving a precision and recall of 100%. Table **III** shows the performance of the second part of PyID, aimed at detecting type mismatches. The results highlight a significant variation between projects, which is due to MYPY not being able to properly detect some derived types. However, the performance is promising for projects such as SCIKIT-IMAGE and NEUPY, where the F-measure is between 80-100%.

**Result 2:** The results achieved by executing PyID on a manually classified real dataset show promising performance, with an overall F-measure of 64%.

**V. Threats to validity**

**Sample validity.** One potential criticism of a scientific study conducted on a small sample of projects is that it could deliver little knowledge. This study highlights, from an external perspective, the characteristics and distributions of five open-source projects. Historical evidence shows otherwise: Flyvbjerg gave many examples of individual cases contributing to discoveries in physics, economics, and social science [7]. To answer our research questions, we read and inspected more than 8,000 lines of code and comments written by more than 500 contributors (see Section **III-D**).

**Taxonomy validity.** To proceed with the manual classification, we defined a tree-based taxonomy (see Section **III-C**).
To ensure that the defined categories were exhaustive enough to classify every type of documentation status, we proceeded with a multi-phase content analysis session where we iteratively explored every documentation condition. We obtained an agreement of 100% on the 139 methods considered, where the provided categories covered all inspected cases.

**External Validity.** This category refers to the generalizability of our findings. While in the context of this work we analyze software projects of different size and scope, we limit our focus to Python systems because the tool used in our analysis infers and checks types only for this programming language. Thus, we cannot claim generalizability with respect to systems written in different languages, nor to projects belonging to industrial environments. Future work can be devoted to improving these aspects of our study.

**VI. Related work**

Tools exist to type-check code written in statically typed languages and to detect code-comment inconsistencies. Tan et al. present @tComment to test method properties about null values and related exceptions [27]. The authors evaluated the tool on seven open-source projects and found 29 inconsistencies between Javadoc comments and method bodies. Khamis et al. introduce JavaDocMiner, a heuristic-based approach for assessing the quality of in-line documentation, targeting both the quality of the language and the consistency between Java source code and its comments. Further, generalized frameworks like iComment by Tan et al. combine Natural Language Processing, Machine Learning, Statistics, and Program Analysis techniques for the same purpose of detecting inconsistencies between comments and source code. There are several studies on code comments and on measuring the quality of commented code. Stamelos et al. tested the hypothesis that software quality grows as the code is more thoroughly commented and suggest a simple ratio metric between code and comments.
Fluri et al., to investigate whether developers comment their code, present a heuristic approach to associate comments with code. Steidl et al. investigate the quality of source code comments. While for a dynamically typed language such as Python a high number of mismatches was expected, our results for RQ1 appear to be similar to those of Tan et al., who, with iComment, also found that under 1% of code-comment pairs contain mismatches.

VII. CONCLUSION

The documentation of a software system is an important source of information for developers, for example when the system is a third-party library or API. In particular, correct documentation becomes crucial when developers cannot read the source code and the language is dynamically typed, because the incorrect use of data types surfaces only at runtime and causes unexpected crashes. In this work, we investigated how often developers leave their documentation inconsistent and how accurately an automatic tool based on machine learning can detect documentation type mismatches. For this purpose, we manually analyzed 239 Python methods, discovering that less than 20% of them are Partial or Missing and that less than 1% of the Complete and Partial methods contain type mismatches. This seems to indicate that the developers of the selected systems are aware of the importance of good documentation in their context. In addition, we designed PyID with the purpose of helping developers keep their documentation and code well aligned. Testing PyID on the manually classified methods, we reached an overall precision and recall of up to 58% and 71%, respectively.

ACKNOWLEDGMENT

Bacchelli gratefully acknowledges the support of the Swiss National Science Foundation through SNF Project No. PP00P2_170529.

REFERENCES
Events, presentations and articles

For Partner meeting agendas and notes, see Samvera Partner Meetings. For the full Samvera Meetings and Events Diary, see the Samvera Events Calendar. See also Related Conferences for a list of other conferences taking place where a Samvera presence might be desirable (and/or to avoid scheduling conflicts).

- Forthcoming Events
- Past events (with presentations if available)
- Related Conferences

Forthcoming Events

One or more members of the Samvera Community will be attending the following events. Events listed with the Samvera icon are organised by or on behalf of Samvera; those labelled were organized by or on behalf of Hydra (our previous name).

- Samvera Virtual Connect 2020 | 14-15 May 2020 | Online
- PASIG | Postponed to 22-24 September 2020 | Biblioteca Digital Memoria de Madrid, Spain
- Open Repositories 2020 | Postponed to 2021 | Stellenbosch University, South Africa
- Hyrax & Hyku User Workshop | 6-7 August 2020 | Notch8, San Diego, CA
- Intro Samvera Camp | 29 September - 2 October 2020 | San Diego
- Samvera Developer Congress | 16-18 November 2020 | Virtual | Now virtual and moved to November due to the COVID-19 disruption
- Samvera Partner meeting | 26 October 2020 | UC Santa Barbara Library | CANCELLED due to the COVID-19 disruption; arrangements will be made to replace this with a virtual meeting
- Samvera Connect 2020 | 27-30 October 2020 | Hilton Santa Barbara Beachfront Resort, Santa Barbara, CA | CANCELLED due to the COVID-19 disruption; arrangements will be made to replace this with a virtual meeting
- Open Repositories 2021 | 31 May - 3 June 2021 | Stellenbosch University, South Africa

Past events (with presentations if available)

- Samvera Partner meeting | 27-28 April 2020 | Online
- Introductory Samvera Camp | 14-17 April 2020 | University of North Carolina at Chapel Hill | CANCELLED due to the COVID-19 emergency
- Code4Lib 2020 | 8-11 March 2020 | Westin Hotel, Pittsburgh
- Solar Vortex - Samvera Developer Congress | 22-24 January 2020 | UC San Diego
- CNI Fall 2019 Membership Meeting | 9-10 December 2019 | Omni Shoreham Hotel, Washington, DC
- Samvera Connect 2019 | 22-25 October 2019 | Washington University in St Louis, MO
- Samvera Developer Congress | 21 October 2019 | Washington University in St Louis
- Samvera Partner meeting | 21 October 2019 | Washington University in St Louis
- 2019 DLF Forum | 13-17 October 2019 | Tampa, Florida
- Samvera European Regional Group
- Notch8 Hyrax and Hyku User Workshop | 26-27 September | San Diego, CA
- Linked Data working meeting | 24-27 September | Stanford University Libraries
- Samvera Camp (introductory) | 9-12 September 2019 | UCLA
- 2019 IIIF Conference | 24-28 June 2019 | Göttingen, Germany
- Open Repositories 2019 | 10-13 June 2019 | Universität Hamburg, Hamburg, Germany
- Samvera West Coast Regional Meeting | 23 May | UC San Diego
- Samvera Partner meeting | 29-30 April 2019 | IUPUI, Indianapolis
- Samvera Virtual Connect 2019 | 23-24 April 2019 | Online
- Samvera European Regional Group | 28 March 2019 | University of Oxford
- LDCX (invitation only) | 25-27 March 2019 | Stanford University
- Samvera European Regional Group | 13 December 2018 | Senate House, University of London
- CNI Fall 2018 Membership meeting | 10-11 December 2018 | Washington DC
- 2018 DLF Forum | 14-18 October 2018 | Las Vegas
- Samvera Connect 2018 | 9-12 October 2018 | University of Utah, Salt Lake City
- Samvera Partner meeting | 8 October 2018 | University of Utah, Salt Lake City
- Sandy Metz/POOD II | 3-5 October 2018 | Durham, NC
- Samvera Camp | 24-27 September 2018 | Durham, NC | Registration: https://samveracamp-duke2018.eventbrite.com
- Samvera European Regional Group | 20 September 2018 | Senate House, University of London
- Samvera Virtual Connect 2018 | 11 July 2018 | On-line: full details at the link above
- Repository Fringe | 2-3 July 2018 | Royal Society of Edinburgh
- Open Repositories 2018 | 4-7 June 2018 | Montana State University, Bozeman, Montana
  - Samvera workshop for OR18: Presentation
- Advanced Samvera Camp | 7-9 May 2018 | Minneapolis, MN
- Samvera Camp | 23-26 April 2018 | Portland, Oregon
- Samvera European Regional Group | 19 April 2018 | Senate House, University of London
- Samvera Developers' Congress | 29-30 March 2018 | Stanford University
- Samvera Partner meeting | 29-30 March 2018 | Stanford University
- LDCX (invitation only) | 26-28 March 2018 | Stanford University
- Samvera West Coast Regional meeting | 16 March 2018 | Henry Madden Library, Fresno State in Fresno, CA
- Samvera Europe meeting | 14 December 2017 | LSE, London
- Samvera Connect 2017 | 6-9 November 2017 | Evanston, IL
- Fedora and Samvera Camp | 4-8 September | University of Oxford, UK
- Samvera European Regional Group | 27 July 2017 | LSE, London
- Samvera Virtual Connect 2017 | 18 July 2017 | Online
- Open Repositories 2017 | 26-29 June 2017 | Brisbane, Australia
- Advanced Hydra Camp | 8-10 May 2017 | Minneapolis, MN
- Hydra Camp | 17-20 April 2017 | Emory University
- Hydra Developers' congress | 30-31 March 2017 | Stanford University
- Hydra Partner meeting | 30-31 March 2017 | Stanford University
- LDCX (invitation-only event) | 27-29 March 2017 | Stanford University
- West Coast Regional Meeting | 10 February 2017 | UC Santa Cruz
- Hydra Connect 2016 | 3-6 October 2016 | Boston, MA
- Penn State developer event | 19-23 September 2016 | State College, PA
- Archivematica Camp | 24-26 August 2016 | University of Michigan School of Information
- Hydra Virtual Connect 2016 | 7 July 2016 | On-line: full details at the link above
- Open Repositories 2016 | 13-16 June 2016 | Dublin, Ireland | Annual open source conference for the repository community
- Hydra Developers' congress | May 2016 | University of Michigan, Ann Arbor
- Hydra Developers' congress | 24-25 March 2016 | Stanford University
- Hydra Power Steering meeting (invitation-only event) | 24-25 March 2016 | Stanford University
- LDCX (invitation-only event) | 21-23 March 2016 | Stanford University
- West Coast Regional Meeting | 26 February 2016 | UCSB
- Hydra Camp | 22-25 February 2016 | UCSB
- Hydra Developers' congress | 3-5 February 2016 | UCSD
- Hydra Connect 2015 | 21-24 September 2015 | Minneapolis, Minnesota
- Open Repositories 2015 | 8-11 June 2015 | Indianapolis, Indiana | Annual open source conference for the repository community
  - List of Hydra-related presentations etc at OR2015
- Hydra Northeast (US) Regional Meeting | 7 May 2015 | Brown University, Providence, Rhode Island
- Hydra Europe Symposium 2015 | 23-24 April 2015 | London School of Economics and Political Science (LSE), London
- Hydra Camp London | 20-23 April 2015 | London School of Economics and Political Science (LSE), London
- Hydra Developers' congress | 26-27 March 2015 | Stanford University
- Hydra Power Steering meeting (invitation-only event) | 26-27 March 2015 | Stanford University
- LAMDevConX (invitation-only event) | 23-25 March 2015 | Stanford University
- Hydra: many heads, many connections. Enriching Fedora Repositories with ORCID (slides and recording) | 2 April 2015 | Duraspace Hot Topics Series: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO
- Advanced Blacklight Workshop | 13 March 2015 | Yale University, New Haven, Connecticut
- Hydra Camp | 9-12 March 2015 | Yale University, New Haven, Connecticut
- Post-Code4Lib Hydra Developers' meeting | 12-13 February 2015 | UoPDX Library, Portland, Oregon
- Code4Lib | 9-12 February 2015 | Portland Hilton (Portland, OR)
  a. RailsBridge (Carolyn Cole)
  b. Intro to Blacklight (Justin Coyne, Mark Bussey, Jessie Keck)
  c. GeoHydra / GeoBlacklight (Jack Reed, Darren Hardy, Bess Sadler)
  d. Dive into Hydra (Possibly with a focus on installing Worthwhile) (Justin Coyne, Mark Bussey, Bess Sadler)
  e. Orienting newcomers and/or Q&A - possible times
- 10th International Digital Curation Conference (IDCC) | 9-12 February 2015 | Royal College of General Practitioners, 30 Euston Square (London, UK)
  a. Meeting institutional needs for digital curation through shared endeavour: the application of Hydra/Fedora at the University of Hull (Chris Awre)
- CNI Fall Member Meeting | 8-9 December 2014 | Washington DC
  - Digital Repository Development at Yale Library, from Michael Dula, CTO of Yale Library
- DLF Forum 2014 | 27-29 October 2014 | Atlanta, Georgia
  There is a wealth of Hydra and Hydra-related content at DLF this year. If you won't be able to make it to Hydra Connect 2 (Cleveland, Sept 30-Oct 2), or if you want to carry on the conversation with the broader community, you should consider booking early to DLF: they tend to fill up relatively early.
  - Hydra Installfest: Monday, October 27, 1:30-3:30pm, Salons 1,2,3
  - Developing With Hydra: Tuesday, October 28, 1:30-3:30pm, Salons 1,2,3
  - DevOps for Digital Repositories
  - Sustaining Open Source Software
  - The Future of Fedora: Update on Fedora 4
  - Avalon Media System: Implementation and Community
  - Spotlight: A Self-Service Tool for Showcasing Digital Collections
  - Placing the IR Within the User's Workflow: Connecting Hydra-based Repositories with Zotero
  - Running Up That Hill: The Academic Preservation Trust: A Community Based Approach to Digital Preservation
- Hydra UK Regional meeting | 22 October 2014 | London School of Economics and Political Science
- AMIA 2014 | 8-11 October 2014 | Savannah, Georgia | Association of Moving Image Archivists (AMIA) annual conference
- Hydra Connect #2 | 30 September - 3 October 2014 | Case Western University, Cleveland, Ohio | Annual gathering of the Hydra Community - 4 days including workshops, plenary and breakout sessions
- Innovatics conference | 27-29 August 2014 | Santiago and Valparaiso, Chile | Dive into Hydra workshop (in Spanish) as part of the Innovatics conference; keynote speaker (Bess Sadler)
- Hydra Camp | 26-29 August 2014 | Princeton, New Jersey | Four day Hydra Developer training class - syllabus
- Open Repositories 2014 | 9-13 June 2014 | Helsinki, Finland | Annual open source conference for the repository community. See below the list of Hydra-related talks accepted, with timings and links to abstracts on the OR2014 conference website.
  Workshops
  - WK1A and WK2A: GIS in Digital Repositories, 9th June, 9:00am-5:00pm - Abstract
  - WK1G: Introduction to Hydra for Developers, 9th June, 9:00-12:30 - Abstract
  - WK2E: DevOps for Digital Libraries, 9th June, 1:30-5:00 - Abstract
  - WK2F: Implementing RDF metadata in Hydra, 9th June, 1:30-5:00 - Abstract
  - WK3B: Hydra for Managers, 9th June, 5:30-7:00 - Abstract
  OR2014 Plenary
  - P3C: Self-deposit, discovery, and delivery of scientific datasets using GeoHydra, 10th June, 3:00-4:15 - Abstract
  - P4A: Spotlight: A Blacklight Plugin for Showcasing Digital Collections, 11th June, 11:15-12:30 - Abstract
  - P4B: Hacking User Experience in a Repository Service: ScholarSphere as a Case Study, 11 June, 11:15-12:30 - Abstract
  - P4B: Distributed Repositories of Medieval Calendars and Crowd-Sourcing of Transcription, 11 June, 11:15-12:30 - Abstract
  - P4B: From Local Practice to Linked Open Data: Rethinking Metadata in Hydra, 11 June, 11:15-12:30 - Abstract
  - P4B: Building Successful, Open Repository Software Ecosystems: Technology and Community, 11th June, 11:15-12:30 - Abstract
  - P7A: From library repository to university-wide service: Stanford Digital Repository as a case study, 12th June, 11:15-12:30 - Abstract
  - P7C: Leveraging Agile & Resourcing for Success - Hydramata, Avalon, Fedora 4 and Islandora (panel), 12th June, 11:15-12:30 - Abstract
  - P8A: Audio and Video Repositories at Scale: Indiana University's Media Digitization and Preservation Initiative, 12th June, 1:30-2:20 - Abstract
  - P8B: Sustaining your open source project through training: a Hydra case study, 12th June, 1:30-2:20 - Abstract
  - P8B: Hydramata: Building a Nimble Solution with Hydra to Transcend the Institutional Repository, 12th June, 1:30-2:20 - Abstract
  Fedora Interest Group
  - IG3A (Fedora/Hydra): Facing the Hydra alone: three case studies, 13th June, 11:15-12:30 - Abstract
  - IG3A (Fedora/Hydra): Extending the Hydra Head to Create a Pluggable, Extensible Architecture: Diving into the Technology of Hydramata, 13th June, 11:15-12:30 - Abstract
  - IG3A (Fedora/Hydra): Issues with Fedora & Hydra, experiences from a research-data-driven implementation, 13th June, 11:15-12:30 - Abstract
  - IG4A (Fedora): Avalon Media System project update: a Hydra solution for digital audio and video access, 19th June, 1:30-2:45 - Abstract
  Posters and Demonstrations - 10th June, 6:30-9:00 - Abstracts
  - Hydra Europe
  - Avalon Media System demonstration
- ORCID Outreach and CodeFest Meeting | 21-22 May 2014 | Chicago | Outreach meeting for customers, ORCID members, and researchers highlighting solutions and adoption strategies
  - ORCID Identifiers in Repositories Panel: ORCID Hydra Plug-in
- Hydra Camp | 6-9 May | Minneapolis | Four day Hydra developer training
- Hydra Developers Congress | 24-25 April 2014 | Stanford University | Two day in-person Hydra Developer coding fest
- Hydra "Power" Steering meeting (invitation only) | 24-25 April 2014 | Stanford University | Two day strategic planning event - Hydra Steering Group and invited advisers only
- LAMDevConX (invitation only) | 21-23 April 2014 | Stanford University | Library, Archive, and Museum technical summit - invitation only event
- European Hydra Camp | 8-11 April 2014 | Trinity College Dublin | Four day Hydra developer training course
- European Hydra Symposium
- Code4Lib 2014 | 24-27 March | Raleigh, NC | Annual grassroots library technologist conference in the US
  Pre-conference workshops:
  - Intro to Blacklight
  - Blacklight hackfest
  - Rails Bridge intro to programming
  - GeoHydra: Managing Geospatial Content
  Scheduled talks:
  - Building for others (and ourselves): the Avalon Media System
  - Sustaining your Open Source project through training
  - Behold Fedora 4: The Incredible Shrinking Repository!
- ORCID Dev Congress | 4-7 March 2014 | Chicago Dev House | ORCID and Hydra Plug-in integration; Adopter and Contributor Dev Meeting to jump start adoption
- Hydra Connect January 2014 | 21-24 January 2014 | UCSD, San Diego
- CNI Fall 2013 | 9-10 Dec 2013 | Washington, DC
  - Collaborating to Manage Research Data (Notre Dame and Hydra Partners) (pptx)
- DLF 2013 | 4-6 Nov 2013 | Austin, TX | Hydra Activities at DLF2013
- EOD Conference | 17-18 October 2013 | National Library of Technology, Prague, Czech Republic | From content silos to an integrated digital library (Royal Library project presentation, including slides about Hydra)
- HydraCamp | 1-4 October 2013 | Case Western Reserve University, Cleveland, OH | HydraCamp Syllabus - Fall 2013
- DARIAH General VCC Meeting | 5-6 September 2013 | Copenhagen, Denmark | Using Hydra/Fedora for digital library infrastructure (5 minute presentation)
- OR13 Conference | 7-13 July 2013 | Prince Edward Island, Canada | Hydra Activities at OR13
  - State of the HydraSphere for OR13 (pptx)
  - 24x7 Presentation: "Testing Your Archive: Delivering on the Promise of Persistence"
  - Duke University Libraries Preservation Repository (pdf)
- OAi8 | 19-21 June 2013 | University of Geneva, Switzerland
  - Hydra poster
- Hydra Camp | 8-12 April 2013 | Dublin
- LibDevConX^4 | 25-27 March 2013 | Stanford
- Code4Lib 2013 | February 11-14 | Chicago, IL
- Hydra UK | 22 November 2012 | LSE, London
  - Introduction to Hydra by Chris Awre
  - Hydra in Hull by Richard Green
  - Hydra@GCU: a repository for audio and video by Caroline Webb
  - Hydra at LSE by Ed Fay
  - Hydra at Oxford by Neil Jefferies
  - Hydra UK discussion notes
  - Twitter hashtag for ongoing comment and discussion - #hydrauk
  - Ariadne event report
- Hydra Webinar Series | Fall 2012
  - DuraSpace Webinar Hub
  - Series Announcement - Hydra Webinar Series - 2012
  - Webinar 1 - Introduction to Hydra, presented by Tom Cramer (Sept 25, 2012): Watch the Webinar Recording; View the slides (slideshare)
  - Webinar 2 - A Case Study on General Repository Applications, presented by Rick Johnson and Richard Green (Oct 16, 2012): Watch the Webinar Recording; View the slides (slideshare)
  - Webinar 3 - Hydra Technical Deep Dive, presented by Matt Zumwalt (scheduled for Oct 30, 2012, but recorded separately due to Superstorm Sandy): Watch the Webinar Recording; Watch the Q&A follow-up session; View the slides (slideshare)
- OR12 Conference | 9-13 July 2012 | Edinburgh | Schedule of Hydra Events at OR12 in Edinburgh
  - Intro to Hydra for OR12 PreConWorkshop by Chris Awre
  - HydraSphere: One Body, Many More Heads, One Year Later - Fedora User Group panel on Hydra, by Tom Cramer
  - Hylandora by Tom Cramer and Jonathan Green
  - Hydra Framework Technical Update for OR12 by Matt Zumwalt
  - Seaside Research Portal: A Best of Breed Approach to Digital Exhibits and Collection Management by Rick Johnson
  - Towards a mature, multi-purpose repository for the institution by Chris Awre
- LibDevConX^3 | 26-28 March 2012 | Stanford
- CNI 2011 | December 12-13, 2011 | Arlington, VA | Hydra: One Body, Many Heads for Repository-Powered Library Applications
- DLF 2011 Forum | 31 Oct - 1 November 2011 | Baltimore | Hypatia Proposal, Powerpoint
- OR11 Conference | 7-11 June 2011 | Austin
  Main conference presentation:
  - Building the Hydra - Enhancing Repository Provision through Multi-Institution Collaboration: Chris Awre
  Fedora track 24/7 block:
  - SALT, ETDs and EEMs - Stanford's suite of Hydra services: Tom Cramer
  - Libra - an unmediated, self-deposit, institutional repository at the University of Virginia: Julie Meloni
  - Hydra in Hull: Richard Green
  - A Hydra head for Northwestern University Digital Image Library: Bill Parod
  - Hydra-based digital exhibits gallery at Notre Dame: Dan Brubaker Horst
  Fedora track: Tools and integration
  - Hydra technical deep-dive: Matt Zumwalt
  Hydra-related presentations
  - CLIF: Moving repositories upstream in the content lifecycle: Richard Green and Simon Waddington
- LibDevConX^2 | 21-23 March 2011 | Stanford University
- Code4Lib 2011 | 7-10 February 2011 | Bloomington, Indiana
  Related presentations:
  - Digital Exhibits at Notre Dame Using Hydra by Rick Johnson & Dan Brubaker Horst
  - Opinionated Metadata by Matt Zumwalt
- Fedora UK & Ireland group meeting | 13 December 2010 | London School of Economics and Political Science
- DLF Fall Forum | 1-3 November 2010 | Palo Alto, CA | Tom Cramer & Matt Zumwalt's presentation
- Hydra Camp | 4-8 October 2010 | Minneapolis
- Repository Fringe | September 2010 | Edinburgh | Presentation: Hydra - Chris Awre (Video)
- OR10 Conference | 6-9 July 2010 | Madrid
  Related presentations:
  - Hydra: a technical and community framework for customised, shared repository applications
  - Blacklight: Leveraging a Rich Discovery Interface in Open Repository Architectures
- LibDevConX | 23-25 March 2010 | Stanford University
- Fedora UK & Ireland User Group, Fedora EU User Group | 8 December 2009 | Oxford
  Presentations:
  - The Hydra initiative: Underpinning repository interaction for research support - Chris Awre
  - Content models in the Hydra Project - Richard Green
- Fedora UK & Ireland User Group | 9 June 2009 | Dublin, Republic of Ireland
- OR09 Conference | 18-21 May 2009 | Atlanta, GA
  Presentations:
  - Designing and building a reusable framework for multipurpose, multifunction, multi-institutional repository-powered solutions - available here
  - Case studies in workflow: Three approaches - available here

Related Conferences

- Annual schedule of Samvera-related conferences which Samverans may be interested in attending
Extended Model Driven Architecture to B Method Ammar Aljer, Philippe Devienne To cite this version: Ammar Aljer, Philippe Devienne. Extended Model Driven Architecture to B Method. Ubiquitous Computing and Communication Journal, UBICC publishers, 2011, Special Issue on ICIT 2011. <hal-00832612> HAL Id: hal-00832612 https://hal.archives-ouvertes.fr/hal-00832612 Submitted on 11 Jun 2013 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

EXTENDED MODEL DRIVEN ARCHITECTURE TO B METHOD

Ammar Aljer, Faculty of Electrical and Electronic Engineering, University of Aleppo, Aleppo, Syria, ammar.aljer@lifl.fr
Philippe Devienne, Lille's Computer Science Laboratory, University of Lille, Lille, France, philippe.devienne@lifl.fr

ABSTRACT

The Model Driven Architecture (MDA) design approach proposes to separate design into two stages: an implementation-independent stage, then an implementation-dependent one. This improves reusability, reliability, understandability, maintainability, etc. Here we show how MDA can be augmented using a formal refinement approach: the B method. Doing so enables the development to be gradually refined from the abstract specification to the executable implementation through many controlled steps.
Each refinement step is mathematically represented and proven to be correct; as a consequence, the implementation is proven to satisfy the specification. Furthermore, this approach makes it possible to prove the coherence between components at low levels even if they branch into different technologies during the development.

Keywords: MDA, B method, Co-design Refinement, Embedded System, VHDL

1 INTRODUCTION

As computer performance improves and human-built systems grow in complexity, there are continuous efforts to provide Computer Aided Design tools able to develop such complex systems. A common attitude among designers in different technologies is to use more abstract design levels that enable the designer to concentrate, at first, on the most important requirements of the system. In the hardware domain, many tools have been produced to work at levels higher than printed circuits or RTL (Register Transfer Level). VHDL (IEEE 1076) emerged in 1987. It permits representing a complete hardware system, and it became the dominant language in hardware modelling. SystemVerilog was standardized in 2005 to manage abstract levels of hardware systems. In the software area, a number of OOP languages have emerged. They offer more facilities for treating complex systems than procedural languages. An implementation-independent tool, UML (Unified Modelling Language), uses graphical diagrams to capture common aspects of OOP languages. An object-oriented system is made up of interacting components. Each component (object) has its own local state and provides operations on that state. In the object-oriented design process, the designer concentrates on defining classes (abstractions of real objects) and the relationships between these classes. MDA (Model Driven Architecture) was launched by the OMG (Object Management Group) in 2001. It proposes to separate the design into two stages: an implementation-independent stage followed by an implementation-dependent one.
“The transition between these stages of development should, ideally, be seamless, with compatible notation used at each stage. Moving to the next stage involves refining the previous stage by adding details to existing object classes and devising new classes to provide additional functionality. As information is concealed within objects, detailed design decisions about the representation of data can be delayed until the system is implemented.” [8]

Another important aspect of modern systems is the interplay between different technologies. Most systems consist of different cooperating sub-systems, where some functionality may migrate from one technology to another in further versions of the system. In our project, illustrated in Fig. 1, we improved the MDA approach in three main aspects: 1. Smoothing the transfer from the abstract specification of the system to the implementation, with a proven refinement from each level to the next, more deterministic one. 2. Formal notation of the complete system at the abstract levels. 3. Formal projection of components that are implemented in hardware technology. Our approach (joining the advantages of MDA and the B method) yields several advantages: 1. The possibility to obtain a correct-by-design system. 2. Increased reusability: when a modification is necessary, we preserve all design levels that are more abstract than the level where the modification occurred. 3. The possibility of migrating between technologies at low levels without re-proving the complete system, provided the migration preserves the logical behaviour captured in the formal projection.

Figure 1: Refined MDA

The dashed line in Fig. 1 shows the temporal axis of project development. The top of the left part of Fig. 1 shows that the first step is to formally specify the requirements. This stage may be achieved during an iterative process where new requirements must not contradict the previous ones.
The formal requirements specification is followed by another stage in which the main components of the wanted system are designed independently of the implementation technology. This stage is also, in most cases, achieved through an iterative process over many steps of refinement. In real applications the previous two stages (formal requirements and implementation-independent design) are not completely separated. Using the formal refinement of B, the components at each step are proven to be coherent and to refine the previous step. The right part of Fig. 1 shows how designers in each community may use their own development tools and techniques to partially implement the system. A formal representation of the implementation in the different technologies is traced to prove: 1. the correctness of each component with regard to its specification, 2. the coherence between components at low levels, whether they are implemented in one technology or in different technologies, 3. the satisfaction of the implementation-independent architecture declared in the previous stage, 4. and the coexistence, if necessary, with mathematical representations of parts of the real environment such as physical laws, external systems, etc.

2 MODEL DRIVEN ARCHITECTURE

Since von Neumann's invention, the general attitude of software tool developers has been to abstract the von Neumann computer architecture. FORTRAN may be considered the first high-level language. From the outside, it uses formal mathematical-like expressions, but actually these expressions and instructions were chosen to abstract the executable machine code. A compiler is written to convert each FORTRAN program into machine code. Programs were used to help clients by automatically and rapidly executing an algorithm. Most later software developments (such as structured programming and then OOP) concentrated on the abstraction of the executable machine code. With OOP, programmers concentrate more and more on classes that are abstractions of the real world.
Nowadays writing the implementation is partially automated and the designer may give more attention to system structure. Indeed, with CASE (Computer Aided Software Engineering) tools, the programmer can graphically specify the components of his/her design, define the operations of each component and the relations between the components; the executable code is then automatically generated. Nowadays the computer is used not only to execute a program but to represent a complete system, and furthermore to simulate a complex of interacting systems. With MDA (Model Driven Architecture), design is completely separated into an implementation-independent stage and an implementation-dependent one. With this attitude of representing a system rather than a program, verification becomes more and more difficult because its cost increases exponentially with complexity. With such an approach, reuse is increased. In OOP, the programmer reuses existing classes or libraries (written by him or by others) in new projects. With COTS (commercial off-the-shelf) components, the programmer reuses a complete software system or sub-system, and has to adapt it to the new environment. Since the 1950s, huge efforts have been made to cover machine instructions with many abstraction layers: assembly, high-level languages, structured programming, OOP, UML (Unified Modeling Language) and MDA. But only few efforts have been made to formalize the other side of the programming task, that is, client requirements. With increasing machine power and the growing complexity of computer-based systems, software engineering has developed many principles and techniques to formulate client requirements. Compared to the development of programming languages, these efforts remain primitive, and a formal gap between what a program does and what a client wants still exists.
The SDLC (System Development Life Cycle) in software engineering usually begins with requirements specification [10], and many UML diagrams partially describe requirements, such as use case diagrams, activity diagrams, etc. These representations of requirements are still superficial, non-formal (or semi-formal), and no formal linkage is defined between these requirements and the corresponding implementation code.

3 HDL, HARDWARE DESCRIPTION LANGUAGES

Due to the difference between hardware and software products, the production of a hardware or software component passes through two different sequences. Software engineers concentrate on requirements collection, development, verification, deployment, etc. Hardware engineers emphasize the functional level, logic gate level, RTL (Register Transfer Level) and printed circuit level. The increasing system complexity obliges both communities to develop their tools towards the abstract system level.

3.1 VHDL

VHDL, the dominant language in hardware design, was the first to take the system level into account. Even though VHDL [2] was designed for electronic design automation to describe VLSI circuits, it can be used as a general-purpose language and can even handle parallelism. From the hardware community's point of view, VHDL may be used to describe the structure of the system: any circuit may be defined as a black box (ENTITY) where all the inputs and outputs are defined, then by a white box (ARCHITECTURE) where all the components and the connections between these components are declared. Components in the architecture are functionally defined, and they can be mapped later to real-world components by an additional level (CONFIGURATION). VHDL is supported by libraries that contain the specifications of commonly known electronic units. These layers permit simulating the real circuit in order to verify the design. The ARCHITECTURE layer in VHDL may define the behaviour of the circuit instead of its structure.
Besides VHDL, the most important HDLs, such as SystemVerilog and SystemC, respect the distinction between abstract and implemented levels.

3.2 HDL and Co-Design Verification

Simulation is the principal verification tool in HDLs. Furthermore, most co-design verification methods depend on co-simulation of two or more types of components that are designed with different technologies. Each research community tries to extend its design stages to include more abstract levels. Fortunately, we can observe many common properties in the research results of these different communities. It is quite interesting to compare them and to show that they could be prefigured and structured within a model driven architecture. In this paper, we focus on development with the B approach and show how it may be applied to HDLs.

4 B METHOD, MOCHA, EVENT-B

The B method [1] is known in software engineering as a formal method to specify and finely develop the specification towards an executable program, based on set theory and first-order logic notation. B draws together advances in formal methods that span the last forty years (pre- and post-condition notations, guarded commands, stepwise refinement, and the refinement calculus). During software development with the B method, many versions of the same component may exist. The first and most abstract one is the Abstract Machine, where client needs are declared. The following versions should be more concrete and specify more and more precisely how the needed behaviour is obtained. These versions are called Refinements, except the last one, where no further refinement is possible. This deterministic version is called the Implementation. B generates the necessary proof obligations to verify the coherence of each component and the correctness of the development. Furthermore, B tools help to discharge these proofs. Like B, Mocha [9] is an interactive verification environment for the modular and hierarchical verification of heterogeneous systems.
Mocha supports the heterogeneous modeling framework of reactive components and is based on Alternating-time Temporal Logic (ATL) for specifying collaborations and interactions between the components of a system. Event-B is an evolution of the B method. Key features of Event-B are the extensions to events for modeling concurrency. The primary concept in doing formal developments in Event-B is that of a model. A model contains the complete mathematical development of a discrete transition system. It is made of several components of two kinds: machines and contexts. Machines contain the variables, invariants, theorems, and events of a model, whereas contexts contain carrier sets, constants, axioms, and theorems of a model. The Rodin platform is an open-source Eclipse-based IDE for Event-B that is further extendable with plugins.

5 BHDL: B ↔ VHDL

The principle of BHDL is to make use of the common properties between B, ADLs and HDLs in order to use one common formal language across the design steps. This facilitates the verification of design correctness from the early steps of co-design. Fortunately, the B method has its own mathematical notation that can be used during all development steps. The correctness of a system described in the B language may be proven by many tools such as AtelierB, B-Toolkit, B4Free and RODIN [3]. The declaration of the main ADL components of the system is built graphically; then two different notations are generated: VHDL and B. The produced B code contains the main features of the VHDL one. After that, the design may be separated according to the technological choices.

![Figure 3: Principle of BHDL.](image)

Each Architecture in VHDL is attached to one Entity, and it may recursively contain one or more Entities. This structure looks similar to the external view and internal view in ADLs, or to procedure call and procedure implementation in imperative languages. Also, in the B method two basic components exist: the Abstract Machine and the Refinement.
The first one is usually used to specify the component: the interface variables, the internal variables, the invariant relation between them, and the pre- and post-conditions of the necessary operations. The second component may refine an abstract machine; that is, it partly specifies how the operations may be implemented. The Refinement component may, in its turn, be refined recursively by more deterministic Refinements. The last refinement step, when the behaviour becomes completely deterministic, is called the implementation. B tools may prove the consistency of each component and the refinement relation.

![Figure 4: Structural refinement and proof obligation](image)

In our project each Entity is translated into an Abstract Machine and each Architecture into a Refinement. The ports are declared as variables and the port typing as an invariant. Furthermore, we enhanced the VHDL notation with logical properties. These properties are injected into the B INVARIANT clause. The connection between the sub-components of the Refinement should guarantee the invariant specified in the abstract machine (see Fig. 4).

5.1 Hierarchy

In VHDL, the transition from an Entity into a corresponding Architecture is usually performed in one step. In BHDL, this may be finely performed in several steps or levels. We may consider the refinement of a component in BHDL as a replacement by other components. We may also refine a component by another one that has the same structure and links but a stricter logical property. In all cases the refinement is performed towards lower levels, where the behaviour of the system becomes more deterministic.
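B tools discharge such refinement proof obligations symbolically. For purely Boolean ports the same kind of check can be sketched by exhaustive enumeration; the Python sketch below (with illustrative names, not part of BHDL) verifies that an abstract OR "entity" is correctly refined by a structure built only from NAND sub-components:

```python
from itertools import product

def nand(x, y):
    # Behaviour of the nand sub-component (its abstract machine).
    return not (x and y)

def spec_or(x, y):
    # Abstract specification of the entity: z = x or y.
    return x or y

def impl_or(x, y):
    # Structural refinement: OR built only from NAND instances.
    return nand(nand(x, x), nand(y, y))

# Discharge the proof obligation by enumerating all port values
# (feasible here because the ports are Boolean).
assert all(impl_or(x, y) == spec_or(x, y)
           for x, y in product([False, True], repeat=2))
```

Enumeration only works for small finite port domains; the point of the B method is that the same obligation is proven once and for all, symbolically.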
The principal relation between the interface (external view) and its refinement (or between two levels of refinement) is:

\[ \text{Connection}(q_1, q_2, \ldots, q_n) \Rightarrow q \]

which means that the logical conjunction of the properties of the sub-components, under their connections, should satisfy the property indicated in the abstract machine that represents the Entity.

5.2 Compositionality and Invariant

Let us consider the following simple example to illustrate the capture of multiple mathematical views and reliability.

![Figure 5: Structure of Comp1 component.](image)

Fig. 5 shows a system that contains two Nand components. The modified version of VGUI allows drawing such connected boxes and specifying the logical properties and the internal structure of each box. Then VHDL+ and B code is generated. VGUI generated the following VHDL+ code for this example:

```vhdl+
STRUCTURE comp1 OF comp
  SIGNAL s
BEGIN
  gate1 : nand PORT MAP (i1,s,o)
  gate2 : nand PORT MAP (i2,i3,s)
END

ENTITY nand
  PORT
    x, y : IN std_logic
    z : OUT std_logic
  -- z = nand (x,y)  B specification
END
```

5.3 Specification Languages

While B is used as the formal specification language in this example, PSL is an "add-on" language for hardware description languages that was standardized by the IEEE in 2005. The PSL standard is based upon IBM's "Sugar" language, which was developed and validated at IBM Labs for many years before IBM donated it to Accellera for standardization. PSL works alongside a design written in VHDL, Verilog or SystemVerilog, and in the future it may be extended to work with other languages. Properties written in PSL may be embedded within the HDL code as comments or may be placed in a separate file alongside the HDL code. PSL includes multiple abstraction layers for assertion types, ranging from low-level Boolean and temporal assertions to higher-level modeling and verification. Formally, PSL is structured into four layers: the Boolean, Temporal, Verification and Modeling layers.
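As a rough illustration of the temporal layer, the classic PSL-style operators (always, never, next, until, before) can be prototyped over finite Boolean traces. The Python sketch below deliberately ignores clocking and PSL's full semantics, and the helper names are only illustrative:

```python
def always(p):
    # Holds if the operand holds in every cycle of the trace.
    return all(p)

def never(p):
    # Holds if the operand fails to hold in every cycle.
    return not any(p)

def next_(p, i=0):
    # Holds at cycle i if the operand holds in the cycle that
    # immediately follows.
    return i + 1 < len(p) and p[i + 1]

def until(p, q):
    # p holds in every cycle up until the first cycle where q holds
    # (a "strong" until: q must eventually hold on this finite trace).
    for pi, qi in zip(p, q):
        if qi:
            return True
        if not pi:
            return False
    return False

def before(p, q):
    # p holds at least once before (or when) q first holds.
    for pi, qi in zip(p, q):
        if pi:
            return True
        if qi:
            return False
    return False

req = [True, True, False, False]   # example signal traces
gnt = [False, False, True, False]
assert until(req, gnt)             # req stays high until gnt arrives
assert before(req, gnt)            # req occurs before gnt
```

Real PSL properties are evaluated against clocked HDL simulations or by model checking; the finite-trace view above is only a first intuition.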
At its lowest level, PSL uses references to signals, variables and values that exist in the design's conventional HDL description. Sugar used the CTL (Computation Tree Logic) formalism to express properties for model checking, but eventually the underlying semantic foundation was migrated from CTL to LTL (Linear-Time Temporal Logic), because the latter is considered more accessible to a wider audience and more suitable for simulation. The temporal operators of the foundation language provide syntactic sugar on top of the LTL operators. These temporal operators include:

- **Always:** holds if its operand holds in every cycle.
- **Never:** holds if its operand fails to hold in every cycle.
- **Next:** holds if its operand holds in the cycle that immediately follows.
- **Until:** holds if the property at its left-hand side holds in every cycle from the current cycle up until the next cycle in which the property at its right-hand side holds.
- **Before:** holds if the left-hand operand holds at least once between the current cycle and the next time the right-hand operand holds.

5.4 Fault Tolerance in BHDL

The usual development in the B method goes from the abstract requirement to the concrete execution. During the development, the behaviour becomes more and more deterministic. In spite of that, BHDL can take into account the possibility of describing a fault scenario. Here we describe the ideal system through the behaviour of the ideal variables in the abstract machine; then, by refinement, we inject the possible fault. This fault is declared using faulty variables. Then, we propose the correction step for the faulty variables. At the end, we prove that the corrected values of the faulty variables respect the INVARIANT of the initial ones. The additional variables and the correction operations are the cost of the trusted behaviour of the system.
5.5 Dependency Relation

The BHDL project can make use of B tools to verify the dependency between an output and an input. In Refinement components, each connection produces a dependency relation between two variables. Two types of connections may be noticed: connections between sub-components and internal wires, and connections between sub-components and outer ports. The direction of the dependency is related to the signal direction. As we see, this relation recursively depends on the lower layers. A Refinement (architecture) can see only the abstract machines (ENTITYs) of its sub-components; since the Refinement cannot see the Refinements of its own sub-components, it cannot see their dependency relations (see Fig. 6). One solution is to modify the invariant of each abstract machine where the dependency relation is declared. To facilitate the modification, we write the part of the invariant of the abstract machine in an independent file that may be easily modified by the refinement. We defined a transitive relation "Depend" on the set PORTS, with one direction. This relation should be defined on the variables attached to the instances of the internal components, not on their generic form, so we add new variables for each instance to define the dependency relation. For example, we shall write the dependency relation for the following component.

![Figure 6: Dependency Relation.](image)

All these modifications of the INVARIANT are applied at the refinement level, where we can see the sub-components. But we need this information at the abstract machine level, because we need to know the dependency relation at a higher level where this component (or abstract machine) is included, in its turn, as a sub-component. The abstract machine of the right part of Fig. 7 is used as a sub-component in the refinement of the left part. This dependency relation has been used to check the fan-out property.
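For the comp1 structure of Fig. 5 (gate1: o = nand(i1, s), gate2: s = nand(i2, i3)), the transitive Depend relation and a fan-out count (how many gate inputs each signal feeds) can be sketched as follows; the data structures and the fan-out limit are illustrative Python, not the actual B encoding:

```python
from collections import defaultdict

# Direct dependencies produced by each connection: (output, input).
edges = [
    ("o", "i1"), ("o", "s"),   # gate1 : nand PORT MAP (i1, s, o)
    ("s", "i2"), ("s", "i3"),  # gate2 : nand PORT MAP (i2, i3, s)
]

def depend(pairs):
    # Transitive closure of the Depend relation over the set PORTS.
    dep = defaultdict(set)
    for out, inp in pairs:
        dep[out].add(inp)
    changed = True
    while changed:
        changed = False
        for out in list(dep):
            extra = set().union(*(dep.get(m, set()) for m in dep[out])) - dep[out]
            if extra:
                dep[out] |= extra
                changed = True
    return dep

dep = depend(edges)
assert dep["o"] == {"i1", "s", "i2", "i3"}   # o depends on every input

# Fan-out: number of gate inputs each signal drives.
fanout = defaultdict(int)
for _, inp in edges:
    fanout[inp] += 1
assert max(fanout.values()) <= 4             # illustrative fan-out limit
```

In BHDL this information lives in the INVARIANT and is checked by the prover rather than computed at run time; the sketch only shows what the relation contains.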
In digital circuits, fan-out defines the maximum number of digital inputs that the output of a single logic gate can feed. The value of the fan-out has a big impact on test and debugging.

![Figure 7: Dependency Information Transfer.](image)

6 REALISATION OF THE BHDL PROJECT

The project is implemented by three distinct components of BHDL:

6.1 A Graphical Interface for System Entry (VGUI)

As mentioned above, we make use of VGUI (VHDL Graphical User Interface) to build the system entry of hardware diagrams. It is an open-source tool that may be considered a simple component description tool. VGUI may be used to create generic interconnected boxes. Each box may be decomposed hierarchically into sub-boxes, and so on. The boxes and the connections of VGUI are typed. In cooperation with the VGUI developer, we added the possibility to attach a logical property to each box and to hide data. Eventually, VGUI generates VHDL code annotated with B expressions. This step is optional; the designer may use a textual editor to directly write the annotated code to be analyzed by the following step.

6.2 B Model Generator

Here a B model that corresponds to the annotated VHDL model is created. A compiler is built to generate B code. From the external view in VGUI, or from an entity in the VHDL model, it generates the suitable B Abstract Machine that contains the necessary properties of the Entity and traces the structure of the VHDL model. In a similar way, the internal view in VGUI is translated into an Architecture in VHDL and then into a Refinement in B. Because design in VHDL usually depends on some predefined standard libraries, we created B components that correspond to some VHDL libraries (such as std_logic_1164). The compiler is the most important practical part of the BHDL project. It is built on the ANTLR compiler generator.
ANTLR (ANother Tool for Language Recognition) is a powerful tool that accepts grammatical language descriptions and generates programs (compilers or translators) that can recognize texts in the described languages, analyze these texts, construct trees corresponding to their structure, and generate events related to the syntax. These events, written in C++ or in Java, may be used to translate the text into other languages. ANTLR can generate ASTs (Abstract Syntax Trees), which can store a lot of information about the analyzed text, and it provides tree rewriting rules for easily translating these ASTs. The correctness of such a translator depends only on the correctness of every elementary rewriting rule (declarative semantics). Like VGUI, ANTLR is open-source software written in Java. The translation from VHDL+ to B is performed in several steps:

- BHDL Lexer/Parser: analyses the input VHDL+, verifies the syntax and the semantics of the VHDL code, then generates a pure VHDL tree (AST) with independent branches that contain the B annotations.
- TreeWalker: this tree parser parses the previous AST in order to capture the necessary information to construct a new AST that corresponds to the B model.
- B-Generator: traverses the AST produced by the TreeWalker in order to generate B code.

Even if a corresponding B model is automatically created, the design correctness is not automatically proven. The generated B code still has to be proven correct. B tools (AtelierB, B4Free, B-Toolkit) render the task easier: they generate the necessary proof obligations (POs), automatically produce an important share of the proofs, and cooperate with the programmer to prove the remaining POs. Here, if the model is not completely proven, some defects may be detected and the original VHDL design should be modified.

7 AFCIM AND PCSI PROJECTS

The BHDL project is developed at LIFL (Lille's Computer Science Laboratory).
This research was first conducted within the AFCIM project (LIFL, INRETS, HEUDIASYC Lab). The French project AFCIM (Formal Architectures for Conception and Maintenance of Embedded Systems), coordinated by Philippe Devienne (LIFL), is a collaborative research effort between four French universities and institutes (LIFC, LIFL, Heudiasyc, INRETS). The global architecture of the AFCIM project is shown in Fig. 9.

![Figure 8: Main transformations of BHDL.](image)

![Figure 9: AFCIM Project.](image)

Starting from a general model driven architecture (i.e. the common part of specific description languages like ADLs, HDLs, ...), we add formal annotations and specifications according to the requirements or the fault scenarios that we want to handle. All the tools used in our platform are freely usable and distributed (Rodin, Eclipse, ANTLR, ...). Eventually, the main concepts of BHDL and AFCIM are being augmented and implemented with the support of the PCSI project (Zero Defect Systems) between Lille University, Aleppo University and Annaba University. The main new features of the project are the following (Fig. 10):

7.1 Including PSL

Instead of the special comments used in the first version of BHDL to represent the logical behaviour of VHDL components, we use here a formal language, PSL, standardized in 2005. PSL (Property Specification Language) [12] is a language for the formal specification of hardware. It is used to describe properties that are required to hold in the design under verification. It contains Boolean, temporal, verification and modelling layers. The flavour of PSL can be added to many HDLs (Hardware Description Languages) such as VHDL, Verilog and SystemVerilog. This enlarges the usability of our tool, since PSL is expressive and standard.
7.2 Extending the Scope of VHDL Treated in BHDL

While the first version of BHDL mainly manipulated the design structure decorated with logical properties, here we enlarge the model to accept important concepts of VHDL such as signals, where the concept of time appears. Besides ENTITY and ARCHITECTURE, VHDL contains other design units such as CONFIGURATION. These units could be handled in the future.

7.3 Creating the Target Model Using Event-B Instead of Classical B

The purpose of Event-B is to model full systems (including hardware, software and the environment of operation). Classical B is not suitable for representing temporal properties, which are important in hardware design. Furthermore, Event-B facilitates the representation of many subsystems within a global one. After the creation of an HDL model, it is traced in B. In order to facilitate the proof of consistency and the formal refinement of the model, we integrated our work into the Eclipse environment. Eclipse is a generic platform for developing multi-language software, comprising an integrated development environment (IDE) and an extensible plug-in system. The Rodin Platform is an Eclipse-based IDE for Event-B that provides effective support for refinement and mathematical proof. The platform is open source, contributes to the Eclipse framework and is further extendable with plugins. Such integration eases the cooperation between the hardware and software communities, since they work in the same environment.

7.4 Automated Addition of Robustness

We focus on the problem of evolving a fault-intolerant program into a fault-tolerant one. The question is: "Is it possible to add a fault scenario to an existing model or program and automatically generate the tolerant model or program?" This problem occurs during program evolution, when new requirements (fault-tolerance properties, timing constraints, safety properties) appear.
We argue here that refinement can handle this evolution. In other words, a fault-tolerant program is a refined form of its intolerant one. We have shown how to apply this formalism to characterize fault-tolerance mechanisms and then to reason about logical and mathematical properties. For instance, the Hamming code is a kind of data refinement: by adding data redundancy (extra parity bits), error detection and even error correction become possible. This can be generalized to handle Byzantine properties. Fault tolerance is often based on replication and redundancy. This involves the use of hybrid systems with different sources of energy (electric, mechanical). This duplication can also be seen as component refinement or algorithmic refinement. For instance, nowadays, because of the integration of circuits, stuck-at faults are a more and more frequent fault model. Given that the probability that a circuit contains at least k stuck-at faults is too high, we can generate an equivalent circuit, except that it is k-stuck-at-fault tolerant. This transformation can be seen as a refinement, that is, a logico-mathematical completion w.r.t. a fault model.

ACKNOWLEDGEMENT

The CAD tool, VGUI, was adapted to our project in cooperation with Mr. Carl Hein. An ANTLR parser template to generate AST trees was built in cooperation with Mr. J. L. Boulanger.

8 REFERENCES
iNFAnt: NFA Pattern Matching on GPGPU Devices

Niccolo' Cascarano, Pierluigi Rolando, Fulvio Risso, Riccardo Sisto
Politecnico di Torino, Turin, Italy
{niccolo.cascarano, pierluigi.rolando, fulvio.risso, riccardo.sisto}@polito.it

Published by ACM. DOI: 10.1145/1880153.1880157

ABSTRACT This paper presents iNFAnt, a parallel engine for regular expression pattern matching. In contrast with traditional approaches, iNFAnt adopts non-deterministic automata, allowing the compilation of very large and complex rule sets that are otherwise hard to treat. iNFAnt is explicitly designed and developed to run on graphical processing units that provide large amounts of concurrent threads; this parallelism is exploited to handle the non-determinism of the model and to process multiple packets at once, thus achieving high performance levels. Categories and Subject Descriptors C.2.3 [Computer-Communication Networks]: Network Operations General Terms Experimentation, Algorithms Keywords NFA, pattern matching, CUDA, GPGPU, regular expression 1. INTRODUCTION Pattern matching, i.e. the task of matching a string of symbols against a set of given patterns, plays an important role in multiple fields that range from bioinformatics (e.g. for analyzing DNA sequences) to high-speed packet processing, where it is a critical component for packet filtering, traffic classification and, in general, deep packet inspection. Pattern matching is commonly performed by expressing patterns as sets of regular expressions and converting them into finite state automata (FSAs), mathematical models that represent (potentially infinite) sets of strings.
The behavior of FSAs is simple to emulate on computing devices in order to perform the actual matching procedure, and finite automata can be easily composed together with the full set of boolean operators. Two kinds of FSAs are known from the literature, deterministic (DFA) and non-deterministic (NFA). While automata theory proves them equivalent in terms of expressiveness, their practical properties are different: NFA traversal requires, by definition, non-deterministic choices that are hard to emulate on actual, deterministic processors; on the other hand DFAs, while fast to execute, can be less space-efficient, requiring very large amounts of memory to store certain peculiar patterns that are rather common in practice (the so-called state space explosion) [2]. In general, software-based NFA implementations suffer from a higher per-byte traversal cost when compared with DFAs: intuitively, this is because multiple NFA states can be active at any given step, while only a single one must be considered when processing DFAs. So far, software research has focused mainly on DFAs, as they provide a relatively easy way to achieve high throughputs; many efforts have been aimed at solving the inherent downsides of this model and avoiding the aforementioned memory explosion. At the same time, NFAs have often been relegated to the design of hardware devices (e.g. FPGAs) that can easily mimic their behavior, or to uses where high throughput is not the primary concern (e.g. many general-purpose pattern-matching libraries). This paper presents iNFAnt, an NFA-based regular expression engine running on graphical processing units (GPUs). iNFAnt represents a significant departure from traditional software-based pattern matching engines both for its underlying automaton model, the NFA, and its target hardware platform, the GPU.
NFA adoption allows iNFAnt to efficiently store very large regular expression sets in a limited amount of memory, while the parallelism offered by the underlying hardware helps counteract the higher per-byte traversal cost of NFAs with respect to DFAs and the higher instruction execution time of GPUs with respect to CPUs. iNFAnt also represents, as far as we know, one of the first approaches to pattern matching designed from the ground up for the heavily parallel execution environment offered by modern programmable GPUs, as opposed to being an adaptation of a technique originally designed for general-purpose processors. 2. RELATED WORKS It is common knowledge that pattern matching is the most time-expensive operation to be performed in intrusion detection systems and similar applications: accelerating its execution has been the object of several academic works, some of which have already considered the idea of using the parallelism offered by GPUs. It is possible either to try to execute the whole packet processing application on a graphical device or to accelerate only the pattern matching portion. The first case saw the development of Gnort [7], a full port of the Snort IDS\(^1\) to a GPU environment. \(^1\)Available at http://www.snort.org/. Gnort did not initially support regular expressions, delegating them to the host CPU; it has since been extended with a DFA-based regex engine [8]. DFA approaches, however, incur state space explosion, typically solved by heuristically splitting the rules into smaller subsets or (as Gnort does) translating only a “well-behaved” subset while keeping the rest in NFA form for host processing [8]. Both solutions are suboptimal for our goals, as splitting leads to inefficiencies (all the DFAs must be traversed) while resorting to host processing defies the goal of graphical hardware adoption.
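The state space explosion that DFA-based approaches run into can be reproduced with a textbook example (ours, not taken from the paper): the language "the k-th symbol from the end is a" over {a, b} admits an NFA with k+1 states, while determinizing it via the subset construction yields 2^k states. A minimal sketch:

```python
# Classic state-space-explosion example: "the k-th symbol from the end is 'a'".
# The NFA needs k+1 states; the equivalent DFA needs 2**k states.

def build_nfa(k):
    # State 0 loops on every symbol; reading an 'a' may start a countdown
    # through states 1..k (state k is the accepting one).
    delta = {(0, "a"): {0, 1}, (0, "b"): {0}}
    for i in range(1, k):
        delta[(i, "a")] = {i + 1}
        delta[(i, "b")] = {i + 1}
    return delta

def subset_construction(delta, alphabet=("a", "b")):
    # Determinize by exploring all reachable sets of NFA states.
    start = frozenset({0})
    seen, todo = {start}, [start]
    while todo:
        states = todo.pop()
        for sym in alphabet:
            nxt = frozenset(s for q in states for s in delta.get((q, sym), ()))
            if nxt and nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

for k in (2, 3, 4):
    print(k, len(subset_construction(build_nfa(k))))  # grows as 2**k
```

Keeping the automaton in NFA form sidesteps this blow-up entirely, at the price of tracking a set of active states during traversal.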
Other more advanced approaches for countering the DFA state explosion problem have been proposed, such as HFAs [1] that split the DFA under construction at the point where state space explosion would happen. There have been some experiences in porting advanced techniques, such as XFAs [5], to GPUs; however, adapting a traversal algorithm designed for CPUs to a GPU-based device is not straightforward or efficient because of deep architectural differences. Other techniques described in the literature use GPUs to perform preprocessing steps or, alternatively, employ inexact algorithms to perform matching (e.g., [6]). These approaches are out of scope for this paper, which aims at a full-fledged regex engine. Perhaps the work most closely related to iNFAnt is reported in [4] and describes methods to run DFAs and NFAs on a high-speed single-instruction, multiple-data (SIMD) processor. While NFAs are recognized as a viable technique on parallel hardware and for reducing memory consumption, the proposed algorithm implements only a subset of the regular expression operators; moreover, it considers an architecture radically different from GPUs in terms of specifications and programming model. 3. CUDA ARCHITECTURE The latest trends have seen a shift towards the development of inexpensive, highly-parallel and programmable GPUs. Employing these processors in fields unrelated to computer graphics has been dubbed \textit{general-purpose computation on graphical processing units} or GPGPU. There are multiple kinds of programmable GPUs available on the market and only recently has a standard programming interface, OpenCL\(^2\), begun to emerge. For the purpose of this work, we have used nVidia devices that implement and expose the Compute Unified Device Architecture\(^3\) (CUDA) programming interface.
3.1 Execution cores CUDA devices are logically composed of arrays of single instruction, multiple threads (SIMT) processors, the multiprocessors, each one containing a number of physical execution cores (typically 8). The devices support thousands of threads at the same time, multiplexed on their far smaller set of cores by a dedicated hardware scheduler that avoids the overhead usually associated with context switching. The instruction set is RISC and most instructions require multiple clock cycles to execute: efficiency comes from the large number of cores, not from their individual performance, which is low. In the SIMT paradigm each multiprocessor executes the same instruction simultaneously for multiple threads by assigning a different execution context to each core; when this is not possible (e.g., due to the different outcomes of a conditional branch) threads are said to \textit{diverge} and the execution of groups of threads that go along different code paths is sequentialized by the scheduler. CUDA GPUs reduce the amount of branching (and thus divergence) with predicated execution, i.e., the writeback phase of most instructions can be conditionally disabled by fencing them with a predicate register: if the predicate is false, the instruction is still executed but does not modify the state of the processor. Predicated execution is automatically introduced by the compiler as a replacement for small, potentially divergent code sequences, e.g. in simple if-then-else constructs. 3.2 Memory hierarchy CUDA devices provide a varied hierarchy of memory areas with different sizes and access times; it is the programmer’s responsibility to choose the appropriate usage for each one, also considering access patterns and the fact that no caching is implicitly performed by the hardware. In addition to a number of 32-bit registers shared by all the threads, each multiprocessor carries an on-chip \textit{shared memory}.
Even if slower than registers, shared memory is still significantly fast: its latency can be measured in tens of clock cycles; it is, however, small: our board carries 16 kBytes of shared memory per multiprocessor. Shared memory is also banked: multiple accesses to independent banks are carried out simultaneously, but conflicts force serialization. Bulk storage is provided by \textit{global memory}, ranging from hundreds of megabytes up to more than a gigabyte (depending on the card model). The on-board (though not on-chip) global memory is connected to each multiprocessor with a high-bandwidth bus, providing more than 80 Gb/s on our test card. The downside is that every access incurs a very high latency cost, estimated at around 400-600 processor cycles. Latency hiding is one of the reasons for the large number of threads supported by CUDA devices: the hardware scheduler automatically suspends threads that are waiting for the completion of global memory transactions and switches to others that are ready to run. In order to use all the available bandwidth efficiently it is necessary to perform as few accesses as possible, each as wide as the memory bus allows. Since each thread typically accesses small amounts of memory at a time, a hardware controller tries to automatically \textit{coalesce} many smaller memory accesses into fewer, larger transactions at run-time. This is possible only if all the accesses involved respect a well-defined pattern: on newer CUDA devices all the addresses must fall within the same naturally-aligned 256-byte memory area. CUDA devices provide two further special interfaces to global memory areas in the form of \textit{constant} and \textit{texture memory}, which provide the additional benefit of hardware caching. Given their limitations in size (e.g., 64 kBytes at most for constant memory) and supported access patterns, they are currently unused by iNFAnt.
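The predicated execution described in Section 3.1 can be mimicked in scalar code: both outcomes of a conditional are computed unconditionally and boolean predicates mask which result is written back, so every lane executes the same instruction stream. A small illustration of ours (plain Python, not CUDA code):

```python
# Branchy version: the conditionals create distinct code paths, which on
# SIMT hardware would force divergent threads to be serialized.
def clamp_branchy(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Predicated version: all candidate results are computed and boolean
# "predicate registers" select which one survives -- no branches at all.
def clamp_predicated(x, lo, hi):
    below = x < lo
    above = x > hi
    return below * lo + above * hi + (not below and not above) * x

for x in (-5, 3, 42):
    assert clamp_branchy(x, 0, 10) == clamp_predicated(x, 0, 10)
```

This is exactly the transformation the CUDA compiler applies automatically to small if-then-else constructs.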
3.3 Concurrency and data-sharing model CUDA devices are intended to be used in scenarios where each thread requires minimal interaction with its siblings: only a subset of the common synchronization and communication primitives is therefore provided. For any application, the set of active threads is divided into blocks: threads from the same block are always scheduled on a specific multiprocessor and communicate through its shared memory; ad-hoc primitives enable atomic read-modify-write cycles. This is the only form of inter-thread communication currently supported by the CUDA model: there are no reliable semantics for concurrent accesses to global memory and threads belonging to different blocks cannot exchange data. Synchronization works in a similar fashion: CUDA provides primitives for pausing a thread until all the others in the same block have reached the same point in their execution flow. Once again, threads belonging to different blocks cannot interact.

\(^2\)Described at http://www.khronos.org/opencl/.
\(^3\)Available at http://www.nvidia.com/object/cuda_home_new.html.

4. INFANT DESIGN It appears clear from Section 3 that traditional algorithms developed for general-purpose processors are bad matches for the CUDA architecture, often using a small number of threads and paying little attention to memory access patterns. This is even more true for classic automata traversal algorithms: input symbols must be processed sequentially and their randomness can lead to unpredictable branching and irregular access patterns. It appears likely that a good traversal algorithm should be a departure from the traditionally accepted practice. Given the CUDA architecture and the problem at hand, we have identified the following design guidelines:

1. **Memory bandwidth is abundant.** Reducing the number of per-thread global memory accesses is not a priority if they are fully coalesced and there are enough threads to effectively hide memory latency. Shared memory can be considered fast enough for our purpose without requiring any special considerations.

2. **Memory space is scarce.** This is especially true for the shared memory and for registers, but global memory should be used carefully as well: although comparatively big, it is common for automata to grow beyond the available amount, even when starting from small regex sets; the ability to store very large automata is also an advantage with multistriding (described in Section 4.3).

3. **Threads are cheap.** In contrast to CPUs, CUDA devices are designed to work best when presented with very large numbers of threads, up to 512 per block, and the maximum number of blocks supported by the actual GPU considered.

4. **Thread divergence is expensive.** The large number of threads is manageable by the hardware only if all of them execute the same instruction at the same time or if, at worst, the number of possible alternative paths is very small [5]. The program should be structured so that jumps are few and replaceable with predicated execution whenever feasible.

4.1 NFA representation In order to adhere to our guidelines and in contrast to classic approaches, iNFAnt adopts an internal format for the FSA transition graph that we dubbed the symbol-first representation: the system keeps a list of \((source, destination)\) tuples, representing transitions, sorted by their triggering symbol. This list can grow very large, so it must be stored in a global memory array, together with an ancillary data structure that records the first transition for each symbol, to allow easy random look-ups. As an example, the representation for the automaton in fig. 1(a) is reported in fig. 1(b). The current implementation allocates 16 bits per state label, thus supporting up to 65535 states, which is more than enough for our current workloads that peak at around 6000-9000 states.
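On the CPU, the symbol-first layout can be sketched as a flat array of (source, destination) pairs sorted by triggering symbol, plus a per-symbol offset table recording where each symbol's run of transitions begins, mirroring fig. 1(b). The concrete automaton below is our own toy example, not the paper's:

```python
# Symbol-first NFA representation: transitions sorted by triggering symbol,
# stored as flat (source, destination) pairs plus a per-symbol start offset.

def build_symbol_first(transitions, alphabet_size):
    """transitions: iterable of (symbol, source, destination) triples."""
    ordered = sorted(transitions)                 # sort by symbol first
    pairs = [(src, dst) for _, src, dst in ordered]
    # offsets[s]..offsets[s+1] delimit the transitions triggered by symbol s
    offsets = [0] * (alphabet_size + 1)
    for sym, _, _ in ordered:
        offsets[sym + 1] += 1
    for s in range(alphabet_size):                # prefix-sum the counts
        offsets[s + 1] += offsets[s]
    return pairs, offsets

# Toy 3-state automaton over a 2-symbol alphabet (our own example).
trans = [(0, 0, 1), (1, 1, 2), (0, 2, 0), (1, 0, 0)]
pairs, offsets = build_symbol_first(trans, alphabet_size=2)
# All transitions pending on symbol 1 are one contiguous slice:
pending = pairs[offsets[1]:offsets[2]]
```

Because each symbol's transitions are contiguous, consecutive GPU threads can read consecutive array entries, which is what makes the coalesced accesses discussed later possible.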
It should be noted that this limitation does not affect the maximum number of transitions, which depends only on global memory availability. In order to reduce the number of transitions to be stored, and also to speed up execution, iNFAnt adopts a special representation for self-looping states, i.e. those with an outgoing transition to themselves for each symbol of the alphabet. These states are marked as persistent in a dedicated bit-vector and, once reached during a traversal and marked as active, they will never be reset. Bit-vectors containing the current and future active state sets are stored in shared memory. 4.2 Traversal algorithm The traversal algorithm follows naturally from the data structure definition. Many packets are batched together and mapped 1:1 to CUDA blocks to be processed in parallel: every thread in each block executes the instructions reported as pseudo-code in fig. 2. State bit-vectors appear with a \(sv\) subscript and underlined statements are performed in cooperation by all the threads in a block. More precisely, the copies in lines 1, 5, 13 are performed by assigning each thread a different portion of the bit-vectors involved. Underlined statements also correspond to synchronization points in the program: after execution, each thread will wait for the others to reach the same point before proceeding. Parallelism is exploited not only because at any given time multiple blocks are active to process multiple packets, but also because for each packet each thread examines a different transition among those pending for the current symbol when running the inner while loop (lines 6-12). A large number of transitions can be processed in parallel this way, and if there are enough threads then the time spent in processing a single symbol will be effectively equal to what would be required for a deterministic automaton.
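A sequential, set-based rendition of the traversal may help fix its semantics. On the GPU the inner loop over pending transitions is distributed across the threads of a block; in this sketch of ours it runs serially, the state bit-vectors become Python sets, and persistent states are kept active once reached instead of storing one self-loop per alphabet symbol:

```python
# Sequential CPU rendition of the iNFAnt traversal (cf. fig. 2). On the GPU
# the inner loop is spread across the threads of a block; here the state
# bit-vectors are modelled as Python sets for clarity.

def traverse(pairs, offsets, persistent, initial, packet):
    current = set(initial)
    for c in packet:
        # Persistent (self-looping) states are never reset once active,
        # so they are carried over up front instead of storing |alphabet|
        # explicit self-loop transitions for each of them.
        future = {s for s in current if s in persistent}
        for src, dst in pairs[offsets[c]:offsets[c + 1]]:
            if src in current:   # the only divergent check (lines 9-10)
                future.add(dst)
        current = future
    return current

# Toy automaton: state 0 is persistent; symbol 0 moves 0->1, symbol 1 moves 1->2.
pairs = [(0, 1), (1, 2)]
offsets = [0, 1, 2]              # symbol 0: pairs[0:1], symbol 1: pairs[1:2]
final = traverse(pairs, offsets, persistent={0}, initial={0}, packet=[0, 1])
```

Note how the per-symbol transition slice `pairs[offsets[c]:offsets[c+1]]` is exactly the contiguous run that the symbol-first layout provides, so on the GPU each thread can grab a different entry of it.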
Figure 1: Symbol-first representation.

With regard to access patterns, the traversal algorithm requires global memory for reading the current input symbol and for accessing the transition table: both these accesses can be coalesced. All the threads working on the same packet access the same symbol (and offset) at the same time because of synchronization, while transition-table reads are performed at consecutive addresses, with each thread advancing by a stride that equals the number of threads. These very regular memory patterns are pivotal to exploiting all the available bandwidth and provide big improvements over the almost random patterns that derive from traditional traversal algorithms; even the ‘padding’ reads that happen on the last iteration of the inner loop, if there remain more threads than transitions, do not cause a noticeable performance degradation. The symbol-first representation allows an efficient usage of the available global memory space by storing only useful data, a property that would not be provided by e.g. the classical but potentially very sparse state transition matrix, which requires a storage location for each combination of current state and input symbol. The traversal algorithm, moreover, can be executed with very little divergence among the threads assigned to the same packet: in the current implementation only the conditional choice corresponding to lines 9-10 in fig. 2 can diverge, and the compiler is able to handle this case with predicated instructions. Some divergence is possible during initialization or when writing back results, but these phases execute quickly and once per batch of packets, so their overhead is negligible with respect to the total running time of the algorithm. Finally, it must be noted that the shared memory accesses required in the inner loop for reading the current state vector and setting future states do not follow any predefined pattern.
Given the speed of shared memory, its banked structure and the occasional nature of the updates (performed only if a transition actually triggers), this is not expected to be an issue, as confirmed by the results reported in Section 5.1. 4.3 NFA multistriding An interesting property of the iNFAnt algorithm and data structure is that they easily support multistrided automata. Multistriding is a transformation that repeatedly squares the input alphabet of a state machine and adjusts its transition graph accordingly: intuitively, the alphabet of a 2-strided automaton consists of all the possible pairs of original input symbols and each transition is the composition of 2 adjacent transitions of the original. The transformation required for 2-striding a FSA, documented in [2], can be performed ahead of time and offline. After multistriding it is possible for the traversal algorithm to consider pairs of symbols at once, thus reducing global execution time. Squaring has multiple effects, most prominently an increase in both transition count and alphabet size. The former is not a major problem if the source automaton is small, but it can (and, in our tests, does) quickly lead to memory exhaustion otherwise, e.g. when creating DFAs from large rule sets, thus limiting the applicability of the procedure when not working with NFAs. The increase in alphabet size, on the contrary, is particularly troublesome if the procedure is repeated multiple times, as the length of each symbol and the cardinality of the alphabet can quickly approach intractability. In order to avoid this issue iNFAnt performs an alphabet compression step that removes any symbol that does not appear on transition labels and renames all the remaining ones based on equivalence classes. Compression also makes the symbol set dense, allowing simpler data structures to be used for look-ups.
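The squaring-plus-compression step can be sketched as follows, under two simplifying assumptions of ours: a dict-of-sets NFA encoding and even-length inputs (a real implementation must also handle odd-length tails, e.g. by padding):

```python
# 2-striding sketch: square the alphabet by composing adjacent transitions,
# then compress the pair alphabet into dense integer codes. Our own toy
# encoding; assumes even-length inputs.

def stride2(delta):
    """delta: {(state, symbol): set(states)} -> (strided delta, pair table)."""
    strided, table = {}, {}
    for (s, a), mids in delta.items():
        for m in mids:
            for (m2, b), dsts in delta.items():
                if m2 != m:
                    continue
                # Alphabet compression: only pairs that actually label some
                # transition receive a (dense) code in the translation table.
                code = table.setdefault((a, b), len(table))
                strided.setdefault((s, code), set()).update(dsts)
    return strided, table

def run(delta, start, word):
    cur = {start}
    for sym in word:
        cur = {d for q in cur for d in delta.get((q, sym), ())}
    return cur

# Toy NFA: 0 -a-> 1, 1 -b-> 2, 0 -b-> 0 (our own example).
nfa = {(0, "a"): {1}, (1, "b"): {2}, (0, "b"): {0}}
s2, table = stride2(nfa)
word = "ab"
# Host-side rewriting: translate each symbol pair through the table.
codes = [table[(word[i], word[i + 1])] for i in range(0, len(word), 2)]
assert run(s2, 0, codes) == run(nfa, 0, word)   # same result, half the steps
```

Only three of the four possible pairs end up in the table here, which is the effect alphabet compression exploits: the strided automaton's alphabet stays proportional to the pairs actually used, not to the square of the original alphabet.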
Each multistriding step emits a translation table that (in general) maps pairs of input symbols into a single output symbol: this is possible without an explosion in symbol count because in most cases only a small portion of the possible symbol space is used. In order for a packet to be processed after multistriding, it must first undergo the same translation, a procedure currently performed on the host CPU using a hashed look-up table. The CPU executes this algorithm in a pipeline with GPU-based automaton traversal, reducing its run-time impact; the number of symbols to be processed is halved by each rewrite and, by extension, the time required to process each data unit on the GPU is reduced, with no modifications to the traversal algorithm. 4.4 System interface A major architectural choice is how to interface the iNFAnt engine with other external components, both for the creation of the required data structures and to pass packets to and from the GPU at run-time. The current iNFAnt prototype exposes a simple API that allows loading precompiled NFAs on the graphics card and submitting a batch of packets for processing. Packet copies can be performed through DMA and results are read back in a similar fashion. While GPU operations such as transfer initiation or kernel launch are not free, they execute quickly and, in most cases, have been found to be of little relevance when compared to the actual time spent in pattern matching. 5. EXPERIMENTAL EVALUATION iNFAnt has been evaluated by comparing its throughput and memory consumption with those achieved by HFAs, which represent the current state of the art for many purposes by following closely the behavior (and speed) of DFAs on non-troublesome rule sets while implementing strategies to prevent state space explosions. The test plan involved 3 regex sets designed to highlight different aspects of the applications under scrutiny.
The http-sig rule set is composed of 2 regular expressions that recognize specific HTTP headers; the resulting automaton is simple and almost completely linear, posing little challenge to both iNFAnt and HFA and providing baseline results to compare per-byte costs. The Snort534 set (taken from [3]) consists of 534 regular expressions; it can be divided into subsets that share an initial portion while the tails differ, a structure that makes it a good target for HFAs. Finally, all the protocol signatures from the L7 traffic classifier\(^4\) make up the L7-filter set, which is a very complex and irregular test set where no common prefixes or other properties can be exploited. In spite of its limited size (around 120 regexes), the L7-filter is the largest of our test cases in terms of memory occupation, regardless of the form in which it is compiled.

\(^4\)Available at http://l7-filter.sf.net/.

All the tests were performed using a single core of the otherwise-unloaded test machine, a 4-core Xeon running at 3 GHz and provided with 4 GiB of RAM; GPU tests were conducted on the same platform equipped with an nVidia GeForce 260 GTX graphics card with 1 GiB of RAM and 27 multiprocessors clocked at 1.24 GHz. All relevant caches (e.g. processor, disk) were warmed by performing unmeasured test runs. As input, a 1 GiB trace of real-world network traffic was used. The two platforms (GPU card and CPU host system) are significantly different in terms of architecture and specification, making their performance not directly comparable. However, both represent significant examples of commercially available middle-tier hardware, so the throughput measurements reported in the following sections should be regarded as order-of-magnitude estimates of the performance obtainable using commodity hardware devices.

```
 1: current_sv ← initial_sv
 2: while ¬input.empty do
 3:   c ← input.first
 4:   input ← input.tail
 5:   future_sv ← current_sv ∧ persistent_sv
 6:   while a transition on c is pending do
 7:     src ← transition source
 8:     dst ← transition destination
 9:     if current_sv[src] is set then
10:       atomicSet(future_sv, dst)
11:     end if
12:   end while
13:   current_sv ← future_sv
14: end while
15: return current_sv
```

Figure 2: Traversal algorithm

Figure 3: Throughput measurements

5.1 Pattern-matching throughput Figure 3 reports the best throughputs obtained for all techniques. In order not to inflate the results, all measurements were performed by taking into account only payload bytes and excluding packet headers (which were not examined). The 'NFA++' data series reports results obtained by enabling the self-looping states optimization described in sec. 4.1. iNFAnt allows the user to set the number of threads per packet and the number of packets submitted to the card in a batch for parallel processing: the best results obtained by exploring the possible configuration space are reported here. As can be seen, the throughput achieved by non-strided NFAs is comparable to, though lower than, the corresponding HFA results. This can be justified by the higher per-byte traversal cost of NFAs and by the higher instruction execution time of GPUs: even if parallelism reduces the amount of time required to process a single packet, this is not enough to completely compensate for the aforementioned aspects. However, the situation is vastly improved by the introduction of multistriding and the self-loop state optimization, leading to far better throughputs than HFAs.

5.2 Global memory consumption Figure 4 shows the amount of global memory required for automaton storage, which is by far the largest data structure used by both of the techniques considered; shared memory occupation in iNFAnt is considerably below the maximum amount in all test cases (about 6000 states are required for L7-filter). It appears clear that, in general, the NFAs used by iNFAnt use comparable or less memory than the corresponding HFAs; it is interesting to note that the L7-filter rule set is impossible to compile in HFA form on our test machine, regardless of the provisions built into the HFA model; its column in the chart corresponds to the lower bound of estimated consumption (4 GiB). A direct comparison with DFAs yields even better results for NFAs: besides L7-filter, Snort534 incurs state space explosion as well. The difference between NFAs and other approaches is exacerbated when considering multistriding: given the increment in size, only the adoption of NFAs makes this technique feasible. The NFA memory consumption reported must also be considered a worst-case measurement: the NFAs considered were not in a minimal, canonical form and it might be possible to further reduce their sizes by appropriately modifying the generation process.

5.3 Multistriding and self-loop handling Both throughput and memory occupation are affected by the iNFAnt optimizations. As expected, in most cases multistriding improves run-time performance, mainly because shortened input packets (in terms of symbols) require fewer iterations of the traversal algorithm; the improvements observed are roughly linear in the number of automaton squarings performed, a result consistent with our bottleneck analysis.
At the same time, multistriding yields larger automata, mainly because of increased transition counts; this effect is clearly visible in fig. 4. Nevertheless, iNFAnt is effective in dealing with this issue. On the one hand, as can be seen from the charts, the available amount of global memory is adequate in all cases; on the other hand, the increase in transition counts is somewhat offset by larger alphabets, making the number of transitions to be examined per symbol grow relatively slowly. As for the rewriting operation itself, in most practical cases it requires less time than automata traversal, so its cost can be completely absorbed by pipelining. Self-looping state optimization, on the contrary, directly reduces transition counts. While obviously not designed to completely counteract the effects of multistriding, the introduction of separate handling for self-looping states proves to be very effective both at reducing the number of transitions stored in global memory (especially with deeper multistriding) and at speeding up execution, once again thanks to lower per-symbol transition counts. 6. CONCLUSIONS AND FUTURE WORKS This paper presented the design and evaluation of iNFAnt, a novel NFA-based pattern matching engine. iNFAnt is explicitly designed to run on graphical processing units, exploiting the large number of execution cores and the high-bandwidth memory interconnections through its ad-hoc data structure and traversal algorithm; in more detail, the automaton representation and traversal algorithm adopted by iNFAnt match the CUDA architecture well, allowing full coalescing of memory accesses and requiring very little thread divergence. The adoption of the NFA model allows a significant reduction in memory occupation from the get-go, avoiding state space issues by design and enabling iNFAnt to handle complex rule sets; the optimized handling of self-looping states further reduces memory consumption while at the same time improving run-time performance.
Additional free memory, if available, can be traded off for processing speed with the adoption of multistriding, thus effectively counteracting the higher per-byte cost deriving from the non-deterministic model and the high instruction execution time of GPUs. Multistriding is especially feasible on the iNFAnt platform because of the lower baseline memory requirements and because the traversal performance depends on the number of transitions per input symbol; other FSA engines, especially if relying on a small alphabet, might be adversely affected by its introduction. While iNFAnt might not be the first GPU-based pattern matching engine, to the best of our knowledge it is one of the first to use NFAs in a technique specifically designed for graphical processors. In contrast to most approaches ported from general-purpose CPUs, the bottleneck is not memory bandwidth but the processing speed of the execution cores; higher throughputs could be achieved on the same architecture with more and/or faster execution units. With regard to future developments, we are planning to perform string rewriting directly on the GPU, thus completely offloading the host CPU; while the task itself is embarrassingly parallel, an efficient implementation of look-up tables on CUDA devices is not straightforward. A more thorough evaluation of run-time behavior is also in progress, comparing iNFAnt with further alternative techniques and performing additional scalability tests on more powerful hardware devices. 7. REFERENCES
Abstract—Spade is a new open source hardware description language (HDL) designed to increase developer productivity without sacrificing the low-level control offered by HDLs. It is a standalone language which takes inspiration from modern software languages, and adds useful abstractions for common hardware constructs. It also comes with a convenient set of tooling, such as a helpful compiler, a build system with dependency management, tools for debugging, and editor integration. Index Terms—Hardware description languages, languages and compilers, Design automation

I. INTRODUCTION

Developing digital hardware is traditionally done using Verilog or VHDL, both languages originating in the 1980s. While those languages have evolved since their inception, they are still lacking many subsequent advancements in programming language design. Spade1 is a new open source Hardware Description Language (HDL) which aims to reduce the development effort of digital hardware by taking inspiration from software languages and adding language-level mechanisms for common hardware structures, while still retaining the low-level control provided by HDLs. Being inspired by Rust and functional programming languages, Spade is expression-based and has a rich type system. It supports product types like structs and tuples, and sum types in the form of enums. The language also supports linear type checking, which can be used to ensure that hardware resources such as memory ports are used exactly once. Spade has built-in constructs and abstractions for common hardware structures such as pipelines, memories, and registers. Pipelining allows the user to specify a computation to be performed, with explicit statements for separating the stages of the pipeline, but without the need for separate variables for each pipeline stage. Retiming such a pipeline does not require changing any variables, only moving the staging statements.
Additionally, the delays of pipelines are explicit in the language and checked by the compiler to ensure that changes to a pipeline do not affect the computation results. In order to more accurately reflect the hardware being described, all logic is combinatorial by default, with the only sequential elements being registers and memories, which are instantiated explicitly. The rest of the paper is structured as follows. Related work is discussed in Section II, highlighting where Spade differs. The basic semantics are introduced in Section III, while in Section IV the use of linear types to model input and output ports is described. The software provided for Spade is discussed in Section V, with concluding remarks in Section VI.

II. RELATED WORK

In recent years, several new HDLs have been developed. A common approach is to embed the language as a Domain Specific Language (DSL) inside a conventional software programming language such as Scala, Python, or Ruby. There, the HDL consists of a library of hardware constructs which the user instantiates in the host language in order to describe their hardware. This allows the user to take advantage of the power of the host language in their hardware description, for example, by using software control flow structures to generate parameterized hardware and using object-oriented or functional programming approaches in the hardware description. Additionally, this approach exposes all the tooling available for the host language to the hardware designer, such as dependency managers, build tools, and IDEs. Examples of this include Chisel [1], SpinalHDL [2] and DFIant [3] embedded in Scala, Amaranth [4] embedded in Python, ROHD [5] embedded in Dart, and RubyRTL [6] embedded in Ruby. Some languages take subsets of conventional programming languages and compile them to hardware.
An example of this is Clash [7], which compiles a large subset of Haskell to hardware by taking advantage of the natural mapping between a pure functional language and hardware. Another example is PipelineC [8], which is a C-like HDL with automatic pipelining. While not regular C, it is close enough to allow many PipelineC programs to be compiled and “simulated” by a standard C compiler. TL-Verilog [9] is a SystemVerilog extension which supports timing-abstract modeling, where behavior and timing are separated. It provides language features for common hardware constructs such as pipelines and FIFOs. There are also several languages which are completely independent of existing languages. One such example is Bluespec [10], in which hardware is described by guarded atomic actions. Another example is Silice [11], which contains abstractions for common hardware constructs without losing control over the generated hardware. In addition, it provides a higher-level description style where the design is expressed as sequences of operations with software-like control flow. Finally, a common alternative for describing hardware is High-Level Synthesis (HLS), in which higher-level languages, typically designed for software, are compiled to hardware.

1https://gitlab.com/spade-lang/spade/

This design methodology is quite different from the HDLs described previously. Specifically, HLS tools generally provide limited control over the hardware being generated. In particular, the exact timing of any circuit is generally abstracted away, and synchronization between HLS-generated modules is done at runtime via synchronization signals. Now, a natural question to ask is whether there is a need for another HDL, and what Spade offers that the existing work does not. First, it is not a DSL embedded in another language, which separates it from the likes of Chisel and Amaranth. This gives more freedom in the design of the language, as it is not restricted to the expressiveness of a DSL.
For example, most embedded HDLs are forced to invent new “keywords” in order to not clash with the host language, a typical example being Chisel using when and otherwise instead of the more familiar if and else. This problem runs deeper than surface-level keywords, however. As an example, Scala has pattern matching, but in Chisel and SpinalHDL, that feature can only be used at “build time” on Scala values, not in hardware on “runtime” values. Finally, a potential user is not required to learn both the host language and the DSL when initially picking up the language. Unlike Clash and PipelineC, Spade is not restricted to conforming to the execution model of a host language, which means that the language can be designed directly for hardware description without any restrictions. Spade describes the behavior of circuits in a cycle-to-cycle manner, which gives more control than Bluespec, where the fundamental abstraction is atomic rules, and PipelineC, where the compiler automatically performs pipelining. Finally, Spade is similar in spirit to TL-Verilog. However, by designing a new language instead of building on SystemVerilog, there is more freedom to explore new language constructs.

III. BASIC SEMANTICS

In order to describe the programming model and semantics of Spade, an example is used. Listing 1 shows Spade code which blinks an LED at a configurable interval. Lines 1–2 define an entity called blink which takes a clock signal of type `clock`, a reset signal of type `bool`, and a 20-bit integer, `int<20>`, specifying the maximum value for the counter. The return value is a single `bool` value which drives the LED. The entity keyword specifies that this `unit` can contain sequential logic. One can also write a function, `fn`, which only allows combinational logic, and `pipeline`, which defines a pipeline. The details of pipelines are discussed later.
When instantiating a unit, the syntax differs between functions, entities, and pipelines, giving a reader of the code an indication of what can happen behind an instantiation. Line 4 defines a register called `counter` which is clocked by the `clk` signal and reset to 0 by the `rst` signal. Registers in Spade are explicit constructs rather than being inferred from the use of something like `rising_edge`. Registers are, together with writes to memories, the only sequential constructs in the language; everything else describes combinational circuits. In Spade, the behavior of a circuit is described in a cycle-to-cycle manner. The new value of a register in the design is given as an expression of the register values in the current clock cycle. All variables in Spade are immutable: they can only be assigned once. Single assignment is possible because Spade is an expression-oriented language, meaning that most statements are expressions and produce values [12]. For example, the `counter` register is set to the value of the expression following the equals sign on line 4, in this case an if-expression spanning lines 5–10. If the counter has reached the max value, the value returned by the if-expression is 0; otherwise it is the incremented value of the counter. Compared to traditional imperative HDLs, the use of immutable variables and expression-based control flow is closer to the resulting hardware, where variables correspond to wires and “control flow” is implemented with multiplexers. The language also does not have an explicit return keyword; the last expression in a unit body is the output of the entity, here on line 11. The call to the `trunc` function on line 9 truncates the value of `(counter + 1)` from 21 bits down to 20 bits. This is needed because Spade prevents accidental overflow by using the largest possible result widths for most arithmetic operations. More details on the type system are given in Section III-B.
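The cycle-to-cycle register semantics described above can be modeled as a pure next-state function. The following is a hypothetical Python sketch (not Spade, and not the actual Listing 1): the register's next value is a function of its current value, mirroring the wrap-or-increment if-expression, with masking standing in for Spade's `trunc` call.

```python
MAX_20_BIT = (1 << 20) - 1  # largest value representable in int<20>

def next_counter(counter: int, max_value: int) -> int:
    """Next value of the counter register: wrap to 0 at max_value,
    otherwise increment. Masking to 20 bits models trunc()."""
    return 0 if counter == max_value else (counter + 1) & MAX_20_BIT

def simulate(cycles: int, max_value: int) -> list:
    """Run the register for a number of clock cycles from reset (0)."""
    counter = 0
    trace = []
    for _ in range(cycles):
        trace.append(counter)
        counter = next_counter(counter, max_value)
    return trace
```

Because the register's behavior is a pure function of the current cycle's values, simulating it is just repeated function application, which is exactly the mental model the cycle-to-cycle semantics encourage.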
Spade has similar scoping rules to most modern software languages: variables are only visible below their definition, which makes it more difficult to accidentally create combinatorial loops. Additionally, these scoping rules make it easier for a developer to find the definition of a variable. The definition will always be above where it is used, and generally be grouped with its assignment. Sometimes it is still useful to create a loop of dependencies between variables; this is done explicitly using the `decl` keyword.

A. Pipelines

Pipelining is an important construct in most hardware designs, as it allows designs to maintain a high clock frequency and throughput at the cost of latency. However, despite their importance, most HDLs require the user to manually build their pipelines, a process that is tedious and error-prone as one must make sure that computations are performed on values corresponding to the correct time step. In some cases, designers use naming patterns for their variables, and ad-hoc static checking tools to verify that the pipelining is correct [13]. Spade, on the other hand, has language-level support for describing pipelines, where the user describes which computation to perform, with explicit statements separating the pipeline stages. The pipelining feature decouples the description of the computation from the description of the pipeline itself. In a pipeline without feedback a developer can easily add and remove pipeline stages as needed without altering the output value of the pipeline. If such changes potentially affect the outcome of other parts of the project, the inclusion of the depth at the call site ensures that the user is made aware of such potential issues via compiler error messages. In pipelines with feedback the structured description of the stages still helps during design iteration, even if some manual care is required to ensure correct computation.

B. Types and Pattern Matching

Spade is a statically typed language with type inference.
This means that types can be omitted in most code since the compiler will infer the appropriate types from context, and report errors if types cannot be inferred. Like most languages, Spade has primitive types such as integers and booleans, and compound types like arrays, tuples, and structs. In addition, the language supports enum types inspired by software languages like Rust, Haskell, and ML. Unlike their C or VHDL namesakes, these enums have data associated with them in addition to being one of a set of variants. A common use case for this construct is the Option type, which is defined in the standard library as shown in Listing 4. It is generic over a contained type T and takes on one of two values: Some, in which case a value, val, of type T is present, or None, in which case no such value is present. One can view the Option type as having a valid/invalid signal bundled with the data it validates. The main way to interact with enums in Spade is the match-expression, which allows pattern matching on values. As an example, Listing 5 shows the match-expression in use on a tuple of Option values: a and b. The resulting hardware is shown in Fig. 2. If a is Some, its inner value is bound to val and returned from the match expression. If a is None but b is Some, its inner value is returned. Finally, if both a and b are None, 0 is returned. Enums are encoded as bits that specify the currently active variant, the discriminant, followed by bits containing the data of the variants.
C. Memories

As discussed earlier, Spade requires register definitions to include an expression for the new value of the whole register as a function of the values from the previous clock cycle. However, this abstraction becomes problematic when working with memory-like structures in which only a small part of the total state is updated in each cycle. In order to mitigate this, Spade has an explicit construct for memories, currently implemented as entities defined in the standard library which the compiler handles separately. Instantiation of a memory is done using the clocked_memory entity, which creates a memory with a fixed set of write ports. Reading from said memory is done via the similarly defined read_memory entity. This means that, unlike in VHDL and Verilog, memories are explicitly instantiated as memories, rather than inferred from the structure of the code. An example of using memories is included in the next section.

IV. PORTS

The discussion so far has been centered around computations on values. A unit receives a set of values as inputs, and produces a set of values as output. Internally, values are used in computation and are pipelined by pipelines. This becomes inconvenient when working with something like a memory which is external to the current unit. The unit must return an address, a write-enable signal, and a value to write to the memory. Then the value read from the memory must be passed as an input to the unit. The memory, in turn, produces an output value which must be fed back into the controlling unit.
This approach has a few issues: the control signals and output are delayed by the pipeline mechanism. This pipelining introduces additional delays unless manually mitigated, for example by stage references. Additionally, there is no clear link between memory control signals and the corresponding output. In order to mitigate this issue, the Spade type system contains the concept of ports and wires. Wires come in two forms, mutable and immutable, denoted by &mut and & respectively. An immutable wire can be used to pass values via units without them being delayed in pipelines. Mutable wires are similarly not pipelined but allow setting the value of the wire in a module which takes the wire as an input. Finally, ports are collections of wires and other ports which are bundled together. Listing 6 contains an example of how the port feature can be used to share a memory between two pipelines. The first lines define port types containing read, write, and address wires. Line 8 defines an entity where the actual memory is instantiated. It returns a read-port and a write-port. Line 9 has been trimmed for space but instantiates the mut-wire r_addr and a WPort. Lines 10–11 instantiate a memory with a single write-port using the clocked_memory entity, and lines 12–14 asynchronously read one value from the memory. At the end of the entity, the read-port is assembled from the mutable address and the result wire, and returned along with the write-port. The pipelines reader and writer are two pipelines which use the read-port and write-port respectively. On line 20, the reader pipeline sets the address it wants to read from, and reads the resulting value on line 21. Similarly, the writer sets the target address and write value on line 27. Finally, the three units are instantiated on lines 32–34. The ports returned by the memory are passed to the reader and writer.
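The split between a clocked write port and an asynchronous read described above can be sketched as a small behavioral model. This is a hypothetical Python model with invented names (the real clocked_memory and read_memory are Spade standard-library entities): writes driven onto the write port only take effect at the clock edge, while reads combinationally observe the current contents.

```python
class ClockedMemoryModel:
    """Behavioral model of a memory with one clocked write port
    and asynchronous (combinational) reads."""

    def __init__(self, size: int):
        self.mem = [0] * size
        self.pending = None  # (addr, value) scheduled for the next edge

    def drive_write_port(self, write_enable: bool, addr: int, value: int):
        """Drive the write port for the current cycle."""
        self.pending = (addr, value) if write_enable else None

    def read(self, addr: int) -> int:
        """Asynchronous read: sees the contents before the next edge."""
        return self.mem[addr]

    def clock_edge(self):
        """Commit the pending write, if any, at the rising edge."""
        if self.pending is not None:
            addr, value = self.pending
            self.mem[addr] = value
        self.pending = None
```

The model makes the one-cycle distinction concrete: a value written in cycle N is not visible to a read in cycle N, only from cycle N+1 onward, which is exactly the hazard the port mechanism lets two pipelines reason about explicitly.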
There are some pitfalls when working with mutable wires: a unit returning a mutable wire expects a value to be set for that wire, otherwise the value of the wire may be undefined, or, if it is set conditionally, a latch could be inferred. Similarly, if a wire is driven by multiple drivers, conflicting values may cause issues. While it is possible to catch this in simulation, it is better to let the compiler catch such errors. The solution to this is inspired by [14], which uses affine types to ensure correct memory access patterns in an HLS tool. Affine types can guarantee that a value is used at most once, resolving the multiple-driver problem; however, to ensure that all mutable wires are set exactly once, the stronger notion of linear types [15] is required, and implemented in Spade. Because Spade describes behavior in a cycle-to-cycle manner, the implementation of a linear type system is easier than in the general case. Each resource of linear type must be consumed exactly once each clock cycle. A value is consumed when it is set using the set statement, or passed to another unit, which delegates the consumption requirement to that unit. Linear type checking happens after normal type checking, meaning that the compiler knows which resources are of linear type and must be checked. For each expression which produces a resource of linear type, a tree is created where leaf nodes represent primitive linear types, and non-leaves represent compound linear types such as tuples or structs. Any statement that aliases a resource, such as a let-binding of an expression, or field access on a struct, creates a pointer from the alias to the corresponding tree and node. When an expression or statement which consumes resources is encountered, such as a set-statement or a resource being passed to a unit, the nodes corresponding to the consumed object are marked as consumed.
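The consumption tracking over these trees, including the double-consumption check and the final exactly-once pass, can be sketched as follows. This is a simplified hypothetical Python model with invented names, not the compiler's actual implementation.

```python
class Node:
    """One resource of linear type; children model compound types
    such as tuples or structs."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.consumed = False

def consume(node, errors):
    """Mark a node and its subtree consumed, reporting any node that
    was already consumed (used more than once)."""
    if node.consumed:
        errors.append(f"{node.name} consumed more than once")
        return
    node.consumed = True
    for child in node.children:
        consume(child, errors)

def check_fully_consumed(node, errors):
    """Final pass: every leaf (primitive linear resource) must have
    been consumed; an unconsumed leaf was never set."""
    if not node.children and not node.consumed:
        errors.append(f"{node.name} was never set")
    for child in node.children:
        check_fully_consumed(child, errors)
```

Consuming a compound node (for example, passing a whole tuple of ports to a unit) marks the entire subtree, so a later use of any field is caught as double consumption, while the final pass catches resources that were aliased but never actually set or passed on.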
If the node or its child nodes are already marked as consumed, the resource is used more than once and an error is thrown. This ensures that nodes are not used more than once, but does not guarantee that they are used exactly once. Therefore, at the end of the process, a final pass goes over all the trees to ensure that each node is consumed. If the traversal finds an unconsumed leaf, it represents a resource of linear type which was not set, and an error is reported. To exemplify the linear type checking algorithm, Fig. 3 shows this process in action for the code in Listing 7. In the figure, \( e_x \) represent anonymous names given to sub-expressions before they are bound to variables. A dashed node represents a resource that is not consumed, a solid node one that is consumed, and a crossed node indicates double consumption.

V. SPADE SOFTWARE

A good set of tools for a language, both for working with the language itself and for integration with existing tools, can be of huge help in driving language adoption. First and foremost, the Spade compiler and the language are built from the start to produce useful and easy-to-read error messages, such as the one shown in Listing 3, and unhelpful diagnostics from the compiler are considered bugs. The language is also designed to prefer emitting errors over potentially surprising behavior, for example by requiring explicit truncation after potentially overflowing arithmetic.

A. Compiler Architecture

The Spade compiler is a multi-stage compiler written in Rust, which compiles the input Spade code to a target language, currently a slim subset of SystemVerilog. The compilation process starts with lexing and parsing to generate an Abstract Syntax Tree (AST). The AST is then traversed thrice, first to collect all types in the program, then to collect all units, and finally to be lowered into a High-Level Intermediate Representation (HIR).
The AST to HIR lowering process retains the tree structure of the AST but resolves names and scoping rules, and performs initial semantic analysis. Once the HIR is generated, type inference is performed, followed by linear type checking as discussed earlier. The HIR, along with the type information, is used to generate a Mid-Level Intermediate Representation (MIR). In this step, more semantic analysis is performed, and the tree structure is flattened to a list of simple statements. Finally, SystemVerilog is generated from the MIR. SystemVerilog is chosen as the target language as it is well supported by both open source and proprietary simulation and synthesis tools, but the compiler is written in a target-independent way to enable experimenting with, or changing to, a different backend with ease. Especially interesting backends are the CIRCT [16] dialects, such as LLHD [17] or Calyx [18], which can offer language-independent optimization as well as code generation of output languages other than SystemVerilog.

B. Tooling and Ecosystem

Spade comes with the build tool Swim2, which manages project files, backend build tools, and dependencies. A Swim project consists of Spade files and a configuration file written in TOML. This file contains, among other things: build tool parameters, raw Verilog files to be included in the project, and external dependencies. Swim then manages namespaces of project files, downloading and versioning of dependencies, as well as calling the synthesis and simulation tools with a convenient interface. It also has a plugin system for extending the build flow, for example by running commands to generate Spade or Verilog code, loading additional Yosys plugins, or bundling the output bitstream into an executable for a microcontroller which in turn programs a target FPGA.

2https://gitlab.com/spade-lang/swim

To facilitate integration of Spade with existing projects, units can be annotated to prevent name mangling. Spade units can also be marked as external, in order to allow use of existing IP blocks within Spade projects. A tree-sitter grammar and a rudimentary Language Server Protocol (LSP) [19] server are available, enabling an IDE-like experience in any text editor supporting LSP and/or tree-sitter. There is limited effort to generate Verilog similar in structure to the input Spade code. However, the compiler does attempt to keep names readable, and has functions for mapping names and expressions back to their source location to aid debugging. Swim automatically translates values in VCD files into their high-level Spade values, and the compiler includes a source mapping in the output Verilog which makes things like timing reports readable without looking at the output Verilog.

C. Test Benches and Simulation

The Spade language itself is designed primarily for hardware description and synthesis, rather than simulation. However, in order to verify the correctness of the resulting hardware, simulation and test benches are essential. Spade tests are written using cocotb [20], a Python-based co-simulation test bench environment for verification. The cocotb library is extended with features for writing Spade values as inputs and outputs to the unit under test, while the Verilog generated by the compiler is simulated with off-the-shelf Verilog simulators. As an example, Listing 8 shows part of a test bench for a memory-mapped timer peripheral.

Listing 8. Example of a test bench for Spade written using cocotb.

```python
# top=peripherals::timer::timer_test_harness
@cocotb.test()
async def timer_works(dut):
    s = SpadeExt(dut)
    clk = dut.clk_i
    await start_clock(clk)

    s.i.mem_range = "(1024, 2048)"
    s.i.addr = "1024 + 0"
    s.i.memory_command = "Command::Write(10)"
    await FallingEdge(clk)
    s.o.assert_eq("10")
```
The first line specifies the module being tested, and the next two lines are a standard cocotb test case definition. Line 4 wraps the cocotb design under test in a Spade class which extends the cocotb interface to add Spade-specific features. Lines 6–7 start a task to drive the clock of the Design Under Test (DUT). On lines 9–11, the inputs to the module are set to values which are compiled and evaluated by the Spade compiler. Finally, line 13 asserts that the output is as expected, again passing a string containing a Spade expression as the expected value. In order to achieve this, the Python test bench must be able to compile Spade code, and in order to allow the use of types and units defined in the project inside a test bench, the state of the compiler must be made available to the test bench. For this reason, the state of the compiler after building a project is serialized and stored on disk. Additionally, parts of the Spade compiler are exported as a Python module which reads the stored state, then compiles and evaluates the expressions used in the test bench. The resulting bit vectors are used to drive the inputs of the DUT, or compared to the expected output.

VI. CONCLUSIONS

Spade is an HDL which attempts to ease the development of hardware for FPGAs and ASICs. To do so, it makes common hardware constructs like pipelines, registers, and memories explicit and part of the language. It is heavily inspired by modern software languages, e.g., by integrating a powerful type system and pattern matching. Unlike most current alternative HDLs, Spade is its own standalone language with a custom compiler, and it does not abstract away the underlying hardware. Finally, it comes with useful tooling, such as a compiler designed to emit detailed error messages, a build system with dependency management, and an LSP implementation for an IDE-like experience. References
**Abstract**

Firmware and application software development is often the critical path for many embedded designs. Problems that appear in the late phases of development can be extremely difficult to track down and debug, putting project schedules at risk. Traditional debug techniques cannot always help to localize the issue. This whitepaper presents the concept of debugging with "real-time trace" hardware assistance and shows its benefits, including how it can vastly reduce the amount of time needed to track down problems in the code. In addition, this paper introduces the other benefits available with a real-time trace system, such as hot-spot profiling and code coverage.

**Introduction**

A study from Cambridge University [1] has estimated that software developers spend almost 50% of their time debugging code, at a total cost of $312 billion per year in terms of developer salaries and overhead. Debugging native desktop applications is already a complex task, but cross-debugging an embedded system can be significantly more challenging for a number of reasons: real-time execution, lack of visibility into the target resources, dependencies on external peripherals, etc.

For many embedded designs, firmware and application development is often on the critical path and the last phase of the design cycle. Problems with the firmware can delay the entire release, causing cost overruns in the best case and a completely missed market opportunity window in the worst case. Given these challenges, it's critical for engineering managers to provide their developers with powerful debugging tools that can help to quickly resolve even the most challenging bugs.

**Traditional Embedded Debugging Techniques**

How do engineers typically debug? Are the tools at their disposal effective? Traditionally, a couple of different techniques are used to debug embedded systems, but these have a number of drawbacks.
**"printf" Debugging**

If code is not behaving as expected, engineers add diagnostic print statements that emit the relevant data to the terminal as the code runs (see Figure 1). They can then examine this output in the hope of finding clues about the cause of the problem (for example, the value of a certain variable at a given point in time is not what is expected). This technique can work when the bug is caused by a deterministic algorithmic error and the developer is fairly sure of the approximate place in the code where the problem occurs. However, it can be time-consuming and is not guaranteed to localize the problem. Some disadvantages of this method:

- It is almost useless in real-time code, since inserting print statements changes the program's timing and code paths and may entirely mask the problem.
- Many deeply embedded systems do not have UARTs connected to debug consoles, and semi-hosted I/O via the debugger also affects real-time performance.
- Many edit-compile-run iterations are needed; this can be time-consuming if the code base takes a long time to compile or if the code must run for a long time to hit the problem area.

Surprisingly, a number of developers still rely on this debugging technique, which likely contributes to the huge debugging costs reported in the research.

---

**Run-time Debugging**

Run-time debugging is a common technique used by most embedded developers. The embedded CPU has JTAG pins exported to a header on the development board, which connect to a small debug "pod" that serves as an interface between the target and the host PC. The developer runs a cross-debugger on the host, which then communicates with and controls the target CPU over the JTAG connection (see Figure 2).
The debugger typically offers functionality like:

- Single stepping through source or disassembly
- Inserting code and data breakpoints, and free-running to them
- Viewing and changing local and global variables
- Viewing and changing target memory and registers

Although this technique allows for much more in-depth debugging than the "printf" method, one major drawback is that the target must be halted in order to examine its resources. In some cases, depending on the CPU, certain items can be accessed while the target is running, but most items are only available when the target is halted. In a real-time system, this can be detrimental since stopping the target immediately changes real-time behavior; in many cases the problem being investigated will disappear when code is single-stepped or halted. Stopping the target causes further problems if the application is communicating with peripherals or external nodes that are not aware that the CPU has halted and are still either sending or receiving data. If the system is notified via interrupts about external events like incoming packets from a network MAC or mailbox messages from another CPU, those interrupts risk getting lost, and the remote side may time out if the target is stopped for too long. Another significant drawback of this debugging technique is that it can only show the current state of the system (current values of registers, variables, memory, etc.) at a particular point. There is no way to see any history of the code's execution.

**A Better Type of Debugging: Real-Time Trace**

Given the disadvantages inherent in the printf and run-time debugging techniques, these tools are insufficient for finding the most challenging bugs. With only these tools at the developer's disposal, the debugging effort could add weeks or more to the development schedule. This is where real-time trace (RTT) debugging can be used to great advantage.
With this method, the embedded SoC contains an RTT module that sits next to the processor. As the target runs, the trace module records information about the execution on the target. This may include:

- Sequence of program counter (PC) values
- Memory reads/writes and associated data values
- Register reads/writes

Combined, this information provides a complete picture, or trace history, of exactly what happened on the target during execution. Importing this data into the appropriate analysis tools on the host side allows the engineer to do a post-mortem debug to quickly pinpoint the issues and uncover exactly why the code is not performing as expected. The following are some key advantages of real-time trace:

- Developers can see why and how execution arrived at a certain point, via a back-trace or instruction history, and can now easily answer questions like "Why and how did I end up in this function?" and "Why did my code crash?"
- Developers can easily correlate a corruption event with a specific instruction to see exactly where data corruption occurred
- Trace information can usually be captured non-intrusively, meaning that the application's real-time performance is not affected
- Trace information allows developers to profile their code to find out where time is actually being spent and to determine if deadlines are being met
- Trace can capture and record intermittent behaviors that may not always occur. Once captured, developers can easily track down the root cause

Despite real-time trace's numerous advantages, there are some things to keep in mind when investigating trace solutions:

- The RTT logic block consumes space on the SoC. Although modern process technologies make this less important, it's still a consideration. It's important to select an RTT technology that is build-time scalable so it can be configured to match the nature and typical use case of the SoC
- The RTT logic block consumes extra power.
It's therefore important to choose an RTT solution that allows the RTT block to be gated off when not in use
- An external debug probe is often needed, so choosing a solution that works with an efficient and cost-effective debug probe is important

**Choosing a Real-Time Trace Module**

There are many types of RTT modules, differing in functionality, area, power, and performance. Depending on the application, various types of RTT modules may be used. A deeply embedded processor running a single process might take advantage of a lightweight trace module that records only the last eight or sixteen branches in the code. Such a module has very low area overhead, yet typically still allows reconstruction of a trace of at least 100 instructions. On the other hand, a multicore, high-performance embedded processor running Linux requires a more comprehensive solution, as traces of tens of thousands of instructions or more may be needed to debug complex issues in a multi-process environment. The following sections describe the tradeoffs in more detail.

**Configurability**

Typically an RTT module can be configured to save only the information necessary for debugging:

- Small trace: save only non-sequential PC values
- Medium trace: save PC values and memory reads/writes
- Full trace: save PC values, memory reads/writes, and register writes

These options must be chosen at design time because additional hardware must be added to connect the RTT module to the internals of the processor. An RTT module that records a full trace makes debugging easier than one that records only a small trace. However, these configuration capabilities ultimately affect the overall gate count and performance of the RTT module, as well as the amount of memory required to store the trace history. Designers can therefore make area vs. functionality vs. performance tradeoffs.
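To see concretely why a "small trace" of only non-sequential PC values can be enough to recover execution, here is a toy Python sketch. It is purely illustrative, not any vendor's trace format: the fixed 4-byte instruction size and the (source, target) pair encoding are assumptions of the model.

```python
# Toy model of "small trace" configurability: record only non-sequential
# control flow as (branch source, branch target) pairs, then replay the
# full PC sequence from them. A fixed 4-byte instruction size is assumed.
INSTR_SIZE = 4

def compress(pcs):
    """Return (start_pc, branch_pairs, length) for an executed PC stream."""
    pairs = [(pcs[i], pcs[i + 1])
             for i in range(len(pcs) - 1)
             if pcs[i + 1] != pcs[i] + INSTR_SIZE]
    return pcs[0], pairs, len(pcs)

def reconstruct(start, pairs, length):
    """Replay the stream: step sequentially, taking recorded branches in order."""
    out = [start]
    pc = start
    taken = iter(pairs)
    nxt = next(taken, None)
    while len(out) < length:
        if nxt is not None and pc == nxt[0]:
            pc = nxt[1]          # the recorded branch fires here
            nxt = next(taken, None)
        else:
            pc += INSTR_SIZE     # sequential execution needs no record
        out.append(pc)
    return out

run = [0, 4, 8, 12, 4, 8, 12, 16]   # a small loop body executed twice
start, pairs, n = compress(run)
print(pairs)                         # only the single backward branch was recorded
```

The point of the sketch is the ratio: eight executed instructions are captured by one recorded branch, which is why a small trace buffer can cover a long back-trace.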
**Filters**

No matter how the RTT module is configured, it will generate a huge amount of data during capture, since it is potentially recording trace history for a processor executing at hundreds of MHz. One technique to reduce the amount of data generated is compression, but a full trace can still consume gigabytes of space. A better solution is to use the filters within the trace module, which allow developers to constrain the trace to regions of interest within the code. Some examples of how filters can restrict the trace:

- Trace only when function func() is executing
- Start tracing when function bar() begins execution and stop when bar() finishes, tracing all activity, including sub-functions, during that time
- Trace memory accesses only when global variable X is written
- Trace memory accesses when value 1234 is written to address Y
- Trace register writes for r3, r4, r5 and r6

With the detailed control that filtering offers, developers can focus the tracing results on specific areas of the application, which ultimately facilitates debugging.

**Trace Storage**

Memory to store the trace history is the essential component of the RTT module. In general, trace data can either be stored locally in on-chip or on-board memory for later retrieval by the host, or exported directly from the target in real time using a dedicated high-speed interface. Table 1 lists some different storage options and where they can typically be used. A local buffer made of flip-flops is typically implemented inside the RTT module. It has the fastest possible access time, but the memory size is very limited. When the local buffer is too small, an RTT module with a dedicated on-chip memory may be used. The RTT module can be connected to the memory either directly or via a dedicated bus fabric. When even larger storage is required, the RTT module can use shared system memory.
In this case it is connected to an existing bus fabric as a typical master. However, this approach is no longer non-intrusive, as the RTT module competes for bus access with other masters. When very large storage is required while maintaining non-intrusive operation, an "off-board" memory can be connected to the RTT module via a dedicated high-speed debug interface. Figure 3 illustrates different approaches to connecting trace storage to the RTT module.

**Nexus Interface**

Nexus (IEEE-ISTO 5001™) is an open industry standard that provides a general-purpose interface for the software development and debug of embedded processors [2]. Nexus is a packet-based protocol that can use either a typical JTAG port or a dedicated high-speed auxiliary port. If an RTT module can export the auxiliary port of the Nexus interface, it can be used to dump trace history to the external debug pod in real time.

The advantages of an RTT with Nexus:
- No need for a dedicated trace memory, which saves area and power
- No need to save trace history in shared system memory, which avoids intrusive memory transactions

The disadvantages of an RTT with Nexus:
- Pin count increase (Nexus typically adds 7-19 pins, depending on the data width)
- A special debug probe is required

<table>
<thead>
<tr>
<th>Storage type</th>
<th>Memory type</th>
<th>Typical size</th>
<th>Good for small trace</th>
</tr>
</thead>
<tbody>
<tr>
<td>Local buffer</td>
<td>Flip-flops</td>
<td>32-512 Bytes</td>
<td>✓</td>
</tr>
<tr>
<td>Dedicated on-chip memory</td>
<td>SRAM</td>
<td>1-128 kBytes</td>
<td>✓</td>
</tr>
<tr>
<td>Shared system memory</td>
<td>SRAM (on-chip) / DRAM (on-board)</td>
<td>Up to a few GB</td>
<td>✓</td>
</tr>
<tr>
<td>External trace pod</td>
<td>Any (off-board)</td>
<td>Up to a few GB</td>
<td>✓</td>
</tr>
</tbody>
</table>

Table 1: RTT trace storage options

**Multicore Trace**

An RTT module that natively supports
multicore systems allows designers to connect a single trace module to multiple processor cores. Eliminating the need to instantiate separate trace modules for each core saves silicon area. It also saves pin count when the Nexus interface is used, as only one interface per multicore processor is required.

**Power Saving Features**

An RTT module is used for debugging only. When programs are working correctly, the RTT is not needed, yet it continues to consume precious power. It is therefore essential to minimize power consumption while the RTT is idle. An RTT with the following features can help:

- Clock gating allows systems to minimize dynamic power: the RTT clock is switched off whenever the RTT is not in use.
- Power gating allows systems to minimize leakage power by switching off the power supply for certain areas of a chip. Depending on the implementation of the RTT module, it may support partial or full shutdown. This is particularly important for RTT modules with large dedicated trace memories.

**Trace Probes**

An RTT module with a small on-chip memory does not require any special trace probes. Virtually any low-cost JTAG probe can be used to export the trace to the host for analysis. Even if the trace exceeds tens of megabytes, reading it via JTAG typically takes only minutes. Using a Nexus interface requires a dedicated Nexus-compatible high-speed trace probe, such as the Ashling Ultra-XD.

**Ashling Ultra-XD**

The Ultra-XD is a powerful high-speed trace and run-time control debug probe. It allows capturing and viewing of program flow and data accesses in real time, non-intrusively, and also supports program download and exercising (go, step, halt, breakpoints, interrogating memory, registers, and variables). This means a single probe can be used for both tracing and regular run-time debugging. The Ultra-XD captures up to 4 GB of trace frames, allowing long tracing periods before the trace buffer overflows.
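When the trace buffer does fill, one common option is to keep only the most recent events rather than stop recording. A toy Python sketch of such a circular trace buffer (purely illustrative, not the probe's actual implementation):

```python
from collections import deque

# Toy circular trace buffer: with a fixed capacity, the buffer always
# holds the most recent events instead of stopping at the first overflow.
class CircularTraceBuffer:
    def __init__(self, capacity):
        self._events = deque(maxlen=capacity)

    def record(self, event):
        self._events.append(event)   # the oldest event is dropped on overflow

    def contents(self):
        return list(self._events)

buf = CircularTraceBuffer(4)
for pc in range(0, 40, 4):           # record ten PC events into a 4-entry buffer
    buf.record(pc)
print(buf.contents())                # only the last four events survive
```

This "last n events" behavior is exactly what makes a circular configuration useful for catching the lead-up to a crash that happens late in a long run.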
The Ultra-XD trace buffer can optionally be configured as a circular buffer, so that it always records the last 'n' trace events rather than events from the beginning of the session until the buffer reaches capacity. The Ultra-XD features automatic trace clock and data skew adjustment ("AUTOLOCK") to ensure the integrity of captured high-speed data, automatically calibrating itself to the embedded target's trace data port. The Ultra-XD supports single- and multicore designs and is compatible with hardware debug standards including cJTAG, JTAG, and RTT Nexus trace. Figure 4 shows the Ultra-XD connected in a typical system.

Figure 4: Ashling Ultra-XD probe

**Trace Analysis Software**

Raw trace history data on its own is not very useful. Typically, the gathering and analysis of trace captures is controlled entirely by RTT-aware debuggers. The debug process works as follows:

1. Developers set up trace sources and filters and select the storage memory if multiple choices are available (for example, on-chip memory or a Nexus-compatible debug pod), then activate tracing. A typical configuration window is shown in Figure 5.
2. After tracing is activated, developers run their applications under debugger control as normal.
3. After execution ends (or is stopped), developers acquire the trace data for display within the debugger.

The trace data shows the sequence of instructions that were executed. If configured, memory and register accesses are shown interleaved with the instructions. This is essentially an 'instruction history' display showing the sequence of events that ends at the current program counter. Here's an example of how developers can make use of the captured trace data: the code is hitting an exception and halting inside an exception handler, and the engineer needs to understand what sequence of events caused the exception to occur.
After the trace is configured, the developer would set a breakpoint on the exception handler and run the code until the breakpoint hits. After acquiring the trace, the developer can scroll backwards through the trace log to find the sequence of events and function calls that led to the exception.

**Debugging with Trace Information**

A large trace log can be difficult and time-consuming to parse. For complex problems, the debug tools should therefore be able to assist with the trace analysis. Some debuggers offer the ability to 'debug' through the trace log; in other words, typical run-time debugging techniques can be used, but with the captured trace data. Developers can capture a non-intrusive trace and then have the debugger's full 'execution' capabilities at their disposal to analyze it. They have access to standard run-time debugging features like breakpoints, watchpoints (data breakpoints), stepping, etc. In addition, they can use backwards execution, a unique feature not available with standard debugging. When using trace, developers typically need to understand why they have arrived at a certain point in the code. Reverse execution can help them 'unwind' the execution and see where the program took an unexpected path or exactly why some data corruption occurred. For example, they can track variables, memory, and register values, observing how they change during the backwards execution.

**Profiling**

The trace data contains information about the execution, and many trace-aware debuggers can analyze this data to generate profiling information about the target's execution during the trace capture. For example, it's helpful to understand where the particular 'hotspots' are within an application, to see if there are any unexpected bottlenecks, and to target future optimization efforts. Profiling data can be available both on a per-function basis and on a per-line basis.
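As a rough illustration of how per-function hotspot data can be derived from a trace, here is a toy Python sketch. The symbol table, addresses, and function names are all invented; real debuggers do this with the program's actual symbol information.

```python
import bisect
from collections import Counter

# Hypothetical symbol table: (start_address, function_name), sorted by start.
SYMBOLS = [(0x1000, "main"), (0x1200, "parse"), (0x1800, "checksum")]
STARTS = [start for start, _ in SYMBOLS]

def function_for(pc):
    """Map a traced PC to the function whose address range contains it."""
    i = bisect.bisect_right(STARTS, pc) - 1
    return SYMBOLS[i][1] if i >= 0 else "?"

def hotspots(traced_pcs):
    """Per-function event counts: a crude execution-time profile."""
    return Counter(function_for(pc) for pc in traced_pcs)

# An invented trace: most recorded PCs fall inside checksum().
trace = [0x1004, 0x1204, 0x1208, 0x1804, 0x1808, 0x180C, 0x1810]
print(hotspots(trace).most_common())
```

Counting traced PC events per function is the essence of a hotspot profile; a per-line profile is the same idea with line-number ranges instead of function ranges.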
**Real-Time Trace Options for ARC Processors**

There are two different trace solutions available for Synopsys' DesignWare ARC® processor families, allowing designers the flexibility to choose the solution most suitable for their project:

- ARC SmaRT (Small Real-time Trace)
- ARC RTT (Real-Time Trace)

**SmaRT**

SmaRT is a low-gate-count solution that captures instruction trace history until the processor halts. It uses an internal buffer to store the address of any non-contiguous instruction that is executed. When the target halts, this captured information is uploaded to the ARC MetaWare Debugger via a standard JTAG probe. The debugger can then reconstruct the execution and present the developer with an instruction history display, showing a back-trace that can help the developer understand how execution arrived at a certain point. The size of the SmaRT buffer is configurable at design time, which allows developers to trade off between the size of the module and the length of the back-trace.

**ARC RTT**

ARC RTT is a complete and highly configurable trace solution available for the ARC HS and ARC EM families of cores. It supports three different trace modes, allowing developers to choose what they want to include in the traces, ranging from only program counter values up to a full trace including program counter, memory accesses, and register accesses. ARC RTT can be configured to use on-chip memories or to offload trace data in real time using the Ashling Ultra-XD pod. The ARC RTT module includes two types of filters:

- Address filters – allow filtering based on the addresses of various trace sources
- Data filters – allow filtering based on the data associated with various trace sources

These filters allow full control over the trace information that's captured, allowing tracing to be focused on particular problem areas in the application.
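The effect of address and data filters can be sketched with a toy Python model of trace records. The record layout, field names, and values below are invented for illustration; they are not the ARC RTT encoding.

```python
# Toy trace records in the spirit of address/data filtering: each record
# notes the PC, and for memory events also the address and data value.
records = [
    {"pc": 0x100, "kind": "exec"},
    {"pc": 0x104, "kind": "write", "addr": 0x2000, "data": 1234},
    {"pc": 0x108, "kind": "write", "addr": 0x3000, "data": 99},
    {"pc": 0x200, "kind": "exec"},
]

def address_filter(lo, hi):
    """Keep records whose memory address falls in [lo, hi]."""
    return lambda r: lo <= r.get("addr", -1) <= hi

def data_filter(value):
    """Keep records that carry a specific data value."""
    return lambda r: r.get("data") == value

def apply_filters(records, *predicates):
    """A record survives only if every active filter accepts it."""
    return [r for r in records if all(p(r) for p in predicates)]

# e.g. "trace memory accesses when value 1234 is written"
hits = apply_filters(records, data_filter(1234))
print([hex(r["pc"]) for r in hits])
```

In hardware the predicates are comparators wired next to the trace source, but the selection logic they implement is the same: only matching events ever reach the trace buffer, which is what keeps the data volume manageable.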
The ARC MetaWare Debugger is fully RTT-aware and has full integration with the Ashling Ultra-XD probe. Developers control and analyze the trace directly within the debugger, by first setting up the trace parameters and filters, and then viewing the captured trace data. The ARC Replay feature provides the ability to 'debug' the captured trace, using forward or backward execution, breakpoints, stepping, etc.

**Conclusion**

Real-time trace is a powerful debugging technique that can help developers find even the most difficult bugs, and can vastly reduce the amount of time spent debugging an application. It's important to select a trace solution that is scalable, so that designers can make appropriate area vs. functionality vs. performance tradeoffs for their application. Different trace storage options are available, such as on-chip storage or real-time offload via a probe like the Ashling Ultra-XD. In addition to the trace hardware module itself, a complete solution must also provide advanced analysis tools within the debugging environment. The DesignWare ARC processors support two different trace modules, SmaRT and RTT, each with different capabilities, both tightly integrated with the ARC MetaWare Debugger software. The debugger includes a number of powerful analysis and profiling tools, including a Replay capability, which facilitate the analysis of the trace data.
New features in DB2 Everyplace let you take office resources with you — even when you leave the office behind.

Working Outside the Box

As mobile devices such as personal digital assistants (PDAs), smart phones, and handheld PCs proliferate, so do the applications available for them. Once used primarily for personal information management with simple applications for address books, calendars, and to-do lists, these devices are increasingly called on to support more complex applications that, for example, can browse the Internet or manage information from enterprise databases. PDA access to enterprise data and applications allows many business activities to take place outside the traditional brick-and-mortar office. But implementing these complex applications on mobile devices poses many challenges.

First, mobile devices operate in a somewhat constrained environment. Display, memory, and CPU speed have not yet reached the level of laptops and workstations (although they are heading in that direction). Those limitations can make porting existing enterprise applications to mobile devices difficult. Creating a whole new set of applications for mobile devices isn't very practical: IT departments need to be able to leverage existing applications by porting them to mobile devices or, at the very least, leverage existing database skills. And the deployed applications must require no management by the user. A mobile database solution that required field staff (such as insurance adjusters or airline gate agents) to create their own bidirectional indexes or run a REORG would cause more problems than it solves.

Demands for anytime, anywhere data are changing the landscape of business. Recently, IBM extended and strengthened the capabilities of DB2 Everyplace version 7.2 to help meet the challenges facing mobile applications.
Today, businesses can leverage the power of relational technology from "fingerprint"-constrained devices (my term for the tiny amount of space available for mobile applications on cellular phones and PDAs). I’ll explain how the enhancements to DB2 Everyplace help you navigate the new terrain. **WHAT IS DB2 EVERYPLACE?** DB2 Everyplace is a small fingerprint (150KB) relational database and synchronization architecture designed to extend relational database functionality and enterprise data to mobile and embedded devices. It uses standard interfaces (such as SQL and JDBC), supports such popular programming languages as C, C++, and Java, and offers a point-and-click CASE tool for rapid application development and prototyping. DB2 Everyplace is compliant with the Synchronization Markup Language (SyncML), an emerging XML-based W3 standard that defines a protocol for synchronization data interchange. Theoretically, SyncML-compliant products could plug into a SyncML infrastructure, making it extendible and open. Figure 1 shows a typical DB2 Everyplace environment. Three components make up the DB2 Everyplace solution: an application development CASE tool, the DB2 Everyplace database, and the synchronization server (SyncServer). Each component received enhancements in DB2 Everyplace v.7.2, which I’ll detail for you. Note: This article focuses only on the enhancements in the latest release of DB2 Everyplace. If you are not familiar with the functions, features, and benefits of DB2 Everyplace from earlier versions, you can find that information at ibm.com/software/data/db2/everyplace. DB2 Everyplace v.7.2 introduces enhancements that add increased platform support to the core database engine, extend functionality, and improve performance. New database platforms include the Palm OS 4.0 and Symbian EPOC R6 (Quartz and Crystal) operating systems. 
Several devices using the Symbian EPOC operating system are expected in the latter part of this year, including the Nokia 9210 Communicator. The complete list of devices DB2 Everyplace supports includes Palm OS, Windows CE/Pocket PC, EPOC R5, EPOC R6, QNX Neutrino, Linux, embedded Linux, and Win32-based platforms. New midtier SyncServer platforms include AIX, Linux, and Solaris.

DB2 Everyplace v.7.2 extends the interoperability of mobile devices in a larger computing environment by enabling data access from any JDBC-compatible database and by supporting the SyncML standard. Performance enhancements allow better handling of synchronization data and the new indexing techniques, which I'll explain.

DB2 Everyplace v.7.2 includes several upgrades for application developers. This release marks the availability of several APIs that enable developers to build automatic synchronization into applications, access unlimited enterprise data sources, better monitor and manage the synchronization process, and customize data handling on the mobile device. Developers can also write applications in C, C++, or Java using any of the standard development tools, including IBM VisualAge Micro Edition, Metrowerks CodeWarrior, and Microsoft Visual Studio tools. Let's look at the enhancements to each component in more detail.

**DATABASE ENHANCEMENTS**

In addition to the increased platform support I mentioned, DB2 Everyplace v.7.2 includes several additional database enhancements.

**Remote stored procedure for query.** Specific enhancements to the DB2 Everyplace engine enable support for the Remote Query & Stored Procedure adapter. This adapter lets a DB2 Everyplace application open a new database handle while maintaining its existing handle with the local DB2 Everyplace database. This new database handle is used to invoke a stored procedure on a remote database and receive data back into a DB2 Everyplace table for local manipulation.
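The two-handle pattern the adapter implements — query a remote database, land the result rows in a local table — can be sketched with two SQLite databases as stand-ins (SQLite plays both the "remote" enterprise database and the "local" on-device store here; the table and column names are invented):

```python
import sqlite3

# "remote" stands in for the enterprise database, "local" for the
# on-device store. Both schemas and all data below are invented.
remote = sqlite3.connect(":memory:")
remote.execute("CREATE TABLE readings (site_id INT, value REAL)")
remote.executemany("INSERT INTO readings VALUES (?, ?)",
                   [(1, 20.5), (1, 21.0), (2, 19.2)])

local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE readings (site_id INT, value REAL)")

# Open a second handle, run the query against the "remote" database,
# and land the result rows in a local table for offline manipulation.
rows = remote.execute(
    "SELECT site_id, value FROM readings WHERE site_id = ?", (1,)).fetchall()
local.executemany("INSERT INTO readings VALUES (?, ?)", rows)

print(local.execute("SELECT COUNT(*) FROM readings").fetchone()[0])
```

Once the rows are local, the device can query them repeatedly without touching the network, which is the whole point of the adapter.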
As an example of how this feature works in the real world, let's look at a healthcare vertical. A doctor can carry a drug application on his wireless PDA that allows him to check drug interactions and write prescriptions. The doctor doesn't carry patient data on the device due to security concerns and storage limitations. When prescribing a drug, the doctor can retrieve information about the drug (including its interactions with other drugs) locally at the click of a button. The application can then use this adapter to run a stored procedure at the Patient database to retrieve a drug history and verify that the patient has no allergy to the drug and that no dangerous interactions with other drugs the patient is taking will occur. Tapping another button completes the prescription and electronically sends it to a pharmacy to be filled. And this all occurs without the notoriously messy handwriting of the medical professional.

**Indexing improvements.** Indexing enhancements in this release of DB2 Everyplace include support for bidirectional index scanning. Now, a single index can support several different output ordering queries for a table. Creating one index over multiple columns of a table (say, columns X and Y) allows that single index to support the following ORDER BY operations:

- `ORDER BY X ASC, Y ASC`
- `ORDER BY X ASC, Y DESC`
- `ORDER BY X DESC, Y ASC`
- `ORDER BY X DESC, Y DESC`

Earlier versions of DB2 Everyplace required at least four different indexes to support efficient execution of these operations. Apart from the maintenance issues associated with multiple indexes, a major benefit of this enhancement is the reduced footprint dedicated to indexes. One index can now do the work of four—a very important feature considering the constrained architecture associated with pervasive devices.

In addition, the dirty bits used to mark changed data (at the row level) for synchronization by DB2 Everyplace are now indexed.
Therefore, when the DB2 Everyplace database synchronization adapter is called on, DB2 Everyplace can assemble the data marked for synchronization in a more timely and efficient manner.

**Basic transactions.** DB2 Everyplace v.7.2 introduces the notion of atomicity by means of transaction support for the database. Relational databases have four classic requirements: atomicity, consistency, isolation, and durability (ACID). A database operation is atomic when it is all-or-nothing: either the entire operation happens or none of it does. This behavior is delivered through transactions. Previously, every statement was automatically committed to the database. Transactions let you execute several statements and then either COMMIT or ROLLBACK all the operations performed within the set of statements. Listing 1 shows an example of a typical transaction using a COMMIT.

**Extended DECIMAL and TIMESTAMP support.** Earlier versions of DB2 Everyplace supported a 15-digit DECIMAL data type in addition to TIME and DATE data types. DB2 Everyplace v.7.2 extends DECIMAL support to 31 digits of precision and introduces a better space management scheme. For example, in earlier versions, a DECIMAL value like 8.32 required that all 16 bytes of storage be allocated to the data type. Now, DB2 Everyplace compacts the required storage in relation to the size of the DECIMAL value; therefore, 8.32 may take as little as 4 bytes of storage, depending on the decimal precision. The new TIMESTAMP data type uses the format `yyyy-mm-dd-hh.mm.ss.zzzzzz`, where `yyyy` is the year, the first `mm` is the month, `dd` is the day, `hh` is the hour, the second `mm` is the minute, `ss` is the seconds, and `zzzzzz` is the microseconds.

**SQL enhancements.** DB2 Everyplace v.7.2 INSERT operations now support SUBSELECTs, further extending the SQL API to the handheld version of DB2. For example, assume T1 and T2 are tables with the same definition.
Consider the following insert:

```sql
INSERT INTO T2 SELECT * FROM T1
```

The SUBSELECT of table T1 is used to INSERT all of the rows in table T1 into table T2. Other SQL extensions in this version of DB2 Everyplace include the IN predicate, the LENGTH function, and the RTRIM (right trim) function. The IN predicate allows operations such as comparing a value with a set of values listed or subselected in a statement. For example, you can use the following expression:

```sql
DEPTNO IN ('D01', 'B01', 'C01')
```

The LENGTH function returns the actual length of a VARCHAR, CHAR, or binary large object (BLOB) data type. The RTRIM function removes blanks from the end of VARCHAR or CHAR string expressions.

**Storage support.** When running DB2 Everyplace on Windows 32-bit, Linux, or Neutrino operating systems, the DB2 Everyplace database and application can now be run directly from read-only media such as CD-ROMs or ROM chips in embedded devices. A real-world example is an insurance company with agents dispersed across the country, whose wireless connectivity is not robust enough to lend itself to a synchronization solution. Each quarter, sales brokers receive a CD-ROM that contains the complete offerings portfolio and a DB2 Everyplace application that lets them browse, display, and query the data. The DB2 Everyplace application could run directly from the CD-ROM and be completely hidden from the user, or it could be installed on the device and only read data from the CD-ROM. This feature gives ISVs and roll-your-own application enterprises the ability to hide the IT infrastructure from lines of business.

DB2 Everyplace v.7.2 also extends support for Compact Flash devices by using the Palm OS version 4 Secondary Storage API to support all secondary storage types implemented on Palm OS.
This enhancement allows DB2 Everyplace to support databases residing in the TRG Pro Compact Flash slot or in the expansion slots for newer devices, including the Sony CLIE Memory Stick and the upcoming Palm SD Card. DB2 Everyplace can store data on Memory Sticks, Compact Flash Storage Cards, the IBM Microdrive (a Compact Flash device), MMC Storage cards, and SD Storage Cards. Now, pervasive applications have the advantage of applying mobile relational techniques to large data sets that exceed the inherent capacity of the pervasive device.

```c
// autocommit off
SQLSetConnectAttr(..., SQL_AUTOCOMMIT_OFF, ...)
// operations that must commit or roll back as a unit
// (table names are illustrative)
SQLExecDirect(..., "SELECT * FROM employees WHERE salary > 150000", ...)
SQLExecDirect(..., "INSERT INTO highpaid SELECT * FROM employees WHERE salary > 2500", ...)
// commit the transaction (or roll back with SQL_ROLLBACK)
SQLEndTran(..., SQL_COMMIT)
```

Listing 1: Using transactions for an all-or-nothing approach.

**Improved Command Line Processor (CLP) with BLOB support.** DB2 Everyplace v.7.2 now includes a CLP application for all supported platforms. This code, included in the sample files that are shipped with DB2 Everyplace, is common across all platforms (including Unicode and ANSI codepage support). The CLP includes source code for each operating system, can display BLOB columns (the first 50KB), and supports IMPORT and EXPORT operations. Previous versions of DB2 Everyplace provided a CLP through the Query by Example application that was only available on Palm OS installations. Most business users will not use the CLP to work with the database; however, it's an example of the true relational functionality available on the pervasive device if needed, and it gives application developers a method of testing and building applications.

**DB2 EVERYPLACE APPLICATION DEVELOPMENT**

The application development enhancements in this release center on the Mobile Application Builder (MAB).
The MAB, part of the DB2 Everyplace suite, is a point-and-click rapid application development tool that can be used to create DB2 Everyplace applications. It is available as a free download from the Web at www.ibm.com/software/data/db2/everyplace. Previously known as the Personal Application Builder (PAB), MAB was renamed to better reflect its role as a general-purpose, quick-start mobile application development tool. Various enhancements have been made to the MAB development tool.

**Improved BLOB code support.** PAB, the earlier version of MAB, required developers to write scripts to retrieve and store BLOB data in the DB2 Everyplace database. DB2 Everyplace v.7.2 now automatically generates code for storing and retrieving BLOBs, shortening the development cycle and flattening the MAB learning curve.

**Sync API support.** With DB2 Everyplace v.7.1.1, IBM shipped an IBM Sync C/C++ interface to allow developers to access client synchronization engine functions. MAB now allows application developers to call these synchronization functions from within applications developed using MAB, without writing a single line of code. Applications developed using MAB now support automated and programmatic access to synchronization functions. For example, instead of exiting the application and launching a synchronization session, a user could tap on an icon, select an option from a pull-down menu, or even rely on event-driven synchronization.

**MAB support for DB2 Everyplace database enhancements.** The new functions added to the DB2 Everyplace database can be used via MAB. For example, MAB includes functions to use the new TIMESTAMP and larger DECIMAL data types as well as the SELECT DISTINCT and ORDER BY operations. MAB also supports DB2 Everyplace databases located at nonlocal paths, such as secondary storage, compact flash, and the IBM Microdrive.
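The new data types and operations MAB exposes are ordinary SQL. As a rough sketch (the table and column names here are hypothetical, and DECIMAL(31, 2) simply exercises the new 31-digit precision):

```sql
-- Hypothetical table exercising the v.7.2 data types.
CREATE TABLE visits (
    patient_id CHAR(8),
    seen_at    TIMESTAMP,
    charge     DECIMAL(31, 2)
)

-- TIMESTAMP literal in the yyyy-mm-dd-hh.mm.ss.zzzzzz format.
INSERT INTO visits VALUES ('P0001', '2001-06-15-09.30.00.000000', 125.50)

-- The SELECT DISTINCT and ORDER BY operations MAB now supports.
SELECT DISTINCT patient_id FROM visits ORDER BY patient_id DESC
```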
Improved error handling and documentation will help users get this tool up and running quickly.

**Enhanced sample visiting nurse application (VNApp).** The enhanced VNApp, now called VNPlus, includes all of the interface widgets supported by MAB. For example, the enhanced sample application includes a digital signature that represents a nurse's signature on record. A tutorial helps users transform VNApp into the full-featured VNPlus sample and provides guidance on using all of MAB's features.

**DB2 EVERYPLACE SYNCSERVER**

The enhancements to the DB2 Everyplace v.7.2 SyncServer raise the bar for all synchronized pervasive database solutions on the market.

**Additional platform support.** DB2 Everyplace v.7.2 expands the supported SyncServer platforms to include AIX, Solaris, and Linux, in addition to Windows NT/2000. DB2 Everyplace SyncServer now supports synchronizing with all client DB2 Everyplace platforms, including Palm OS, Windows CE/Pocket PC, EPOC R5, EPOC R6, QNX Neutrino, Linux and embedded Linux, and Win32-based platforms. Previous versions of DB2 Everyplace on Windows 32-bit operating systems did not support synchronization.

**JDBC adapter.** Perhaps the single most significant enhancement to DB2 Everyplace v.7.2 is the addition of the powerful JDBC adapter, which enables access to any back-end, JDBC-compliant data source, including Oracle, Sybase, SQL Server 2000, Informix, and DB2. The JDBC adapter for SyncServer is a new relational data adapter that works with the existing DB2 Everyplace database client adapter on the mobile device. In earlier versions of DB2 Everyplace, the DB2 Family adapter, which uses Data Propagator (DPropR), was the only relational data adapter. The JDBC adapter doesn't use DPropR technology. Instead, it relies on custom code created to work with triggers in the source database to maintain a synchronized environment—all transparent to the administrator. You can create a special implementation of the JDBC adapter with a new PUT option.
This option provides for one-way inserts into the enterprise data source. For example, with this option, a parking enforcement officer could enter a parking ticket into the system, and the ticket information would be sent to the enterprise data server. No information needs to be returned or synchronized with the officer's PDA. JDBC subscriptions with the PUT option bypass mirror databases and only perform an INSERT on the enterprise data source. If, for some reason, the enterprise data source is not available, the synchronization would fail.

**Conflict improvements.** Several improvements enhance the conflict handling of the DB2 Everyplace SyncServer and pervasive devices. The SyncServer now logs all rejected rows at the server. This feature gives administrators better insight into the situations that cause conflicts and enables them to tune a user's synchronization configuration to avoid as many conflicts as possible by formulating business rules for sync time and segregation of data. The enhanced DB2 Everyplace SyncServer also supports a user exit whenever a conflict is logged. When a conflict is detected, the administrator has the option to call custom code to take additional action in response. For example, a user exit could call a mail API to send a message to a system management tool, notify an administrator of the conflict, or even email the user with a conflict notification.

**Encryption over the wire.** DB2 Everyplace v.7.2 introduces optional encryption for all communications sent over the TCP/IP connection between the DB2 Everyplace SyncServer and the pervasive device—a necessity for e-business. This encryption prevents unauthorized parties from "sniffing the connection" and logging the data sent between the SyncServer and the pervasive device. The encryption implemented by DB2 Everyplace v.7.2 is based on the Data Encryption Standard (DES).
DES implements symmetric encryption, which means that both the client and server use the same key to encrypt and decrypt the WBXML message. Symmetric keys are quicker than asymmetric keys, a method that uses different encryption keys on each side of the communication wire (virtual private networks, for example, commonly rely on asymmetric keys). An administrator has the option to implement no encryption, 56-bit encryption, or 128-bit encryption, which provides the security level demanded by business models that involve sensitive data such as credit card numbers or health records.

**Vertical partitioning columns of source data.** The subscription facilities in DB2 Everyplace SyncServer v.7.2 add column partitioning of source data for use in data subscriptions. This allows administrators to select, from a large table, just the columns they want to synchronize to the mobile devices. A typical example is a Personnel table in the Human Resources department's database. A Personnel table may contain information about all employees in the company in columns such as Office_phonenumber, Office_faxnumber, Email_address, Title, and Salary. Portions of the Personnel table could be used for a company directory lookup application; however, data such as salary information is confidential. Vertical partitioning allows an administrator to use a table that contains both sensitive data and important business data by selecting just the business data columns for synchronization to the mobile devices. In our example, the Salary column (the sensitive data) could be left securely inside the enterprise.

**MOBILE DEVICE ADMINISTRATION CENTER**

DB2 Everyplace v.7.2 includes enhancements to help minimize the management of the pervasive environment. Check Clause support allows the definition of multiple Check Clauses for the columns of a data subscription. The size of the Check Clause field has been extended to 1,024 characters.
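A Check Clause is an ordinary SQL constraint attached to a subscribed column; hypothetically, an administrator might define conditions like the following (the column and literal names are invented for illustration):

```sql
-- Hypothetical check clauses for a mobile Orders subscription.
CHECK (quantity > 0)
CHECK (status IN ('OPEN', 'SHIPPED', 'CLOSED'))
```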
Existing Check Clauses on the source database can be automatically extracted and stored in the Check Clause field. An administrator can also define Check Clauses for the mobile database using a simple comma-delimited format. All of these enhancements allow administrators to work business rules into the synchronization architecture.

DB2 Everyplace v.7.2 also includes new index management tools in the Mobile Device Administration Center (MDAC) that allow administrators to manage the creation of indexes for the tables on mobile devices. For example, an administrator may feel that a new index could improve the policy lookup function in an insurance application. The administrator could define this index, and on the next client synchronization, the index would be created on the pervasive device. However, once the index is created, it isn't synchronized between the source server and the mobile device. You can define up to 10 columns in one index using a wizard, similar to the DB2 Control Center Create Index wizard, to point and click the columns to use in the table index, as well as to specify index ordering (ascending or descending). Indexes can also be extracted from JDBC source databases, which allows administrators to reuse existing index definitions where possible.

**Automated installation of DB2 UDB Workgroup Edition.** In earlier versions, you had to have at minimum DB2 Personal Edition installed for the DB2 Everyplace SyncServer. DB2 Everyplace v.7.2 includes the option to automatically install and configure DB2 UDB Workgroup Edition as part of the installation.

**Publish adapter API and IBM Sync API.** The APIs used for the client and server Sync Adapters and the IBM Sync API are now published in the documentation. Open APIs allow developers to expand and enhance DB2 Everyplace SyncServer functionality.
The Client Sync Adapter API allows developers to write new adapters to support additional data types, data handling, and client data sources on the mobile device side of synchronization. Using this open API, developers could write their own adapters to handle the transfer of proprietary data files. The Server Sync Adapter API allows developers to write new adapters to support additional data types, data handling, and source data sources on the DB2 Everyplace SyncServer side of synchronization. The IBM Sync API allows programmatic access to the synchronization functions of the IBM Sync client synchronization engine. Application developers can now add buttons to start synchronization from within their custom applications, so users are no longer forced to run the IBM Sync application to synchronize.

**KEEPING CONNECTIONS OPEN**

Mobile computing has become a competitive requirement in today's information-driven economy. Laptops have long been standard equipment for business travelers. Today, handheld devices, some of which can connect through wireless networks, are springing up everywhere. But these powerful mobile tools are limited if used in isolation. A laptop or PDA by itself doesn't do a salesperson much good if the corporate database changes in the middle of a business trip. To maintain their competitive edge, mobile professionals need seamless access to enterprise data, no matter where they may be. The true power of pervasive computing resides not in the devices themselves but in their ability to tap into data from other sources. DB2 Everyplace brings the power of DB2 to mobile devices, leveraging their ability to synchronize data with other systems and literally putting enterprise data in the pocket of the mobile workforce.

Paul Zikopoulos is a database specialist with the IBM Global Sales Support team with more than six years of DB2 experience.
He has written numerous articles about DB2 for DB2 Magazine and other publications and is a coauthor of the books DB2 for Dummies (Hungry Minds Inc., 2000) and DB2 Fundamentals Certification for Dummies (Hungry Minds Inc., 2001). You can reach him at paulz_ibm@yahoo.com.

RESOURCES

- DB2 Everyplace: www.ibm.com/software/data/db2/everyplace
A Precise Model for Google Cloud Platform

Stéphanie Challita, Faiez Zalila, Christophe Gourdin, and Philippe Merle
Inria Lille - Nord Europe & University of Lille, CRISTAL UMR CNRS 9189, France
firstname.lastname@inria.fr

HAL Id: hal-01689659, https://inria.hal.science/hal-01689659, submitted on 22 Jan 2018.

Abstract—Today, Google Cloud Platform (GCP) is one of the leaders among cloud APIs. Although it was established only five years ago, GCP has gained notable expansion due to its suite of public cloud services, which it has built on a huge, solid infrastructure. GCP allows developers to use these services by accessing the GCP RESTful API, which is described through HTML pages on its website[1]. However, the documentation of the GCP API is written in natural language (English prose) and therefore shows several drawbacks, such as informal heterogeneous documentation, imprecise types, implicit attribute metadata, hidden links, redundancy, and lack of visual support. To avoid confusion and misunderstandings, cloud developers obviously need a precise specification of the knowledge and activities in GCP. Therefore, this paper introduces GCP MODEL, an inferred formal model-driven specification of GCP which describes without ambiguity the resources offered by GCP.
GCP MODEL conforms to the Open Cloud Computing Interface (OCCI) metamodel and is implemented based on the open source, model-driven, Eclipse-based OCCIWARE tool chain. Thanks to our GCP MODEL, we offer corrections to the drawbacks we identified.

I. INTRODUCTION

During the last few years, cloud computing has become an emerging field in the IT industry. Numerous cloud providers are offering competing computation, storage, network and application hosting services, while providing coverage in several continents and promising the best on-demand prices and performance. These services have heterogeneous names [1], characteristics and functionalities. In addition, cloud providers rely on technically different Application Programming Interfaces (APIs), i.e., cloud management interfaces that provide programmatic remote access to their resources. As for the semantics of these cloud APIs, they are only informally defined, i.e., described in natural language on the API website pages. The developer cannot know the exact behaviour of the provider when he/she asks for a virtual machine, for example, which makes changing from one provider to another very complex and costly. However, cloud developers need to build configurations that better fit their needs while reducing their dependence on any given provider. This is known as multi-cloud computing [2], the use of resources from multiple cloud providers where there is no agreement between providers. The latter is quite advantageous for cloud developers and mainly aims to better exploit offerings in the cloud market by employing a combination of cloud resources. To build multi-cloud systems, the cloud developer needs a resource-oriented model for each API in order to understand its semantics and compare cloud resources. We observe that these models do not exist, even though cloud APIs exhaustively describe their cloud resources and operations in rich informal documentation.
Therefore, analyzing them is highly challenging due to the lack of precise semantics, which leads to confusion when comparing cloud services and hinders multi-cloud computing. To address this problem, we proposed in a previous work [3] FCLouds, a formal framework for semantic interoperability in multi-clouds. FCLouds contains a catalog of precise cloud models that provide a formal description of cloud API concepts and operations, for a better understanding of their behaviour. Having rigorously specified the structure and behaviour semantics of each cloud API, we can consequently define transformation rules between them and thus ensure their semantic interoperability. The precise models of FCLouds are inferred from the textual documentations of the cloud APIs.

Among the cloud APIs, Google Cloud Platform (GCP) is today one of the most important and fastest growing in the cloud market. It provides developers several products to build a range of programs, from simple websites to complex worldwide distributed applications. GCP offers hosting services on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube. This outstanding reliability results in GCP being adopted by eminent organizations such as Airbus, Coca-Cola, HTC, Spotify, etc. In addition, the number of GCP partners has also increased substantially, most notably Equinix, Intel and Red Hat. To use cloud services, expert developers refer first to the cloud API documentation, which is an agreement between the cloud provider and the developer on exactly how the cloud API will operate. By going through the GCP documentation, we realize that it contains rich information about GCP services and operations, such as the semantics of each attribute and the behaviour of each operation. However, GCP documentation is written in natural language, a.k.a. English prose, which results in human errors and/or semantic confusions.
Also, the current GCP documentation lacks visual support, hence the developer will spend considerable time before figuring out the links between GCP resources. Our paper presents a precise model that describes GCP resources and operations, reasons about this API, and provides corrections to its current drawbacks, such as informal heterogeneous documentation, imprecise types, implicit attribute metadata, hidden links, redundancy and lack of visual support. This is a work of reverse engineering [4], which is the process of extracting knowledge from a man-made documentation and re-producing it based on the extracted information. In order to formally encode the GCP API without ambiguity, we choose to infer a GCP MODEL from the GCP documentation. In fact, our approach leverages the use of Model-Driven Engineering (MDE) because it is advantageous on many levels [5], especially by providing a precise and homogeneous specification and by reducing the cost of developing complex systems. MDE allows us to rise in abstraction from the implementation level to the model level, and to provide a graphical output and a formal verification of GCP structure and operations. Our GCP MODEL conforms to the OCCIWARE METAMODEL [6]. The latter is a precise metamodel of the Open Cloud Computing Interface (OCCI) open cloud standard, specified by the Open Grid Forum (OGF) and defining a common interface for describing any kind of cloud computing resources.

1 https://cloud.google.com

The first contribution of this paper is an analysis of the GCP documentation, because it is as important as analyzing the API itself. Secondly, we propose a precise GCP MODEL that consists in a formal specification of GCP. This model, automatically built, also provides corrections for the drawbacks that we identified in GCP documentation. The remainder of this paper is structured as follows. In Section II we identify six general drawbacks of GCP documentation that motivate our work.
Next, Section III describes our model-driven approach for a better description of the GCP API and gives an overview of some background concepts we use in our GCP MODEL. In Section IV, we present some related work. Finally, Section V concludes the paper with future work.

II. DRAWBACKS AND MOTIVATIONS

The object of our study is the GCP documentation. GCP is a proprietary cloud platform that consists of a set of physical assets (e.g., computers and hard disk drives) and virtual resources (e.g., virtual machines, a.k.a. VMs) hosted in Google's data centers around the globe. We especially target this API because it belongs to a well-known cloud provider and because we believe it can be represented within a better formal specification. GCP documentation is available in the form of HTML pages online. The URL is the starting point of our study and the base for building our GCP MODEL. This page exhaustively lists the resources supported by the deployment manager, and provides a hyperlink to each of these resources. Normally, the developer will use the deployment manager to deploy his/her applications. The deployment manager will then provision the required resources. Therefore, we adopt this page to study the documentation of each GCP resource that could be provisioned by the developer. Through our study of GCP, we have identified six main conceptual drawbacks/limitations of GCP documentation, which are detailed below.

A. Informal Heterogeneous Documentation

Enforcing compliance to documentation guidelines requires specialized training and a strongly managed documentation process. However, often due to aggressive development schedules, developers neglect these extensive processes and end up writing documentation in an ad-hoc manner with little guidance. This results in poor quality documentation that is rarely fit for any downstream software activities.
By going through the HTML pages of GCP documentation, it was not long before we realized that it uses two different formats to describe the attributes of each resource (cf. Figure 1). This is an issue because it may disturb and confuse the reader, i.e., the cloud developer.

Figure 1. Different documentation formats.

B. Imprecise Types

Figure 2. Imprecise string types.

1http://occi-wg.org/about/specification
2https://cloud.google.com/deployment-manager/docs/configuration/supported-resource-types

GCP documentation is represented by a huge number of descriptive tables written in natural language. It is therefore a syntactically and semantically error-prone documentation: it may contain human errors, and its static and dynamic semantics are not well formed, i.e., they do not describe the API and its behavior without ambiguity. In fact, some of the written sentences are imprecise and can be interpreted in various ways, which can lead to confusion and misunderstanding when the user wants to provision cloud resources from the GCP API. For each resource attribute, we checked the corresponding type and description to assess whether the information is accurate. Figure 2 shows that the current GCP documentation states explicitly that string types are supported, but further details in the description then explain how to set such strings. For example, the effective type of the attribute is a URL in (1), an email address in (2), an enumeration in (3), and an array in (4). The cloud developer may therefore define invalid string formats for his/her application. The resulting bugs will only be detected during the last steps of the provisioning process, and fixing them becomes a tricky and time-consuming task. In addition, Figure 3 shows that GCP documentation employs several ways to denote an enumeration type. Sometimes the enumeration literals are listed in the description of the attribute, and sometimes they are only retrievable from another HTML page.

C. Implicit Attribute Metadata

We notice that GCP documentation contains implicit information in the attribute descriptions. For example, a description may specify that an attribute:
- is optional or required (cf. Figure 5),
- is mutable or immutable (cf. Figure 6),
- has a default value (cf. Figure 7).

These constraints are only explained in the description of each attribute and lack any verification process. The developer cannot ensure, before the deployment phase, that his/her code meets these constraints.

D. Hidden Links

A link is the relationship between two resource instances: a source and a target. These links are implicit in GCP documentation, but they are important for the proper organization of GCP resources. They are represented by a nested hierarchy, where a resource is encompassed by another resource and where an attribute defines the link between these resources, either directly or indirectly. Figure 8 shows an example of a deducible link, namely networkInterface: the description of this attribute is a URL pointing to the target resource, namely network. Therefore, we can say that networkInterface is a link that connects an instance to a network. If graphical support existed, this link would be much more explicit.

E. Redundancy

In addition, we observe from our study that GCP documentation is redundant. According to our observations, it contains a set of attributes and actions in common, i.e., with the same attribute name and type, and the same action name and type, respectively. Among this set, we especially notice a redundancy of the attributes name, id, kind, selfLink, description, etc., as well as of the actions get, list, delete, insert, etc.

F. Lack of Visual Support

Finally, the information in GCP documentation is only descriptive, which makes it time-consuming to properly understand and analyze.
In contrast to textual descriptions, visual diagrams save time because they highlight the concepts of the API in a short but catchy view. Logical sequencing and comparative analysis can then be undertaken, enabling quick understanding. Cloud developers can look at a graph at a glance to understand the documentation quickly, which is much harder with a purely descriptive format. Overall, these six drawbacks call for further analysis of GCP documentation and for corrections. Once development has begun, corrections can become exponentially time-consuming and expensive. Therefore, the cloud developer first needs a clear, detailed specification with no ambiguous information, in order to: 1) make development faster and meet the expectations of the cloud API, 2) avoid different interpretations of a functionality and minimize assumptions, and 3) help the developer move along more smoothly with API updates for maintainability purposes.

III. APPROACH

This section presents our approach, which takes advantage of MDE techniques to precisely describe the GCP API, both textually and graphically. MDE is an emerging practice that emphasizes the use of models and model transformations to raise the level of abstraction and automation in software development. To understand the concepts that underlie the architecture of our approach, we begin with an illustration of it in Figure 9. This architecture is composed of three main parts: a Snapshot of GCP HTML pages, a GCP Crawler, and a GCP Model enriched by Model Transformations. Each of these three parts is detailed in the following.

A. Snapshot of GCP HTML pages

Google is the master of its cloud API and its documentation, which means that GCP engineers may update or correct GCP documentation whenever they are requested to or feel the urge to.
But since continuously following up with GCP documentation is crippling and costly, we locally save the HTML pages of GCP documentation in order to have a snapshot of the GCP API at the moment of crawling. This snapshot was built on July 27th, 2017.

B. GCP Crawler

In order to study and understand GCP documentation, the main step of our approach is to extract all GCP resources, their attributes and their actions, and to save them in a format that is simple and easily readable by a human. Extracting knowledge by hand from this documentation is neither reliable nor representative of reality; if the documentation changes, the extracted knowledge should also evolve through an automated process. Therefore, we have set up an automatic crawler to infer our GCP specification from the natural language documentation.

C. GCP Model

For a better description of the GCP resources and for reasoning over them, we propose to represent the extracted knowledge as a model that formally specifies these resources, while providing a graphical concrete syntax and processing through transformations. This addresses the drawbacks of GCP documentation identified in Section II. Choosing an adequate metamodel when developing a model is crucial for its expressiveness. In this context, a language tailored to the cloud computing domain gives us the power to easily and finely specify and validate the GCP API. Therefore, we choose to adopt the OCCIWARE METAMODEL because it is a precise metamodel dedicated to describing any kind of cloud resource. It is based on the Open Cloud Computing Interface (OCCI), an open cloud standard defining an open interface for describing Everything as a Service (XaaS). For example, OCCI describes resources that belong to the three service layers: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). It also allows the developer to deal with a set of heterogeneous clouds. The OCCIWARE METAMODEL is built on top of the Eclipse Modeling Framework (EMF) [10] and defines its own data type classification. In our approach, we exploit these two advantages and we build a GCP MODEL, which is an expressive model and an appropriate abstraction of the GCP API. The GCP MODEL conforms to the OCCIWARE METAMODEL, which itself conforms to the Ecore METAMODEL. For details on the OCCIWARE METAMODEL, readers can refer to [6]. Thanks to the OCCIWARE METAMODEL, our GCP MODEL provides a homogeneous specification language for GCP, which tackles the Informal Heterogeneous Documentation drawback identified in Section II. It also carries out five in-place Model Transformations that propose corrections addressing the other drawbacks discussed in Section II: Type Refinement, Implicit Attribute Metadata Detection, Link Identification, Redundancy Removal and Model Visualization. We highlight these correcting transformations in the following.

- **Type Refinement** is done by adopting the data type system proposed by the OCCIWARE METAMODEL, defining regular expressions, and using the EMF validator to check the type constraints attached to the attributes. For instance, among the constraints defined for the GCP MODEL, one states that if the type of an attribute in the documentation is string and the description explains that it is an email address, our GCP MODEL applies an email validation constraint for refinement purposes.
This kind of information is translated into a \texttt{STRING TYPE} containing the following regular expression:

\begin{verbatim}
^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,6}$
\end{verbatim}

- **Implicit Attribute Metadata Detection** explicitly stores information in the attribute concept of our GCP MODEL. To do so, we apply Natural Language Processing (NLP) [11], a branch of the artificial intelligence field. NLP deals with analyzing and understanding the languages that humans use naturally, in order to better interface with computers. Recently, NLP techniques have made great progress and have proven efficient at acquiring the semantics of sentences in API documentation. Among these techniques, we use Word Tagging/Part-of-Speech (PoS) tagging [12], which consists of marking up a word in a text as corresponding to a particular part of speech, based on both its definition and its context. For this, we declare pre-defined tags for some GCP-specific attribute properties. Some pre-defined tags are as follows:
  - \texttt{mutable = true} if \texttt{[Input-Only]},
  - \texttt{mutable = false} if \texttt{[Output-Only]/read only},
  - \texttt{required = true} if \texttt{[Required]},
  - \texttt{required = false} if \texttt{[Optional]}.
- **Link Identification** deduces logical connections between resources. Here we again apply NLP techniques, this time using Syntactic Parsing [13] to acquire the semantics of sentences in GCP documentation. The resulting parse trees describe the sequential patterns that allow us to identify the semantics of a link between two resources.
- **Redundancy Removal** offers cloud developers a more compact, intuitive and explicit representation of GCP resources and links. To do so, we propose to introduce some \texttt{ABSTRACT} instances.
An \texttt{ABSTRACT} instance is an abstract class from which a group of Kind instances inherit. It allows us to factorize their common attributes and actions and to reuse them. This follows the Formal Concept Analysis (FCA) technique [14], a conceptual clustering technique mainly used for producing abstract concepts and deriving implicit relationships between objects described through a set of attributes.
- **Model Visualization** enables an easier analysis of the API, even if the model is less detailed than the original documentation. In fact, when we visualize information, we understand, retain and interpret it better and quicker because the insights become obvious [15]. Unfortunately, as discussed in Section II, GCP does not currently provide such a visual model.

We have implemented a prototype of our approach in Java. We used the jsoup\textsuperscript{4} library for building the SNAPSHOT of GCP HTML pages and the GCP CRAWLER, and the Eclipse-based OCCIware Studio\textsuperscript{5} [6] for building the GCP MODEL (see the AVAILABILITY section). Once our model is built, we define GCP configurations, which represent GCP INSTANCES that conform to the GCP MODEL. Then, we elaborate use cases for our model-based GCP configurations as a way of checking them. To do so, we rely on the code generation and model interpretation techniques, which are two of the advantages of model-driven engineering [16].
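As a concrete illustration of the Type Refinement transformation described above, the following Python sketch shows the underlying idea: an attribute documented as a plain string is refined to a validated type when its description reveals a more precise format. The helper names and heuristics here are our own assumptions for illustration; the actual implementation relies on the OCCIware data type system and the EMF validator, not on this code.

```python
import re

# Illustrative patterns for refined string types. The email pattern matches
# the regular expression used for the STRING TYPE refinement above.
PATTERNS = {
    "email": re.compile(r"^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,6}$", re.IGNORECASE),
    "url": re.compile(r"^https?://\S+$"),
}

def refine_type(declared_type, description):
    """Guess a refined type for a 'string' attribute from its description.

    A crude stand-in for the paper's analysis of attribute descriptions.
    """
    if declared_type != "string":
        return declared_type
    text = description.lower()
    if "email" in text:
        return "email"
    if "url" in text:
        return "url"
    return "string"

def validate(refined_type, value):
    """Check a value against the refined type's pattern, if one exists."""
    pattern = PATTERNS.get(refined_type)
    return True if pattern is None else bool(pattern.match(value))
```

With this sketch, an attribute whose documentation reads "The email address of the service account" would be refined from `string` to `email`, and a value like `robot@example.com` would pass validation while `not-an-email` would be rejected before provisioning rather than at deployment time.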
First, with the code generation approach, we use GCP INSTANCES to generate artifacts such as:
- JSON files that contain the structured information needed to create, for example, a VM through the GCP deployment manager,
- cURL scripts that allow us to create, for example, a VM via the POST action,
- shell scripts for the GCP Command Line Interface (CLI),
- Java or Python code for the GCP Software Development Kits (SDKs), to aid in identifying bugs prior to runtime.

\textsuperscript{4}https://jsoup.org
\textsuperscript{5}https://github.com/occiware/OCCI-Studio

Second, we experiment with the model interpretation approach by defining the business logic of the GCP CONNECTOR. The latter defines the relationship between GCP INSTANCES and their executing environment. For this, the connector provides tools that are used not only to generate the artifacts corresponding to the behavior of GCP actions (create, get, insert, list, patch, update, etc.), but also to efficiently make online updates to the GCP INSTANCES elements according to changes in the executing environment, following the models@run.time approach [17]. The generated artifacts are seamlessly executed in the executing environment thanks to MDE principles [18]. This validation process is called “validation by test”, because it aims at verifying whether GCP INSTANCES can be executed and updated in the real world. By validating a broad spectrum of GCP INSTANCES, we validate the efficiency of our GCP MODEL.

IV. RELATED WORK

To the best of our knowledge, we provide the first work that investigates and formalizes a cloud API documentation. In [19], Petrillo et al. studied three different cloud APIs and proposed a catalog of seventy-three best practices for designing REpresentational State Transfer (REST) [20] APIs. In contrast to our work, theirs is limited to analyzing the documentation of these APIs and does not propose any corrections. Two recent works have studied REST APIs in general.
[21] provides a framework to analyze the structure of REST APIs and study their structural characteristics, based on their Swagger documentation. [22] presents AutoREST, an approach and prototype to automatically infer an OpenAPI specification from a REST API's HTML documentation. Our work can be seen as a combination of these two previous works [21], [22], since we infer a rigorous model-driven specification from GCP HTML documentation and provide an analysis of the corresponding API. However, in contrast to these two works, ours is specifically applied to a cloud REST API and proposes corrections for the detected deficits of its documentation. Moreover, analyzing natural language documents from different fields is an important but very challenging problem that has been studied by many previous works. In [23], Zhai et al. apply NLP techniques to construct models from Javadoc written in natural language. These models allow one to reason about library behaviour and were used to effectively model 326 Java API functions. [24] presents an approach for inferring formal specifications from API documents, targeted at code contract generation. [25] develops an API usage mining framework and its supporting tool, MAPO (Mining API usage Patterns from Open source repositories), for mining API usage patterns automatically. [26] proposes abstract models of quality use cases by inspecting information in use case text.

V. CONCLUSION AND FUTURE WORK

In this paper, we highlight six main drawbacks of GCP documentation and argue for the need to infer a formal specification from the current natural language documentation. To address the problem of informal heterogeneous documentation, we present our model-driven approach, which consists of a GCP MODEL conforming to the OCCIWARE METAMODEL. Using our GCP CRAWLER, our model is automatically populated with the GCP resources that are documented in plain HTML pages.
We also propose five Model Transformations to correct the remaining five drawbacks. For future work, we want to enhance the linguistic analysis of GCP documentation for better type refinement. We would also like to provide a GCP STUDIO, a dedicated model-driven environment for GCP, which will offer a specific environment for designing configurations that conform to the GCP MODEL. We also aim to generate from our model, thanks to OCCIware Studio facilities, a new textual documentation of the GCP API. Then, we aim to strengthen our validation by conducting a survey of developers who use the GCP API. This survey will help us verify how accurate the processed documentation is and whether it actually saves development time. Also, for an ultimate measurement of our approach, we will contact the Google employees in charge of the GCP API, because we believe their expertise is the most relevant for reviewing our work. As a long-term perspective, we aim to analyze how suitable the OCCIWARE METAMODEL is for our purpose. Today there exist several general modeling languages, such as UML, which is widely adopted compared to the OCCIWARE METAMODEL. However, UML is generic and may not be tailored to expressing cloud computing concepts. Therefore, it will be interesting to investigate whether the additional complexity that the OCCIWARE METAMODEL introduces, for developers to learn and adapt to, is worth it. This study can be statistics-oriented, quantifying the number of OCCI concepts used in our GCP MODEL. Also, we plan to update our approach so that it automatically handles the evolution of the GCP API; at the moment, this evolution is handled manually. To automate the process, it would be more practical if our crawler were less tied to the structure of the GCP HTML pages, because in reality the latter are constantly updated.
This can be done by experimenting with artificial intelligence algorithms to extract knowledge from GCP documentation, and then studying whether the GCP MODEL inferred in this way misses any information. Also, our model needs to incrementally detect streaming modifications, by computing and applying only the differences between the initially processed version and the newly modified one. Finally, we aim to extend our approach to analyze and enhance additional natural language cloud API documentations, e.g., Amazon Web Services (AWS), Microsoft Azure, Oracle, etc.

AVAILABILITY

Readers can find the snapshot of GCP documentation built on July 27th, 2017, as well as our precise GCP MODEL and its code, at the following address: https://github.com/occiware/GCP-Model

ACKNOWLEDGMENT

This work is supported by both the OCCIware research and development project, funded by the French Programme d'Investissements d'Avenir (PIA), and the Hauts-de-France Regional Council.

\textsuperscript{3}http://www.occiware.org

REFERENCES
ABSTRACT

We have created a Python-based framework for searching Twitter using a keyword query, which can be either a hashtag or a URL. The program returns all tweets containing that query along with extensive metadata. Our algorithm then builds a network of all the users involved in sending those tweets and creates a directed graph, with the users who tweeted the query as nodes and edge directions showing a “followed by” relationship (i.e., the direction in which information flows on Twitter). Once we construct the graph, we run various network analysis algorithms on it, such as degree distribution, betweenness and closeness centrality, community detection and average clustering, among others. We also collect available location data and timestamps to provide a complete picture of the spread of that query or ‘meme’ over time.

CCS CONCEPTS

- Networks~Social media networks
- Networks~Online social networks
- Information systems~Wrappers (data mining)
- Information systems~Data mining

KEYWORDS

Twitter, online social networks, information propagation, data mining, reciprocity, retweet, information diffusion, network analysis.

1. INTRODUCTION

Twitter is a very popular social media platform. It is unique in its micro-blogging style of service, which makes it a platform for people to share important pieces of information with a larger community of followers. Over the past few years it has gained a lot of media attention, as most celebrities and politicians release statements on the platform, making it the center of many news media stories. Twitter users send and receive messages known as “tweets”. These tweets, as shown in Figure 1, are generally text-based posts of up to 280 characters in length. They may also contain multimedia content such as images, GIFs, videos and audio, as well as URLs. Tweets are delivered to users through a subscription-based process whereby, if user A is following user B, then user A will receive tweets from user B.
We represent this as an edge in our system as E(B -> A), because the information flows from B to A; in other words, B is followed by A. If B is not following A, then B will not receive A's tweets. Because of this subscription-based model, our graph becomes a directed graph, with nodes representing users and edges representing this “followed by” relationship of information flow. This relationship differs from other social media platforms such as Facebook, MySpace or Snapchat in that following someone does not require reciprocity: user A can follow user B without user B following user A back.

Figure 1. Example of a tweet with user mentions, a hashtag, a URL and a multimedia image

Tweets, as shown in Figure 1, often contain hashtags and URLs which are reused and retweeted by other users. These hashtags are critical for analyzing the flow of a particular piece of information over social media. They spread in a similar fashion to ‘memes’, as described by Leskovec et al. [1]. Our application framework is designed in particular to search such hashtags and similarly named entities over a period of time, collecting key metadata and user network information to allow for a detailed quantitative study of how particular memes spread over the social media platform. Twitter is an ideal social media platform for such a network analysis study for a number of reasons. Firstly, it has an explicitly defined social network structure (the follower-following subscription model). Secondly, information on Twitter naturally spreads with hashtags associated with particular stories, which makes tracking the flow of that information in the social network easy. We do not need to spend time applying complex topic modelling algorithms, searching for named entities, or looking for similar phrase clusters to categorize tweets talking about similar topics.
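The “followed by” edge convention above, where E(B -> A) is added whenever B is followed by A, can be sketched with a plain adjacency structure. The class and user names below are our own illustration, not the framework's actual data model:

```python
from collections import defaultdict

class FollowGraph:
    """Directed 'followed by' graph: an edge B -> A means B is followed by A,
    i.e. information flows from B to A."""

    def __init__(self):
        self.edges = defaultdict(set)  # source user -> set of target users

    def add_followed_by(self, b, a):
        """B is followed by A, so B's tweets flow to A: add edge B -> A."""
        self.edges[b].add(a)

    def receivers(self, user):
        """Users who receive `user`'s tweets (i.e. their followers)."""
        return sorted(self.edges[user])

g = FollowGraph()
g.add_followed_by("B", "A")  # A follows B, so B's tweets reach A
g.add_followed_by("B", "C")  # C follows B
# No reciprocity is assumed: since B follows neither A nor C,
# no edge A -> B or C -> B is ever added.
```

Here `g.receivers("B")` returns `["A", "C"]` while `g.receivers("A")` is empty, mirroring the asymmetric subscription model described above.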
These hashtags then become integral for people to find more news and information related to particular stories or events. Certain hashtags, if tweeted enough times in a short span of time, start to ‘trend’. These top trends are shown on a user's home screen. This feature makes the top trending tweets even more ‘sticky’, as more and more people start talking about them. Our framework allows users to collect the data required to perform detailed analysis on large datasets and discover interesting behaviors. The system also performs key network analysis algorithms to gain better insights into the underlying network of users.

2. RELATED WORK

There have been many studies of Twitter in the past. It has gained popularity over the past decade for its role in the spread of information over social networks. There are many research papers focusing on different aspects of the social network, as well as paid software frameworks for conducting analysis on Twitter for commercial purposes. There are also a number of open-source implementations available on GitHub for using the Twitter API to access tweets. We discuss both research and development work done in this area.

2.1. RESEARCH WORK

Due to its social network structure and public nature, Twitter has received a lot of attention from researchers in the social network domain. Some have studied the topology of the follower-following subscription model [2], [3]. Others have conducted sentiment analysis on tweets based on query terms [4]. Meeyoung et al. [5] have studied how influence works on Twitter using parameters such as in-degree, retweets and mentions. Others have studied assessing the credibility of tweets by learning features and propagation patterns of real and fake tweets [6].
Haewoon et al. [2] have done exhaustive data mining on Twitter, collecting all user data and tweets to analyze the structure and dynamics of the social networking platform. Their results show that for the large majority of users, the network exhibits a power-law distribution in the number of followers/following, apart from the most popular users, whose follower/following dynamic is skewed: they follow orders of magnitude fewer users than the number of users following them. Their research also shows that Twitter follows the ‘small world’ paradigm, with most users separated by 4 degrees on the platform. Takeshi et al. worked on the real-time aspect of Twitter [7], using the platform to detect earthquakes and pinpoint their locations. They investigate the real-time reflection of events such as earthquakes on Twitter and propose an algorithm to monitor tweets and detect a target event. To detect a target event, they devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, they produce a probabilistic spatiotemporal model for the target event that can find the center and trajectory of the event location. They consider each Twitter user as a sensor, and frame the problem as detecting an event based on sensory observations. Location estimation methods such as Kalman filtering and particle filtering are used to estimate the locations of events.

2.2. IMPLEMENTATION WORK

Twitter has developed and published its own API framework, which can be used to get all manner of public data from Twitter [8]. Various wrappers have been created for this API in languages such as Python, PHP, Java and JavaScript, which allow developers to write their own code based on what type of data they want and in what format [9].
However, there is no free, open-source analysis tool available to easily fetch data from Twitter, organize and clean it, perform network and graph algorithms on it, and finally visualize the results in a meaningful way. There are various small code snippets available on GitHub for connecting to the Twitter API to get tweets, but nothing in terms of an actual network analysis toolbox. This is where our work comes in. We are creating an open-source Python framework to scrape data from Twitter. The code will be available on GitHub [10] for users to download and start collecting data from Twitter. Our framework not only collects the data; it also cleans it, organizes it and performs some visualization work in the form of network graphs, scatter plots over time, location maps, etc. Another key aspect of our work is that users can perform analysis at a granular level. Most research work focuses on the larger network and global trends; our framework allows users to perform localized analysis at a per-tweet/hashtag level.

3. DATA GATHERING

Various data mining and cleaning processes are involved in making the data ready for detailed analysis. Below we describe the APIs, libraries and programming environment used to gather all the required data for the framework.

3.1. Twitter API

Twitter offers a comprehensive Application Programming Interface (API) that is easy to crawl and collect data from [8]. The challenge is that the free version [11] of the API imposes strict usage and rate limitations. One key limitation is being restricted to only the past 7 days of Twitter data, which makes it very hard to crawl a hashtag from its point of origin if it started more than 7 days earlier. This can be overcome by collecting the data every day, giving us larger datasets to work with. Another big restriction is on the number of requests we can make over time. Twitter imposes a wait time of 15 minutes after a certain number of requests made through the API, which again makes it very difficult to collect large amounts of data, as collecting it all takes a lot of time.
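The rate limits described above shape any collector built on the free API. The following generic sketch shows the pattern of draining a paged data source while sleeping out the rate-limit window; `fetch_page` is a placeholder for a real API call (it is not a Tweepy function), and the window numbers mirror Twitter's 15-minute (900-second) windows but are illustrative rather than exact quota values:

```python
import time

def collect_all(fetch_page, max_requests_per_window=180, window_seconds=900):
    """Drain a paged source, pausing when the per-window request budget is spent.

    `fetch_page(cursor)` is assumed to return (items, next_cursor),
    with next_cursor=None once the last page has been fetched.
    """
    items, cursor, used = [], None, 0
    while True:
        if used >= max_requests_per_window:
            time.sleep(window_seconds)  # wait out the 15-minute rate window
            used = 0
        page, cursor = fetch_page(cursor)
        used += 1
        items.extend(page)
        if cursor is None:
            return items
```

For example, with a stub that serves two pages (`[1, 2]` then `[3]`), `collect_all` returns `[1, 2, 3]` without sleeping, since only two requests are made.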
Twitter imposes a wait time of 15 minutes after a certain number of API requests, which again makes it very difficult to collect large amounts of data, since collecting it all takes a lot of time. We have to make several API calls in order to collect all the relevant tweet data. First, we run a query for a specific term, such as a hashtag. We get the complete tweet data in a stream from that query, which we clean up, keeping only the key data points: the name of the user tweeting, the tweet content, the location of the user, follower and following counts, and all the hashtags and URLs mentioned in the tweet. This requires making N API calls, where N is the total number of tweets fetched. Figure 2 shows a snapshot of the data fetched for “#OntarioTech”, showing all the key data points gathered for the fetched tweets. Among all these tweets we find M unique users (users sometimes make multiple tweets on a hashtag). For these M users we make M^2 API calls: for each ordered pair of users, we check whether a follower/following relationship exists between them. This query helps us form a directed-edge graph network. 3.2. Geolocation Another key piece of data we gather is the location of users. For this we use a Python library called Geopy [12]. This library offers a free service called Nominatim [13], an OpenStreetMap API used to convert a given location address text into actual geographical coordinates. This service is free but imposes a limit of 1 request per second per user, which again requires a lot of time for a large number of user locations. We store the latitude and longitude of each user in our database. The reason we have to perform a separate location API call is that Twitter allows users to enter location information as a free-form string, without limiting them to actual location data. 
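Converting these free-form location strings while respecting Nominatim's one-request-per-second limit amounts to a throttled loop that also caches duplicate strings. A sketch with the geocoder passed in as a callable, so geopy's `Nominatim.geocode` could be dropped in; the stub geocoder and the coordinates below are illustrative, not real lookups:

```python
import time

def geocode_locations(locations, geocode, delay=1.0):
    """Resolve free-text location strings to (lat, lon), caching duplicate
    strings, sleeping `delay` seconds between distinct requests to respect
    the rate limit, and skipping strings the geocoder cannot resolve
    (fake or made-up locations)."""
    cache = {}
    for loc in locations:
        if loc and loc not in cache:
            cache[loc] = geocode(loc)
            time.sleep(delay)
    return {loc: xy for loc, xy in cache.items() if xy is not None}

# Stub standing in for Nominatim: knows one address, fails on the rest.
def fake_geocode(text):
    return {"Oshawa, ON": (43.90, -78.85)}.get(text)

coords = geocode_locations(["Oshawa, ON", "Narnia", "Oshawa, ON"],
                           fake_geocode, delay=0)
```

In the real pipeline the callable would be a geopy `Nominatim(...).geocode` instance and `delay` would stay at 1.0.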
This in turn creates a problem for data analysis, as users occasionally enter fake or made-up locations. Our geolocation code takes in all the users’ location strings and tries to convert them into longitude and latitude data when possible. We use these coordinates to map out the users on a world map, showing where the users tweeting that specific hashtag are located. 4. IMPLEMENTATION The entire code has been written in a Python 3.7 [14] development environment using the Anaconda package manager [15]. The code has been pushed to Github [16], where anyone can download it and start gathering their own data and performing analysis on Twitter. We used Tweepy [17], a Python wrapper for the Twitter API, to gather the initial tweet data. Our code searches for a query term on the Twitter network, which returns a list of JSON objects of tweets matching that query term. For network analysis purposes we recommend searching for hashtags, although the code can be used to search for URLs, names, places, phrases, and other arbitrary terms as well. The JSON object contains a lot of information regarding the tweet and the user. We collect the text of the tweet and all the hashtags and URLs mentioned in it, along with the user’s ID, followers count, and location data. In addition to the tweet data, we then perform another API request on all the user IDs collected to find follower relationships between the users tweeting the query term. This gives us a directed graph with the users tweeting about a particular term as nodes, and the directed edges between them representing the direction of the flow of information. After gathering all the data using Tweepy, we store it in CSV files for easy access and storage. The CSV files are then loaded into the data analysis script, which cleans the data, performs the location lookup using Geopy, and sets up the graphs using NetworkX [18]. We create a directed graph based on the “followed by” relationship; the arrow indicates the flow of information. 
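The pairwise follower checks that produce these directed edges can be sketched as a loop over ordered user pairs, with the friendship API call stubbed out by a hard-coded toy relation (the names and edges here are invented for illustration); in the real pipeline, the resulting edge list is loaded into a NetworkX `DiGraph`:

```python
from itertools import permutations

# Stub for the friendship API call: does `a` follow `b`? Toy data only.
FOLLOWS = {("alice", "bob"), ("carol", "bob"), ("bob", "alice")}

def follows(a, b):
    return (a, b) in FOLLOWS

def build_follower_edges(users):
    """Check every ordered pair of the M users (M*(M-1) checks, on the
    order of M^2 API calls) and return a directed edge for each
    follower relationship found."""
    return [(a, b) for a, b in permutations(users, 2) if follows(a, b)]

edges = build_follower_edges(["alice", "bob", "carol"])
```

Feeding `edges` to `networkx.DiGraph(edges)` yields the graph used throughout the analysis section, with arrow direction chosen to match the "followed by" information-flow convention.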
We also use Plotly [19] to plot various graphs, line and scatter plots for our results. Figure 3 shows a graph of user connections for the hashtag query #OntarioTech. 5. ANALYSIS Once we have gathered all our data and constructed the network graph G, we have access to a host of tools to perform analysis on the data. 5.1. Linear Degree Distribution For the given network of users, we plot their total degree, in-degree (following) and out-degree (followed by). Figure 4 shows these plots for the graph G of users who used the hashtag #OntarioTech. 5.2. Average Clustering We compute the average clustering coefficient for the graph G. The clustering coefficient for the graph is the average, \[ C = \frac{1}{n} \sum_{u \in V} c_u \] where \( c_u \) is the clustering of node u, \[ c_u = \frac{T(u)}{deg^{tot}(u)(deg^{tot}(u) - 1) - 2deg^{\leftrightarrow}(u)} \] where \( T(u) \) is the number of directed triangles through node u, \( deg^{tot}(u) \) is the sum of the in-degree and out-degree of u, and \( deg^{\leftrightarrow}(u) \) is the reciprocal degree of u. 5.3. Average Degree Connectivity The average degree connectivity is the average nearest-neighbor degree of nodes with degree k; this quantity is also known as the “k nearest neighbors” degree. For a given node i, \[ k_{nn,i}^w = \frac{1}{s_i} \sum_{j \in N(i)} w_{ij}k_j \] where \( s_i \) is the weighted degree of node i, \( w_{ij} \) is the weight of the edge that links i and j, and \( N(i) \) are the neighbors of node i. Figure 4 shows the plot of the average degree connectivity against degree k. 5.4. 
Betweenness Centrality Betweenness centrality of a node \( v \) is the sum of the fractions of all-pairs shortest paths that pass through \( v \): \[ c_B(v) = \sum_{s,t \in V} \frac{\sigma(s,t|v)}{\sigma(s,t)} \] where \( V \) is the set of nodes, \( \sigma(s,t) \) is the number of shortest \((s,t)\)-paths, and \( \sigma(s,t|v) \) is the number of those paths passing through some node \( v \) other than \( s,t \). If \( s=t \), \( \sigma(s,t)=1 \), and if \( v \in \{s,t\} \), \( \sigma(s,t|v)=0 \). Figure 6 shows the betweenness centrality of the users who tweeted with the hashtag #OntarioTech. ![Figure 6. Betweenness Centrality calculation for the users who tweeted with the hashtag #OntarioTech. This shows @OntarioTech_U as the most in-between user.](image) 5.5. Closeness Centrality Closeness centrality of a node \( u \) is the reciprocal of the average shortest-path distance to \( u \) over all \( n-1 \) reachable nodes, \[ C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)} \] where \( d(v, u) \) is the shortest-path distance between \( v \) and \( u \), and \( n \) is the number of nodes that can reach \( u \). Notice that for directed graphs the closeness distance function computes the incoming distance to \( u \). Figure 7 shows the closeness centrality for the in-degree of the graph of users who tweeted with the hashtag #OntarioTech. ![Figure 7. Closeness Centrality for the in degree of the graph of users who tweeted with the hashtag #OntarioTech](image) 5.5.2. Closeness Centrality (Wasserman and Faust) Wasserman and Faust [21] propose an improved formula for graphs with more than one connected component. The result is “a ratio of the fraction of actors in the group who are reachable, to the average distance” from the reachable actors. Nodes from small components receive a smaller closeness value. 
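All of the clustering, connectivity, and centrality measures above are available off the shelf in NetworkX [18]. A sketch on a toy directed graph (not the #OntarioTech data), assuming a recent NetworkX where `closeness_centrality` exposes the `wf_improved` flag toggling the Wasserman-Faust variant just introduced:

```python
import networkx as nx

# Toy directed graph standing in for the follower network.
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3)])

avg_clust = nx.average_clustering(G)          # directed clustering coefficient
conn = nx.average_degree_connectivity(G)      # avg nearest-neighbor degree per k
betw = nx.betweenness_centrality(G)           # normalized betweenness
close_plain = nx.closeness_centrality(G, wf_improved=False)  # plain closeness
close_wf = nx.closeness_centrality(G)         # Wasserman-Faust (the default)
```

On this graph, node 2 lies on no shortest path between other node pairs, so its betweenness is 0, while nodes 1 and 3 each mediate one shortest path.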
Letting \( N \) denote the number of nodes in the graph, \[ C_{WF}(u) = \frac{n - 1}{N - 1} \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)} \] Figure 8 shows the results of this Wasserman-Faust closeness centrality measure. ![Figure 8. Closeness Centrality based on the Wasserman-Faust algorithm for the in degree of the graph of users who tweeted with the hashtag #OntarioTech](image) 5.6. Community Detection For community detection we use the Louvain heuristic [20]. The method is a greedy optimization method that attempts to optimize the “modularity” of a partition of the network. Modularity is defined as: \[ Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j) \] where \( A_{ij} \) represents the edge weight between nodes i and j; \( k_i \) and \( k_j \) are the sums of the weights of the edges attached to nodes i and j, respectively; \( 2m \) is the sum of all of the edge weights in the graph; \( c_i \) and \( c_j \) are the communities of the nodes; and \( \delta \) is the Kronecker delta function. Figure 9 shows the result of applying this community detection algorithm on the undirected graph of the users who tweeted the hashtag #OntarioTech. ![Community Detection algorithm](image) Figure 9. Community Detection algorithm, based on modularity optimization, on the users who tweeted the hashtag #OntarioTech 5.7. Location Data For all the users we ran a check on their geographical coordinates. Twitter allows users to enter an address text string for their location. This requires an additional cleaning step to get geographical coordinates from the address text. We use the open-source Nominatim framework to convert readable address text to location coordinates. Figure 10 shows the location data fetched for the users who tweeted with the hashtag #OntarioTech. ![Location map](image) Figure 10. Location map for the users who tweeted with the hashtag #OntarioTech 5.8. 
Degree Distribution Plots For each hashtag analysis we also plot degree-distribution plots for the users’ network behind it. We plot the frequency of degrees, the cumulative frequency distribution, and the complementary cumulative frequency distribution for both the in-degrees and out-degrees of the graph. Figure 11 shows these plots for the network of users who tweeted with the hashtag #OntarioTech. ![Degree Distribution Plots](image) Figure 11. Plots of Frequency, CFD and CCDF against degrees for in-degrees and out-degrees of the network of users who tweeted with hashtag #OntarioTech In the case of #OntarioTech, the degree distribution does not follow a power-law distribution. This is probably because this is the new name of the university and it is primarily being promoted by a small connected group of people, which deviates from a real-world model. 6. FUTURE WORK There are many directions in which we can extend this work. The framework can be made into a proper Python library, making it even more accessible and usable for network analysis work. More features can be added, such as animated simulations of the flow of information or hashtags over a network, similar to an infection spreading through a network. This can be achieved using all the existing data collected with the code base and plugging that data into a JavaScript library for graphs and simulations. We can also try answering questions such as: How are people connected on Twitter? Do people with more followers influence more people? How does information flow through retweets? What is a typical topology of a network on Twitter? How are different hashtags correlated with one another? And so on. We can answer many such questions once we have access to sufficient data and have constructed the underlying network of users. We can also conduct a detailed study of external influence on various hashtags that trend over the network; in particular, how different “News Cycle” stories evolve over time on the social media platform. 
This would be similar to the work on meme tracking done by Leskovec et al. [1]. 7. CONCLUSION In this project we have created a Python framework for collecting key network data from Twitter and running network analysis algorithms. The code can query any term, returning all matching tweets. We collect key data points from the tweet text and metadata, which we use to construct the user network data. With the network graph we implement various network analysis algorithms such as degree distribution, closeness and betweenness centrality, clustering, and community detection. Using this repository of code, users can start performing detailed analysis on Twitter at a very granular level: analyzing individual hashtag propagation on a daily scale, performing network analysis on the users tweeting those hashtags, and mapping out the location data of those users. ACKNOWLEDGMENTS We would like to thank Twitter Inc. for providing an API to their service; the contributors to Tweepy, the Python wrapper for the Twitter API; Geopy for providing an API to Nominatim OpenStreetMap; and the Plotly and NetworkX Python library contributors for providing comprehensive charting and plotting tools and detailed network creation and analysis tools. Finally, we thank AmirAli Salehi-Abari for his guidance and knowledge. REFERENCES [10] Personal repository code for the project: https://github.com/caliknadeen/Twitter-Network-Analysis [16] GitHub - The world’s leading software development platform: https://github.com/ [18] NetworkX - Python package for the creation, manipulation, and study of complex networks: https://networkx.github.io/
SOFTWARE VERSIONING IN THE CLOUD Towards Automatic Source Code Management Filippo Gioachin∗, Qianhui Liang∗, Yuxia Yao∗ and Bu-Sung Lee† ∗Hewlett-Packard Laboratories Singapore, Singapore, Republic of Singapore †Nanyang Technological University, Singapore, Republic of Singapore HP Laboratories HPL-2012-43 Keywords: Software development, Cloud computing, Version control system, Revisions, Collaboration. Abstract: With the introduction of cloud computing and Web 2.0, many applications are moving to the cloud environment. Version control systems have also taken a first step towards this direction. Nevertheless, existing systems are either client-server oriented or completely distributed, and they don’t match exactly the nature of the cloud. In this paper we propose a new cloud version control system focusing on the requirements imposed by cloud computing, that we identified as: concurrent editing, history rewrite, accountability, scalability, security, and fault tolerance. 
Our plan is to tackle these issues in a systematic way, and we present in this paper an overview of the solutions organized in three separate layers: access API, logical structure, and physical storage. 1 INTRODUCTION With the advent of cloud computing, and the ubiquitous integration of network-based devices, online collaboration has become a key component in many aspects of people’s lives and their work experience. Users expect their data to be accessible from everywhere, and proximity to other users in the real world is no longer essential for collaboration. In fact, many projects and industries are set up such that different people are not geospatially co-located, but rather distributed around the globe. Projects range from simple asynchronous discussions on certain topics of interest, to the creation of books or articles, to the development of multipurpose software. Software development, in particular, is a complex process composed of many phases, including analysis, coding, integration, and testing. A large number of programmers usually collaborate on the development of a product, so having efficient and effective collaboration tools is key to improving productivity, enabling higher quality of the software developed, and shortening time-to-market. During the coding phase, programmers normally use version control systems (VCS) in order to record the changes made to the code. This is useful for multiple reasons. First of all, it allows different programmers to better coordinate independent changes to the code and simplifies the task of integrating these changes together, in particular if the integration is performed or augmented by code reviewers. Finally, during the testing phase, it helps to detect more quickly why or how certain bugs have been introduced, and therefore to correct them more easily. In order to be effective for the tasks described, the revision history needs to be clean and clear. 
If too few revisions are present, each exposing a very large set of modifications, the reader may have a hard time understanding exactly which changes produce certain results. On the other hand, when committing changes very often and in small amounts, the history may be cluttered with irrelevant changes that have already been reverted or further modified. Although the revision history is believed by many to be an immutable entity, allowing the freedom to refine it can be beneficial to better understand the code at later stages of the software development process, thus increasing productivity. These refinements to the history can be suggested either by the user explicitly, or by automatic tools that inspect the existing history and detect better sets of changes. Clearly, history refinement needs to be organized and regulated appropriately, and we shall describe some principles in the later sections. For the moment, we would like to highlight the fact that changes to the history ought not to affect users during their routine tasks of developing the software. This entails that the tools, automatic or otherwise, ought to have a complete view of all the revisions existing in the system. A cloud environment is the most favorable for such tools, since all the data is known and always accessible, and users are not storing part of it on local disks. The storage and management of many project repositories in the cloud also requires special attention. This management does not affect the user directly, but it can help to significantly improve system performance and reduce the resource allocation of cloud providers. For example, if a repository and all its copies in use by different users can share the same underlying resource, the space requirement can be reduced. Ideally, the cloud should minimize the resource allocation to the minimum necessary to guarantee its users the correct retention of data in case of failures. 
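One standard way to let a repository and all its copies share the same underlying resource is content-addressed storage: identical file contents are stored once under their hash, and each clone only keeps references. This is a generic illustration of the deduplication idea, not the specific storage design the paper proposes later:

```python
import hashlib

class BlobStore:
    """Deduplicating blob store: identical content is kept exactly once,
    keyed by its SHA-256 digest; repository clones hold only the keys."""
    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blobs[key] = data  # idempotent: same content, same key
        return key

    def get(self, key: str) -> bytes:
        return self.blobs[key]

store = BlobStore()
k1 = store.put(b"print('hello')")   # file in the original repository
k2 = store.put(b"print('hello')")   # same file in a user's clone
```

Both `put` calls return the same key and the content is physically stored once, so additional clones cost almost nothing beyond their key lists.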
This paper continues by describing related work in Section 2, and how our tool positions itself among existing solutions. Subsequently, Section 3 focuses on the requirements we deem important for our cloud versioning system, and Section 4 gives an overview of the system design. Final remarks conclude the paper. 2 RELATED WORK Version control systems are heavily used in software development, and many solutions are available, starting from the oldest, the Source Code Control System (SCCS) and the Concurrent Versions System (CVS) (Grune, 1986), which are based on a centralized client-server model. From that time until today, the definition of change-sets has been delegated to the user, forcing him to spend time recording the changes, while the system merely keeps track of the different versions. The third generation of centralized systems is currently embodied by Subversion (Pilato et al., 2004). All these systems are based on a central repository storing all the revisions; any operation to update the history is performed with an active connection to the server. Once a change-set has been committed to the server, and becomes part of it, it cannot be modified anymore. This implies that history is immutable, and can only progress forward. This is generally viewed as an advantage, but it forces mistakes to be corrected with new change-sets once they have been pushed to the central repository. Given their structure, these systems also impose a limit of one commit operation at a time on a repository, thus limiting the potential scalability of the infrastructure. Moreover, in Subversion, there is a single point of failure due to the presence of a local .svn directory on the user’s local disk. If this directory is tampered with or deleted, the local modifications may become impossible to commit, or, in the worst scenario, corruption could be propagated to the central repository. 
Given the presence of complete snapshots of the repository on local disks, the central server does not have access to all the knowledge. In particular, in our view, this limits the applicability of automatic tools capable of refining the history. More recently, distributed repositories like Git (Loeliger, 2009) or Mercurial (Mackall, 2006) have started to appear as a substitute for centralized repositories, and have become extremely popular. A key aspect of their success is their capability to perform many operations locally, even when connectivity is not available. This approach has the disadvantage that tracking what users are doing has become even harder, since users can now maintain locally not only their most recent modifications, but a much larger number of change-sets. This hinders even more what automatic tools can do to improve the revision history. As mentioned earlier, in all versioning systems, the history is supposedly immutable, and projects should only move forward. In fact, the history can be arbitrarily changed by contributors in any way. For example, they can apply a new set of changes to an old version, and then overwrite the repository, bypassing safety checks. When the history is modified in such a way, no trace is left of the original history once the new one has propagated. This poses a serious problem for the accountability of changes in future reviewing processes. Several additions to distributed systems have enabled them to be seamlessly shared in the cloud. GitHub (GitHub, 2008) and SourceForge (Geeknet, 1999) are two such examples. These also alleviate another problem of distributed VCSs, which is the management and coordination of all the clones of the same repository. On top of these hosting facilities, other efforts are integrating repositories with online software development, where users edit their code directly in the cloud. One example is Cloud9 (Daniëls and Arends, 2011). 
Even in this case, due to the limitations of current VCSs, a new copy of the data is made per user. The user then accesses and commits changes only to this personal copy. This imposes unnecessary stress on cloud infrastructures, requiring many copies of the same repository to exist, well beyond what is required for fault tolerance purposes. As mentioned earlier, all the VCSs seen so far require users to explicitly take charge of the versioning of their code by defining the changes they made and committing them to the VCS. This process of committing change-sets has been automated in domains other than software development. Google Docs (Google, 2006) and Zoho Writer (Zoho, 2005) are two examples where a history is maintained transparently to the users in the realm of office productivity tools: users write their documents online, and revisions are automatically created upon save. This history is limited to a single branch, and merges are always resolved immediately. Unfortunately, for software development, merges cannot yet be resolved automatically, and branches are very important. Our cloud versioning tool positions itself between the two kinds of systems just described: automatic versioning used in office productivity, and explicit revision management used in software engineering. History is written automatically by the system upon save, and updated transparently as the user continues to modify the source code. The close resemblance to traditional version control tools enables our system to support branches and address the other issues typical of software engineering. Automatic tools can harvest the history and change it to present it to users in a better format; users can also specify how revisions should be created, if needed. In particular, the extreme flexibility at the basis of our design enables new automatic tools to play an important role in the future. 
The field of software engineering has been vibrant with new ideas on accurate automatic detectors of relevant change-sets. For example, new tools are being proposed to discover code changes systematically based on the semantics of the source code (Kim and Notkin, 2009; Duley et al., 2010). These tools are complementary to our work, and can use the cloud VCS to refine the history into something that better describes the changes made to the code, without requiring user intervention. 3 DESIRED REQUIREMENTS After we studied in detail current solutions for software revision control, and the structure of cloud computing, we realized that these did not match. We therefore analyzed the features required of a VCS residing in the cloud, and consolidated them into the following six areas. Security. Content stored in the system may be confidential and may represent an important asset for a company. In a cloud environment, content will be exposed to possible read, write, and change operations by unauthorized people. This naturally poses security concerns for developers and companies using this VCS. We identified three key issues for security: access control, privacy, and data deletion. When new content is created, appropriate access control policies must be enforced, either automatically or manually, to limit what operations different users can perform on the data. When existing content is updated, these policies may need to be updated accordingly, for example by revoking access to an obsolete release of a library. Privacy is a closely related issue, and it refers to the need to protect user assets against both unauthorized access, such as from other users of the cloud, and illegal access, such as from malicious parties or administrators. Finally, some data stored in the VCS may be against local government regulations or company policies, in which case all references must be permanently destroyed. Fault Tolerance. Fault tolerance is always a benefit claimed for cloud-based offerings. 
Current VCSs do not support fault tolerance by themselves. Instead, backups of the systems where they reside are used to provide higher reliability against failures. We foresee the need to make fault tolerance an integral part of the VCS itself. Issues to be addressed in this process of integration include the methodology to create replicas of the repositories transparently, the granularity at which repositories are partitioned, the speed of propagation of updates, and the location of the copies retained. Scalability. As the number of developers using the VCS changes, the cloud needs to adapt by increasing or reducing the amount of resources allocated. In particular, codes and their revisions can easily grow to the extent that more nodes are needed even to hold a single project. A distributed storage architecture is therefore a natural option. Furthermore, scalability entails the efficient use of the allocated resources. If this were not the case, and too many resources were wasted, scalability as a whole could be severely hindered. For example, if the development team of a project is geographically distributed, it would be beneficial to respond to requests from servers located near the specific users, and not from a single server. History Refinement. As explained earlier, after changes are made to the source code, and these have been committed to the repository, developers may find it useful or necessary to improve their presentation. For example, if a bug was corrected late in the development of a feature, it is an unnecessary complication to show it explicitly to reviewers; instead, it could be integrated at the point in time where the bug first appeared. Naturally, the original history is also important, and it should be preserved alongside the modified one. In traditional VCSs this is impossible to accomplish, and even the simpler action of rewriting the history (without maintaining the old one) may not be easy to accomplish. Accountability. 
Every operation performed in the system should be accounted for, be it the creation of a new version, the modification of access permissions, the modification of the history, or the deletion of protected data. In traditional VCSs, only the first two are correctly accounted for, and only if the system is not tampered with. All the others are ignored; for example, if the history is modified, no trace is left of who modified it or why. Clearly, even who has access to the accounting information needs to be correctly regulated. **Concurrent Editing.** Simultaneous writes are a highly desirable feature in cloud versioning. When many developers of a project are working on the same items, it is likely that they will commit to the repository at the same time, especially if commits are automated, say on every auto-save. In this case, there will be many simultaneous writes to the same objects on different branches. (Note that here we are not trying to automatically resolve merge conflicts, since different users will never be sharing the same branch. Merges are still resolved manually.) There are tradeoffs between whether we want a highly available system or a highly consistent one. The key, as we shall see, is to determine the minimum consistency required to match user expectations, and provide therefrom the highest availability possible. ## 4 SYSTEM DESIGN To address the requirements described above for our cloud VCS, we divided the design into three layers, as shown in Figure 1. At the topmost level, the access API is responsible for coordinating the interaction between the system and external applications. At the core of the VCS, the logical structure defines how the data structures containing the users’ data and the relative privileges are implemented. At the lowest level, the physical storage layer is responsible for mapping the logical structure onto the physical servers that embody the cloud infrastructure; it takes care of issues like replication and consistency. 
Each shall be described in the remainder of this section.

![System Design Layout](image)

**Figure 1:** System design layout.

### 4.1 Access API

This layer is the one visible to other tools interacting with the VCS. Its correct definition is essential to enable third parties to easily build powerful tools.

**Version Control Commands.** Like any VCS, our cloud solution has a standard API to access the stored versions and parse the (potentially multiple) histories. This API is designed to allow not only traditional operations, like the creation of new commits and branches, but also more unconventional operations, like the creation of a new version of the history between two given commits.

**File System Access.** Files in the cloud are generally accessed through web interfaces that do not require seeing all the files in a project. For example, in a project with one thousand files, a developer may be editing only a dozen of them. The other files do not need to occupy explicit space in the cloud storage, other than that in the VCS itself. Therefore, the cloud VCS provides a POSIX-like interface to operate as a regular file system. This allows the cloud to avoid unnecessary replications when files are only needed for a very short amount of time, such as during the compilation phase when executables are produced.

### 4.2 Logical Structure

The logical structure of the system is the one responsible for the correct functionality of the VCS. It defines how data is laid out in the system to form the revision histories, and how users can access projects and share them.

**Data Layout.** Our VCS is based on the concept of snapshots: each version represents an image of all the files at the moment the version was created. The basis of the system is a multi-branching approach, where each user is in charge of his own branches, and he has exclusive write access to them.
This implies that during the update process, when new revisions are created, the number of conflicts is reduced, allowing an easier implementation of concurrent editing. Given the lack of a traditional master or trunk branch, any branch is a candidate to become a release or a specially tagged branch. Privileges and organization ranks will determine the workflow of information. Due to the requirements of history refining and of accountability for its changes, each component stored in the system is updatable in place, without implying a change in its identifying key. In addition, to enable automatic tools, as well as humans, to better interact with the system, each node has associated tags, which describe how the node was created and what privileges the node itself grants to different tools and users. An important feature of the data layout is the capability to protect its integrity. In this context, we only need to detect a corruption in the system, since data correction can be performed with help from the lower layer, and in particular from the data replication mechanism. Corruption may occur due to bugs in the implementation of the API at the layer above, or to malfunctioning of the hardware devices. The latter problem can be solved by adding simple error detection codes like SHA or CRC. As for the former, automatic tools will check the status of the system after risky operations to ensure its integrity. In all cases, when a corruption has occurred, another copy will be used to keep the cloud service active, and an appropriate notification will be issued.

**Privileges.** We define each individual user as a developer. Each developer can create his own projects and, at the same time, participate in collaborative projects. He can also link other projects to use as external libraries or plugins.
Each collaborative project, or simply project, may involve a large number of developers, each with his own set of privileges. Some developers may be actively developing the software, and therefore be granted full access. Others may simply be reviewing the code or managing releases; in this case, they will be granted read-only access. When a project is made available for others to use as a library, a new set of privileges can be expressed: for example, the inherited code may be read-only, writable but without repackaging capability, or fully accessible. Moreover, some developers may also be allowed to completely delete some portion of the stored data, or to change the access privileges of other developers or of the project as a whole. Finally, automatic tools, even though not directly developers, have their own specific access privileges. For example, they may be allowed to harvest anonymous information about the engineering process for statistical analysis, or to propose new revision histories to better present the project to reviewers.

### 4.3 Physical Storage

The underlying physical storage design is critical to the overall performance, especially to support a large-scale VCS in the cloud, where failures are common. Our design considers the various aspects of a distributed storage architecture: scalability, high availability, consistency, and security.

**Scalability.** Having a single large centralized storage for all the repositories would lead to poor performance and no scalability. Therefore, our design contemplates a distributed storage architecture. User repositories will be partitioned and distributed across different storage nodes. As the number of users grows, new partitions can easily be added on demand, allowing the system to quickly scale out.
Naturally, the definition of the principles driving the repository partitioning needs careful design, so that when new storage nodes are added, the data movement necessary to recreate a correct partitioning layout is minimized.

**Replication.** Users expect continuous access to the cloud VCS. However, in cloud environments built with commodity hardware, failures occur frequently, leading both to disk storage unavailability and to network splits, where certain geographical regions become temporarily inaccessible. To ensure fault tolerance and high availability, user repositories are replicated across different storage nodes and across data centers. Therefore, even with failures and some storage nodes unreachable, other copies can still be accessed and used to provide continuous service. The replication scheme eliminates the single point of failure present in other centralized, or even distributed, architectures. In our scheme, there are no master and slave storage nodes.

**Consistency.** Software development involves frequent changes and updates to the repositories. In a traditional scenario, changes are immediately reflected globally (e.g., a `git push` command atomically updates the remote repository). Consistency becomes more complicated when the repository is distributed as multiple copies on different storage nodes, and multiple users have access to the same repository and perform operations simultaneously. If strong consistency is enforced, the system’s availability suffers (Brewer, 2010). In many distributed cloud storage systems, eventual consistency has been used to enable high availability (meaning that if no new updates are made to an object, all accesses will eventually return the last updated copy of that object). In our scheme, we guarantee “read-your-writes” consistency for any particular user: old copies of an object will never be seen by a user after he has updated that object.
We also guarantee “monotonic write” consistency: for a particular process, all writes are serialized, so that later writes supersede previous ones. Finally, we guarantee “causal” consistency, meaning 1) that if an object X written by process A is read by process B, which later writes object Y, then any other process C that reads Y must also read X; and 2) that once process A has informed process B about the update of object Z, B can no longer read the old copy of Z.

**Encryption.** Source code is valuable to developers, and it should be unreadable to users who have not been explicitly granted access to it. We envision encryption deployed at all levels of the storage system to protect users’ information and source code. At the user level, users create credentials to prevent unauthorized access (for example, in the form of RSA keys). At the source-code level, source code is protected with an encryption algorithm before being written to persistent storage. At the hardware level, individual disks are encrypted to further prevent loss or disclosure of users’ data.

## 5 CONCLUSIONS

In this paper we discussed how online collaboration, a key technology for productive software development and timely delivery to market, can be enhanced to meet user needs. We surveyed the tools currently available for collaboration, and the shortcomings that can hinder their effectiveness. We proposed a new solution based on a cloud version control system, where the entire repository is hosted securely in the cloud. Users can access their code from anywhere, without requiring pre-installation of specific software, and seamlessly from different devices. Moreover, the cloud VCS is planned with flexibility at its foundation, thus enabling automatic tools, as well as humans, to modify the state of the history in a consistent manner, providing better insight into how the software has evolved.
{"Source-Url": "http://www.hpl.hp.com/techreports/2012/HPL-2012-43.pdf", "len_cl100k_base": 5177, "olmocr-version": "0.1.43", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 19621, "total-output-tokens": 6016, "length": "2e12", "weborganizer": {"__label__adult": 0.0002720355987548828, "__label__art_design": 0.00022327899932861328, "__label__crime_law": 0.00022220611572265625, "__label__education_jobs": 0.00040984153747558594, "__label__entertainment": 3.838539123535156e-05, "__label__fashion_beauty": 8.887052536010742e-05, "__label__finance_business": 0.00014722347259521484, "__label__food_dining": 0.0002493858337402344, "__label__games": 0.0002715587615966797, "__label__hardware": 0.0004363059997558594, "__label__health": 0.00026488304138183594, "__label__history": 0.00011044740676879884, "__label__home_hobbies": 4.6372413635253906e-05, "__label__industrial": 0.00015115737915039062, "__label__literature": 0.0001308917999267578, "__label__politics": 0.0001227855682373047, "__label__religion": 0.00023543834686279297, "__label__science_tech": 0.002605438232421875, "__label__social_life": 6.61611557006836e-05, "__label__software": 0.006092071533203125, "__label__software_dev": 0.9873046875, "__label__sports_fitness": 0.00016880035400390625, "__label__transportation": 0.00021660327911376953, "__label__travel": 0.00014162063598632812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28754, 0.02496]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28754, 0.51477]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28754, 0.92711]], "google_gemma-3-12b-it_contains_pii": [[0, 1044, false], [1044, 5297, null], [5297, 10561, null], [10561, 15691, null], [15691, 20233, null], [20233, 25708, null], [25708, 28754, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1044, true], [1044, 5297, null], [5297, 10561, null], 
[10561, 15691, null], [15691, 20233, null], [20233, 25708, null], [25708, 28754, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28754, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28754, null]], "pdf_page_numbers": [[0, 1044, 1], [1044, 5297, 2], [5297, 10561, 3], [10561, 15691, 4], [15691, 20233, 5], [20233, 25708, 6], [25708, 28754, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28754, 0.0]]}
olmocr_science_pdfs
2024-11-22
2024-11-22
5b77c991462078d569aac46af92e654aff075e63
FastBERT: a Self-distilling BERT with Adaptive Inference Time

Weijie Liu¹, Peng Zhou², Zhe Zhao², Zhiruo Wang³, Haotang Deng² and Qi Ju²,*

¹Peking University, Beijing, China ²Tencent Research, Beijing, China ³Beijing Normal University, Beijing, China

dataliu@pku.edu.cn, {rickzhou, nlpzhezhao, haotangdeng, damonju}@tencent.com, SherronWang@gmail.com

Abstract

Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, since such heavy models can hardly be deployed with limited resources. To improve their efficiency with an assured model performance, we propose a novel speed-tunable FastBERT with adaptive inference time. The speed at inference can be flexibly adjusted under varying demands, while redundant calculation on individual samples is avoided. Moreover, the model adopts a unique self-distillation mechanism at fine-tuning time, further enabling greater computational efficacy with minimal loss in performance. Our model achieves promising results on twelve English and Chinese datasets. It can be sped up over a wide range, from 1 to 12 times relative to BERT, given different speedup thresholds that trade off speed and performance.

1 Introduction

The last two years have witnessed significant improvements brought by language pre-training, such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and XLNet (Yang et al., 2019). By pre-training on unlabeled corpora and fine-tuning on labeled ones, BERT-like models have achieved huge gains on many Natural Language Processing tasks. Despite this gain in accuracy, these models incur greater computational cost and slower inference speed, which severely impairs their practicality. Actual settings, especially with limited time and resources in industry, can hardly put such models into operation. For example, in tasks like sentence matching and text classification, one often needs to process billions of requests per second.
What’s more, the number of requests varies with time. In the case of an online shopping site, the number of requests during the holidays is five to ten times that of workdays. A large number of servers needs to be deployed to enable BERT in industrial settings, and many spare servers need to be reserved to cope with peak request periods, incurring huge costs. To improve usability, many attempts at model acceleration have been made, such as quantization (Gong et al., 2014), weight pruning (Han et al., 2015), and knowledge distillation (KD) (Romero et al., 2014). As one of the most popular methods, KD requires additional smaller student models that depend entirely on the bigger teacher model and trade task accuracy for ease of computation (Hinton et al., 2015). Reducing model size to achieve an acceptable speed-accuracy balance, however, only solves the problem halfway, for the model remains fixed in size, rendering it unable to cope with drastic changes in request volume. By inspecting many NLP datasets (Wang et al., 2018), we discerned that samples have different levels of difficulty. Heavy models may over-calculate simple inputs, while lighter ones are prone to fail on complex samples. As recent studies (Kovaleva et al., 2019) have shown redundancy in pre-trained models, it is useful to design a one-size-fits-all model that caters to samples of varying complexity and gains computational efficacy with the least loss of accuracy. Based on this appeal, we propose FastBERT, a pre-trained model with a sample-wise adaptive mechanism. It can dynamically adjust the number of executed layers to reduce computation. The model also has a unique self-distillation process that requires minimal changes to the structure, achieving faster yet comparably accurate outcomes within a single framework.
Our model not only achieves a considerable speedup over the BERT model (2 to 11 times), but also attains competitive accuracy in comparison to heavier pre-trained models. Experimental results on six Chinese and six English NLP tasks demonstrate that FastBERT achieves a huge reduction in computation with very little loss in accuracy. The main contributions of this paper can be summarized as follows:

- This paper proposes a practical speed-tunable BERT model, namely FastBERT, that balances speed and accuracy in response to varying request volumes;
- The sample-wise adaptive mechanism and the self-distillation mechanism are proposed in this paper for the first time, and their efficacy is verified on twelve NLP datasets;
- The code is publicly available at https://github.com/autoliuweijie/FastBERT.

2 Related work

BERT (Devlin et al., 2019) can learn universal knowledge from massive unlabeled data and produce performant results. Many works have followed: RoBERTa (Liu et al., 2019), which uses a larger corpus and longer training; T5 (Raffel et al., 2019), which scales up the model size even more; UER (Zhao et al., 2019), which pre-trains BERT on different Chinese corpora; and K-BERT (Liu et al., 2020), which injects knowledge graphs into the BERT model. These models achieve increased accuracy with heavier settings and even more data. However, such unwieldy sizes often hamper deployment under stringent conditions. To be more specific, BERT-base contains 110 million parameters in a stack of twelve Transformer blocks (Vaswani et al., 2017), while BERT-large expands to 24 layers. ALBERT (Lan et al., 2019) shares the parameters across layers to reduce the model size. Obviously, the inference speed of these models is much slower than that of classic architectures (e.g., CNN (Kim, 2014), RNN (Wang, 2018), etc.). We think a large proportion of the computation is redundant.
Knowledge distillation: Many attempts have been made to distill heavy models (teachers) into their lighter counterparts (students). PKD-BERT (Sun et al., 2019a) adopts an incremental extraction process that learns generalizations from intermediate layers of the teacher model. TinyBERT (Jiao et al., 2019) performs a two-stage learning involving both general-domain pre-training and task-specific fine-tuning. DistilBERT (Sanh et al., 2019) further leveraged the inductive bias within large models by introducing a triple loss. As shown in Figure 1, student models often require a separate structure, whose effectiveness, however, depends mainly on the gains of the teacher. They are as indiscriminate to individual cases as their teachers, and only get faster at the cost of degraded performance.

Adaptive inference: Conventional approaches to adaptive computation operate token-wise or patch-wise, either adding recurrent steps to individual tokens (Graves, 2016) or dynamically adjusting the number of executed layers inside discrete regions of images (Figurnov et al.). These local adjustments, however, are not amenable to parallelization. To the best of our knowledge, there has been no work so far applying adaptive mechanisms to pre-trained language models for efficiency improvements.

3 Methodology

Distinct from the above efforts, our approach fuses adaptation and distillation into a novel speed-up method, shown in Figure 2, achieving competitive results in both accuracy and efficiency.

3.1 Model architecture

As shown in Figure 2, FastBERT consists of a backbone and branches. The backbone is built upon a 12-layer Transformer encoder with an additional teacher classifier, while the branches are student classifiers appended to each Transformer output to enable early exits.

3.1.1 Backbone

The backbone consists of three parts: the embedding layer, the encoder containing a stack of Transformer blocks (Vaswani et al., 2017), and the teacher classifier.
The structure of the embedding layer and the encoder conform with those of BERT (Devlin et al., 2019). Given the sentence length \( n \), an input sentence \( s = [w_0, w_1, \ldots, w_n] \) will be transformed by the embedding layer into a sequence of vector representations \( e \) as in (1),
\[
e = \text{Embedding}(s), \tag{1}
\]
where \( e \) is the summation of word, position, and segment embeddings. Next, the Transformer blocks in the encoder perform layer-by-layer feature extraction as in (2),
\[
h_i = \text{Transformer}_i(h_{i-1}), \tag{2}
\]
where \( h_i \) (\( i = -1, 0, 1, \ldots, L - 1 \)) is the output feature at the \( i \)th layer, \( h_{-1} = e \), and \( L \) is the number of Transformer layers. Following the final encoding output is a teacher classifier that extracts in-domain features for downstream inference. It consists of a fully-connected layer narrowing the dimension from 768 to 128, a self-attention layer joined to a fully-connected layer that leaves the vector size unchanged, and a fully-connected layer with a softmax function projecting the vector to an \( N \)-class indicator \( p_t \) as in (3), where \( N \) is the task-specific number of classes.
\[
p_t = \text{Teacher Classifier}(h_{L-1}). \tag{3}
\]

### 3.1.2 Branches

To provide FastBERT with more adaptability, multiple branches, i.e. the student classifiers, with the same architecture as the teacher, are added to the output of each Transformer block to enable early exits, especially on simple cases. The student classifiers can be described as (4),
\[
p_{s_i} = \text{Student Classifier}_i(h_i). \tag{4}
\]
The student classifier is designed carefully to balance model accuracy and inference speed, for simple networks may impair the performance, while a heavy attention module severely slows down inference. Our classifier has proven to be lighter with ensured competitive accuracy; detailed verifications are showcased in Section 4.1.
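To make the classifier structure concrete, the following is a minimal pure-Python sketch of one student classifier's forward pass: a fully-connected layer 768 → 128, single-head self-attention over the 128-dimensional states, a size-preserving fully-connected layer, and a final fully-connected projection to \( N \) classes with softmax. Only the layer dimensions come from the paper; the random weights, the single attention head, and the mean-pooling step before the final projection are illustrative assumptions, not the authors' released implementation.

```python
import math
import random

random.seed(0)

def matmul(x, w):
    # x: (n, d_in) times w: (d_in, d_out) -> (n, d_out)
    return [[sum(xi[k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for xi in x]

def softmax(v):
    m = max(v)
    e = [math.exp(a - m) for a in v]
    s = sum(e)
    return [a / s for a in e]

def rand_w(d_in, d_out):
    # random weights only for illustration; a trained model would load real parameters
    return [[random.gauss(0, 0.02) for _ in range(d_out)] for _ in range(d_in)]

def student_classifier(h, n_classes, d_model=768, d_cls=128):
    # h: (seq_len, d_model) hidden states taken from one Transformer layer
    w_down = rand_w(d_model, d_cls)               # fully-connect 768 -> 128
    w_q, w_k, w_v = (rand_w(d_cls, d_cls) for _ in range(3))
    w_mid = rand_w(d_cls, d_cls)                  # fully-connect 128 -> 128
    w_out = rand_w(d_cls, n_classes)              # fully-connect 128 -> N

    z = matmul(h, w_down)
    q, k, v = matmul(z, w_q), matmul(z, w_k), matmul(z, w_v)
    scale = math.sqrt(d_cls)
    # single-head scaled dot-product self-attention (head count is an assumption)
    attn = [softmax([sum(qi[d] * kj[d] for d in range(d_cls)) / scale for kj in k])
            for qi in q]
    ctx = [[sum(a[t] * v[t][d] for t in range(len(v))) for d in range(d_cls)]
           for a in attn]
    mid = matmul(ctx, w_mid)
    # mean-pool over tokens before the class projection (illustrative choice)
    pooled = [sum(row[d] for row in mid) / len(mid) for d in range(d_cls)]
    logits = matmul([pooled], w_out)[0]
    return softmax(logits)                        # N-class probability vector p_s

seq_len, n_classes = 4, 3
h = [[random.gauss(0, 1) for _ in range(768)] for _ in range(seq_len)]
p = student_classifier(h, n_classes)
```

The only expensive sub-layers here are the 768 → 128 projection and the 128-dimensional attention, which is exactly why Table 1 reports the classifier at a small fraction of a Transformer block's FLOPs.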
### 3.2 Model training

FastBERT requires separate training steps for the backbone and the student classifiers. The parameters of one module are always frozen while the other module is being trained. The model is prepared for downstream inference in three steps: pre-training of the major backbone, fine-tuning of the entire backbone, and self-distillation for the student classifiers.

#### 3.2.1 Pre-training

The pre-training of the backbone resembles that of BERT, in the same way that our backbone resembles BERT. Any pre-training method used for BERT-like models (e.g., BERT-WWM (Cui et al., 2019), RoBERTa (Liu et al., 2019), and ERNIE...

### 3.3 Adaptive inference

With the above steps, FastBERT is well-prepared to perform inference in an adaptive manner, which means we can adjust the number of executed encoding layers within the model according to sample complexity. At each Transformer layer, we measure for each sample whether the current inference is credible enough to be terminated. Given an input sequence, the uncertainty of a student classifier’s output \( p_s \) is computed as the normalized entropy in (7),
\[
Uncertainty = \frac{\sum_{i=1}^{N} p_s(i) \log p_s(i)}{\log \frac{1}{N}}, \tag{7}
\]
where \( p_s \) is the output probability distribution, and \( N \) is the number of labeled classes. With this definition of uncertainty, we make an important hypothesis.

**Hypothesis 1.** LUHA: the Lower the Uncertainty, the Higher the Accuracy.

**Definition 1.** Speed: the threshold distinguishing high from low uncertainty.

LUHA is verified in Section 4.4. Both Uncertainty and Speed range between 0 and 1. The adaptive inference mechanism can be described as follows: at each layer of FastBERT, the corresponding student classifier predicts the label of each sample and measures its Uncertainty. Samples with Uncertainty below the Speed are sifted to early outputs, while samples with Uncertainty above the Speed will move on to the next layer.
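Eq. (7) and the exit rule translate directly into a few lines of code. The sketch below is a minimal pure-Python illustration: the per-layer probability vectors and the Speed value are made-up examples, and `adaptive_inference` stands in for running one student classifier after each Transformer layer.

```python
import math

def uncertainty(p):
    """Normalized entropy of a probability vector p, as in Eq. (7); lies in [0, 1]."""
    n = len(p)
    # sum p log p and log(1/N) are both non-positive, so the ratio is in [0, 1]
    return sum(pi * math.log(pi) for pi in p if pi > 0.0) / math.log(1.0 / n)

def adaptive_inference(student_outputs, speed):
    """Return (exit_layer, distribution) at the first layer whose uncertainty
    drops below the Speed threshold. student_outputs is a list of per-layer
    probability vectors, a stand-in for evaluating the student classifiers
    one Transformer layer at a time."""
    for layer, p in enumerate(student_outputs):
        if uncertainty(p) < speed:
            return layer, p
    # no early exit: fall through to the final (teacher) prediction
    return len(student_outputs) - 1, student_outputs[-1]

# Made-up per-layer outputs: near-uniform at layer 0, progressively more peaked.
per_layer = [[0.34, 0.33, 0.33], [0.60, 0.25, 0.15], [0.98, 0.01, 0.01]]
layer, p = adaptive_inference(per_layer, speed=0.9)
# With speed=0.9, layer 0 is still too uncertain, and the sample exits at layer 1.
```

Lowering `speed` toward 0 forces more samples through more layers (higher accuracy, more FLOPs); raising it toward 1 lets more samples exit early, which is the tunable trade-off evaluated in Section 4.
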
Intuitively, with a higher Speed, fewer samples will be sent to higher layers, and overall inference will be faster, and vice versa. Therefore, Speed can be used as a halt value for weighing inference accuracy against efficiency.

**Table 1**: FLOPs of each operation within FastBERT.

<table>
<thead>
<tr><th>Operation</th><th>Sub-operation</th><th>FLOPs</th><th>Total FLOPs</th></tr>
</thead>
<tbody>
<tr><td rowspan="2">Transformer</td><td>Self-attention (768 → 768)</td><td>603.0M</td><td rowspan="2">1809.9M</td></tr>
<tr><td>Feedforward (768 → 3072 → 768)</td><td>1207.9M</td></tr>
<tr><td rowspan="4">Classifier</td><td>Fully-connect (768 → 128)</td><td>25.1M</td><td rowspan="4">46.1M</td></tr>
<tr><td>Self-attention (128 → 128)</td><td>16.8M</td></tr>
<tr><td>Fully-connect (128 → 128)</td><td>4.2M</td></tr>
<tr><td>Fully-connect (128 → N)</td><td>-</td></tr>
</tbody>
</table>

<table>
<thead>
<tr><th rowspan="2">Dataset/Model</th><th colspan="2">ChnSentiCorp</th><th>Book review</th></tr>
<tr><th>Acc.</th><th>FLOPs (speedup)</th><th>Acc.</th></tr>
</thead>
<tbody>
<tr><td>BERT</td><td>95.25</td><td>21785M (1.00x)</td><td>86.88</td></tr>
<tr><td>DistilBERT (6 layers)</td><td>88.59</td><td>10918M (2.00x)</td><td>83.31</td></tr>
<tr><td>DistilBERT (3 layers)</td><td>87.33</td><td>5218M (4.01x)</td><td>81.17</td></tr>
<tr><td>DistilBERT (1 layer)</td><td>81.33</td><td>1858M (11.72x)</td><td>77.40</td></tr>
<tr><td>FastBERT (speed=0.1)</td><td>95.25</td><td>10741M (2.02x)</td><td>86.88</td></tr>
<tr><td>FastBERT (speed=0.5)</td><td>92.00</td><td>3191M (6.82x)</td><td>86.64</td></tr>
<tr><td>FastBERT (speed=0.8)</td><td>89.75</td><td>2315M (9.40x)</td><td>85.14</td></tr>
</tbody>
</table>

**Table 2**: Comparison of accuracy (Acc.) and FLOPs (speedup) between FastBERT and baselines in six Chinese datasets and six English datasets.

4 Experimental results

In this section, we verify the effectiveness of FastBERT on twelve NLP datasets (six English and six Chinese) with detailed explanations.

4.1 FLOPs analysis

Floating-point operations (FLOPs) are a measure of a model's computational complexity, indicating the number of floating-point operations the model performs for a single inference. The FLOPs count is independent of the model's operating environment (CPU, GPU, or TPU) and only reveals computational complexity. Generally speaking, the larger a model's FLOPs count, the longer its inference time. At the same accuracy, models with lower FLOPs are more efficient and more suitable for industrial use.
We list the measured FLOPs of both structures in Table 1, from which we can infer that the computational load (FLOPs) of the classifier is much lighter than that of the Transformer. This is the basis of FastBERT's speed-up: although it adds extra classifiers, it achieves acceleration by removing far more computation from the Transformers.

4.2 Baseline and dataset

4.2.1 Baseline

In this section, we compare FastBERT against two baselines:

- **BERT**\(^1\) The 12-layer BERT-base model, pre-trained on the Wiki corpus and released by Google (Devlin et al., 2019).
- **DistilBERT**\(^2\) The best-known distillation of BERT, with 6 layers, released by Huggingface (Sanh et al., 2019). In addition, we use the same method to distill DistilBERT variants with 3 layers and 1 layer, respectively.

4.2.2 Dataset

To verify the effectiveness of FastBERT, especially in industrial scenarios, six Chinese and six English datasets close to actual applications are used. The six Chinese datasets include five sentence classification tasks (ChnSentiCorp, Book review (Qiu et al., 2018), Shopping review, Weibo, and THUCNews) and a sentence-matching task (LCQMC (Liu et al., 2018)). All the Chinese datasets are available from the FastBERT project. The six English datasets (Ag.News, Amz.F, DBpedia, Yahoo, Yelp.F, and Yelp.P) are sentence classification tasks released in (Zhang et al., 2015).

\(^1\)https://github.com/google-research/bert
\(^2\)https://github.com/huggingface/transformers/tree/master/examples/distillation

### 4.3 Performance comparison

To perform a fair comparison, BERT / DistilBERT / FastBERT all adopt the same configuration, as follows. In this paper, $L = 12$. The number of self-attention heads, the hidden dimension of the embedding vectors, and the maximum input sentence length are set to 12, 768, and 128, respectively. Both FastBERT and BERT use the pre-trained parameters provided by Google, while DistilBERT is pre-trained following (Sanh et al., 2019).
We fine-tune these models using the AdamW (Loshchilov and Hutter) algorithm, a $2 \times 10^{-5}$ learning rate, and a warmup proportion of 0.1. Then, we select the model with the best accuracy within 3 epochs. For the self-distillation of FastBERT, we increase the learning rate to $2 \times 10^{-4}$ and distill for 5 epochs. We evaluate the text inference capabilities of these models on the twelve datasets and report their accuracy (Acc.) and sample-averaged FLOPs under different Speed values. The results of the comparison are shown in Table 2, where the speedup is computed using BERT as the benchmark. It can be observed that with Speed = 0.1, FastBERT speeds up 2 to 5 times without losing accuracy on most datasets. If a little loss of accuracy is tolerated, FastBERT can be 7 to 11 times faster than BERT. Compared to DistilBERT, FastBERT trades less accuracy for higher efficiency. Figure 3 illustrates FastBERT's trade-off between accuracy and efficiency. The speedup ratio of FastBERT can be freely adjusted between 1 and 12, while the loss of accuracy remains small, which is a very attractive feature in industry. ![Figure 3: The trade-offs of FastBERT on twelve datasets (six in Chinese and six in English): (a) and (d) are Speed-Accuracy relations, showing how the Speed (the threshold on Uncertainty) affects accuracy; (b) and (e) are Speed-Speedup relations, indicating that the Speed manages the adaptability of FastBERT; (c) and (f) are the Speedup-Accuracy relations, i.e.
the trade-off between efficiency and accuracy.](image)

![Figure 4: The relation between classifier accuracy and average-case uncertainty: three classifiers at the bottom, in the middle, and on top of FastBERT were analyzed, and their accuracy within various uncertainty intervals was computed on the Book review dataset.](image)

4.4 LUHA hypothesis verification

As described in Section 3.3, the adaptive inference of FastBERT is based on the LUHA hypothesis, i.e., “the Lower the Uncertainty, the Higher the Accuracy”. Here, we verify this hypothesis using the Book review dataset. We intercept the classification results of Student-Classifier0, Student-Classifier5, and the Teacher-Classifier in FastBERT, then compute their accuracy within each uncertainty interval. As shown in Figure 4, the statistics confirm that each classifier follows the LUHA hypothesis, regardless of whether it sits at the bottom, in the middle, or on top of the model.
From Figure 4, it would be easy to conclude mistakenly that the Students perform better than the Teacher, since the accuracy of each Student in every uncertainty range is higher than that of the Teacher. This conclusion is refuted by analyzing Figure 6(a) together with Figure 4: for the Teacher, more samples fall in the low-uncertainty regions, while the Students' samples are nearly uniformly distributed. Therefore, the overall accuracy of the Teacher is still higher than that of the Students.

4.5 In-depth study

In this section, we conduct a set of in-depth analyses of FastBERT from three aspects: the distribution of exit layers, the distribution of sample uncertainty, and the convergence during self-distillation.

Figure 5: The distribution of executed layers on average in the Book review dataset, with experiments at three different Speeds (0.3, 0.5, and 0.8).

Figure 6: The distribution of Uncertainty at different layers of FastBERT on the Book review dataset: (a) the Speed is set to 0.0, which means that all samples pass through all twelve layers; (b)-(d) the Speed is set to 0.3, 0.5, and 0.8 respectively, and only the samples with Uncertainty higher than the Speed are sent to the next layer.

4.5.1 Layer distribution

In FastBERT, each sample walks through a different number of Transformer layers depending on its complexity, and fewer executed layers generally require less computation. As illustrated in Figure 5, we investigate the distribution of exit layers under different Speed constraints (0.3, 0.5, and 0.8) on the book review dataset. Taking Speed = 0.8 as an example, 61% of the samples complete inference at the first layer (Transformer0), which eliminates unnecessary computation in the remaining eleven layers.

4.5.2 Uncertainty distribution

The distribution of sample uncertainty predicted by different student classifiers varies, as illustrated in Figure 6.
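The exit-layer accounting described in Section 4.5.1 can be sketched as a simple simulation. The code below is a hypothetical illustration with synthetic per-layer classifier outputs, not the paper's implementation; `simulate_exits`, the normalized-entropy uncertainty, and the layer-count speedup estimate are our assumptions:

```python
import math

def uncertainty(probs):
    """Normalized entropy: 0 for a one-hot distribution, 1 for uniform."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return entropy / math.log(len(probs))

def simulate_exits(per_layer_probs, speed, num_layers=12):
    """per_layer_probs: for each sample, a list of per-layer output distributions.
    A sample exits at the first layer whose uncertainty falls below Speed.
    Returns (exit-layer histogram, speedup vs. always running all layers)."""
    histogram = [0] * num_layers
    total_layers = 0
    for layer_outputs in per_layer_probs:
        exit_layer = num_layers - 1  # default: run the full stack
        for i, probs in enumerate(layer_outputs):
            if uncertainty(probs) < speed:
                exit_layer = i
                break
        histogram[exit_layer] += 1
        total_layers += exit_layer + 1
    avg_layers = total_layers / len(per_layer_probs)
    return histogram, num_layers / avg_layers

# Two synthetic samples: one confident at layer 0, one never confident.
sample_a = [[0.98, 0.01, 0.01]] + [[0.9, 0.05, 0.05]] * 11   # exits immediately
sample_b = [[0.4, 0.35, 0.25]] * 12                          # runs all 12 layers
hist, speedup = simulate_exits([sample_a, sample_b], speed=0.3)
print(hist[0], hist[11], round(speedup, 2))  # -> 1 1 1.85
```

Note that the paper measures speedup in sample-averaged FLOPs; counting executed Transformer layers, as here, ignores the small additional cost of the student classifiers, so it only approximates the reported Speedup.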
Observing these distributions helps us understand how FastBERT adapts its inference to samples of varying difficulty.

From Table 3, we observe that: (1) at almost the same level of speedup, FastBERT without self-distillation or adaptation performs worse; (2) when the model is accelerated more than five times, downstream accuracy degrades dramatically without adaptation. It is safe to conclude that both adaptation and self-distillation play a key role in FastBERT, which achieves significant speedups with favorably small losses of accuracy.

5 Conclusion

In this paper, we propose a fast version of BERT, namely FastBERT. Specifically, FastBERT adopts a self-distillation mechanism during the training phase and an adaptive mechanism in the inference phase, achieving greater efficiency with less accuracy loss. Self-distillation and adaptive inference are first proposed in this paper. In addition, FastBERT has a very practical feature for industrial scenarios, i.e., its inference speed is tunable. Our experiments demonstrate promising results on twelve NLP datasets. Empirical results show that FastBERT can be 2 to 3 times faster than BERT without performance degradation. If we relax the tolerated loss in accuracy, the model is free to tune its speedup between 1x and 12x. Besides, FastBERT remains compatible with the parameter settings of other BERT-like models (e.g., BERT-WWM, ERNIE, and RoBERTa), which means these publicly available models can be readily loaded for FastBERT initialization.

6 Future work

These promising results point to future work on (1) linearizing the Speed-Speedup curve; (2) extending this approach to other pre-training architectures such as XLNet (Yang et al., 2019) and ELMo (Peters et al., 2018); (3) applying FastBERT to a wider range of NLP tasks, such as named entity recognition and machine translation.

Acknowledgments

This work is funded by the 2019 Tencent Rhino-Bird Elite Training Program.

References

Song Han, Jeff Pool, John Tran, and William Dally. 2015.
Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143.